[00:34] <smoser> waigani, around ?
[00:34] <waigani> smoser: yep
[00:34] <smoser> can you answer my query in canonical irc ?
[00:35] <smoser> waigani, ^
[00:36] <waigani> smoser: you mean the #canonical channel? I don't see your query, sorry?
[00:36] <smoser> canonical irc. (not freenode).
[09:06] <caribou> Is there a way to tell which series to use when deploying to the local provider with "--to=kvm:0" ?
[09:06] <caribou> I mean, my kvm instance is running trusty (like my host) but I'm deploying using precise charms
[09:42] <jam1> caribou: "juju deploy precise/charm --to kvm:0"
[09:42] <jam1> caribou: generally you define the series by the charm you are deploying
[09:43] <caribou> jam1: yeah, but the kvm instance still gets built on trusty
[09:44] <caribou> jam1: in the nova-compute charm, it does an lsb_release to get the distro & sees trusty, whereas the charm is for precise, so it fails
[09:57] <jam> caribou: I've honestly only heard of this bug: https://bugs.launchpad.net/juju-core/+bug/1302820
[09:57] <_mup_> Bug #1302820: juju deploy --to lxc:0 cs:trusty/ubuntu creates precise container <landscape> <juju-core:Fix Committed by thumper> <https://launchpad.net/bugs/1302820>
[09:57] <jam> which should be the opposite of what you are seeing
[09:58] <caribou> jam: indeed; well I hacked around it by providing trusty/nova-compute and it works
[09:58] <thumper> same bug
[09:59] <thumper> fix isn't in a release
[09:59] <jam> thumper: ah, that is, you're creating LXC with the same lsb-release as the host
[09:59] <jam> he is on trusty and wants precise
[09:59] <jam> original bug was on precise and wanted trusty
[09:59] <jam> gotcha
[10:00] <jam> thumper: should we be backporting the fix to 1.18.1 ?
[10:00] <thumper> jam: probably
[10:00] <caribou> thumper: I can live with my workaround, I was just curious since that --to kvm:0 is rather new stuff (to me at least)
[10:00]  * thumper nods
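A minimal sketch of the deploy jam1 describes above, using the mysql charm as a hypothetical stand-in; the series comes from the charm URL, and caribou's workaround is to request the trusty charm explicitly so it matches the container that actually gets built:

    juju deploy cs:precise/mysql --to kvm:0        # series is taken from the charm URL
    juju deploy cs:trusty/nova-compute --to kvm:0  # caribou's workaround: pin trusty to match the host-built container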
[10:04] <caribou> right now, I'm stuck on something else: I keep getting "Host '192.168.122.154' is not allowed to connect to this MySQL server" when I try to add a relation & both services are on the same machine
[10:04] <caribou> (mysql & keystone for now, but I get this with other services)
[10:08] <caribou> if mysql & keystone are on separate machines it works
[10:17] <thumper> caribou: you are using the local provider?
[10:17] <caribou> thumper: yes, but my colleague sees the same thing on a maas deployment
[10:17] <caribou> thumper: he's just opened https://bugs.launchpad.net/charms/+source/mysql/+bug/1305582
[10:17] <thumper> caribou: do you have the network-bridge in the local config set to 'virbr0' ?
[10:17] <_mup_> Bug #1305582: relation with mysql fail when mysql and glance are deployed on the same node <mysql (Juju Charms Collection):Confirmed> <https://launchpad.net/bugs/1305582>
[10:17] <caribou> thumper: yes
[10:18] <caribou> thumper: but both my machines are LXC containers, I only use kvm for the nova-compute charm
[10:18] <thumper> ok
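For reference, a sketch of the local-provider stanza thumper is asking about, as it would appear in ~/.juju/environments.yaml (juju 1.18-era keys; only the network-bridge setting comes from this conversation):

    local:
        type: local
        network-bridge: virbr0   # the non-default bridge thumper asks about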
[12:11] <strikov> Hi guys. I'm trying to use juju with my own openstack cloud. Bootstrap finished correctly, but when I try to deploy the mysql charm I get this: "ERROR error uploading charm: cannot upload charm to provider storage: cannot make Swift control container: failed to create container: juju-123456789" But swift is accessible, and bootstrap was already able to create this container and populate it with some files. What am I doing wrong? Thanks
[13:19] <jose> lazyPower: ping
[13:19] <lazyPower> jose: pong
[13:19] <jose> lazyPower: looks like I fixed it!
[13:19] <jose> well, kinda
[13:19] <lazyPower> wooo!
[13:19]  * lazyPower dances
[13:20] <jose> the only problem we have is it doesn't migrate apps
[13:20] <jose> like, people would have to re-enable them manually
[13:20] <jose> but let's say data is preserved, I think; the calendar app did preserve data
[13:29] <jose> lazyPower: ^
[13:29] <lazyPower> jose: ok. Did you update your MP yet?
[13:30] <lazyPower> s/yet//
[13:30] <lazyPower> I can pull it and take a look after hours today and see what the status is and look at how difficult the plugin migration would be
[13:32] <jose> ok, I'll update the MP asap
[13:32] <jose> the branch is updated
[14:07] <lazyPower> jose: no rush
[14:07] <lazyPower> i wont be able to get to it until after my workday
[14:53] <cory_fu> I'm getting an error that I don't understand at all when trying to run my amulet test: http://pastebin.ubuntu.com/7231112/ and my test, in case it's relevant: http://pastebin.ubuntu.com/7231117/
[14:54] <cory_fu> juju status just after shows only the machine and no indication of any units being created.  Unfortunately, since it's LXC, I can't get access to juju log at all
[14:56] <mbruzek> cory_fu, can you check ~/.juju/local/log/ for the logs
[14:56] <cory_fu> Well, apparently I can, and that's good to know
[14:56] <cory_fu> But it doesn't seem to have anything useful in it
[14:57] <mbruzek> cory_fu Can you pastebin all-machines.log?
[14:58] <cory_fu> mbruzek: http://pastebin.ubuntu.com/7231138/
[15:01] <mbruzek> OK cory_fu I don't see anything in that log.  It looks like a deployer or amulet bug to me.  Does anyone else have insight on the error http://pastebin.ubuntu.com/7231112/
[15:03] <mbruzek> cory_fu, You can enable higher-verbosity tracing on your juju environment by running a command after bootstrap.  The command only turns up debugging for one bootstrap/destroy-environment session, but it might help print out more details.
[15:03] <mbruzek> juju set-env 'logging-config=<root>=DEBUG;juju.provider=DEBUG'
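A sketch of the flow mbruzek describes, assuming the local provider; the set-env change lasts only for the current bootstrap/destroy-environment session:

    juju bootstrap
    juju set-env 'logging-config=<root>=DEBUG;juju.provider=DEBUG'
    juju debug-log    # or tail ~/.juju/local/log/all-machines.log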
[15:04] <marcoceppi> cory_fu: mbruzek https://bugs.launchpad.net/amulet/+bug/1293878
[15:04] <_mup_> Bug #1293878: Amulet should work with local charms that are not in version control <Amulet:Triaged by marcoceppi> <https://launchpad.net/bugs/1293878>
[15:05] <mbruzek> oh oops that is a bug I reported
[15:05] <cory_fu> Yep, that looks like it
[15:05] <cory_fu> Thanks
[15:05] <mbruzek> thanks marcoceppi
[15:05] <mbruzek> cory_fu, if I remember correctly the solution was to add the charm to bzr under my own namespace
[15:06] <marcoceppi> just being in bazaar will fix it; it doesn't need to be pushed or anything
[15:06] <mbruzek> well, a workaround.
[15:06] <cory_fu> I guess I have to move my bzr learning phase up to before making sure the test passes.  :-p
[15:06] <marcoceppi> deployer expects bazaar, so in the future amulet will transparently move non-versioned and otherwise-versioned charms to bzr
[15:07] <mbruzek> cory_fu if you have bzr questions send them to marc... me
[15:07] <cory_fu> :-)
[15:07] <mbruzek> cory_fu, It was really easy to add it to a personal branch
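A sketch of the workaround being described: the charm directory just needs to be a Bazaar working tree locally, and nothing has to be pushed (the path and commit message are hypothetical):

    cd ~/charms/precise/my-charm                 # hypothetical charm directory
    bzr whoami "Your Name <you@example.com>"     # required once before committing
    bzr init
    bzr add .
    bzr commit -m "import so deployer sees a bzr tree"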
[15:10] <cory_fu> Hrm.  Do we need to make and upload new public keys for launchpad because of heartbleed?
[15:10] <cory_fu> Oh, no, that shouldn't affect ssh
[15:11] <cory_fu> Hrm.  Then why did bzr start giving me a publickey error
[15:13] <marcoceppi> cory_fu: just now?
[15:13] <lazyPower> cory_fu: positive that the current identity is uploaded to launchpad?
[15:14] <cory_fu> Well, I swear it was working yesterday, but just now I got a publickey error
[15:15] <lazyPower> cory_fu: and it's your standard ~/.ssh/id_rsa, right?
[15:16] <cory_fu> Well, it's a different one but I have a Host launchpad.net section in my .ssh/config
[15:16] <lazyPower> Ok, and your config is owned by your user, with proper permissions?
[15:17] <cory_fu> Yep
[15:17] <lazyPower> Interesting. that should be fine
[15:17] <cory_fu> Hrm.  Maybe I should generate a new pair.  :-(
[15:18] <lazyPower> You can also try removing the pubkey and resetting it
[15:18] <lazyPower> i doubt that's it but worth a shot
[15:23] <cory_fu> Ah.  I needed the full bazaar.launchpad.net on the Host line
[15:23] <cory_fu> Could swear it worked yesterday.  Maybe because I hadn't done bzr whoami yet
[15:24] <cory_fu> Or some other config to indicate which account to use
[15:25]  * lazyPower thumbs up
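For the record, a sketch of the ~/.ssh/config stanza cory_fu landed on; the Host pattern must match the host bzr actually dials (bazaar.launchpad.net), and the username and key path here are hypothetical:

    Host bazaar.launchpad.net
        User your-launchpad-id            # hypothetical Launchpad username
        IdentityFile ~/.ssh/id_rsa_lp     # hypothetical non-default key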
[15:26] <cory_fu>     from fastimport.helpers import (
[15:26] <cory_fu> ImportError: cannot import name single_plural
[15:27] <cory_fu> o_O
[16:47] <cory_fu> Is there any workaround for getting "ImportError: cannot import name single_plural" from bzr when trying to use fastimport to import from git?
[16:48] <cory_fu> mbruzek: ?
[17:08] <lazyPower> sarnold: ping
[17:09] <sarnold> hey lazyPower :)
[17:10] <lazyPower> So, you're a random number in the ether until you 'officially' arrive, as predicted by your status update at 13:04:25 EST?
[17:11] <sarnold> lazyPower: maybe several confounded random functions -- if my pandaboard hangs, then I arrive -- if my pandaboard doesn't hang, then I'm always here :)
[17:11] <lazyPower> i need to 1:1 with you at some point about building an in-house low-power maas cluster since you've got more micro-board experience than I do
[17:12] <lazyPower> more along the lines of what to look at and what to stay away from
[17:12] <lazyPower> before i let that statement run away with context
[17:13] <sarnold> lazyPower: heh, i've just got the one pandaboard, and I think my experience with it makes it clear I should find something else entirely :)
[17:13] <lazyPower> haha
[17:13] <lazyPower> Fair enough
[17:13] <sarnold> lazyPower: back when our buildds were using pandaboards, someone from IS had to poke them just about daily to keep them building :(
[17:15] <ppetraki> hey do we have a charm that just deploys a baseline server? I swear we did
[17:16] <sarnold> ppetraki: was that 'ubuntu'?
[17:16] <lazyPower> ppetraki: its the ubuntu charm
[17:16] <lazyPower> deploys a no frills ubuntu server installation
[17:18] <ppetraki> sarnold, thanks
[17:22] <lazyPower> sarnold: also, last night i learned that you cannot colocate a juju-local installation on your maas region controller if you're using bridged ethernet devices without doing some serious voodoo in the juju config
[17:22] <lazyPower> it tanked networking on my server until i removed the juju-local package :P
[17:22] <sarnold> lazyPower: hah, yikes
[17:22] <lazyPower> yeah i'm like, poking around at really strange configurations
[17:22] <sarnold> lazyPower: somehow I'm not too surprised, the assumptions of all the different tools involved are pretty strong
[17:23] <lazyPower> i went from having my maas-master as a virtual machine to using bare metal as the region controller, controlling kvm instances.
[17:23] <lazyPower> this is all for a blog post about what configurations i found that work, for someone who wants to build an "all-encompassing juju lab"
[17:24] <sarnold> oh nice
[17:24] <sarnold> some more 'real world' stories of maas and juju use would be pretty cool
[17:24] <lazyPower> well, i've got 3 special interest groups from CMU that contacted me about running juju workshops
[17:25] <sarnold> it's either "hey look at this wordpress install" or "yes we have customers with a few hundred or thousands of machines doing this on private clouds and they seem to like it"...
[17:25] <sarnold> sweet!
[17:25] <lazyPower> I'll try to get them to publish their experiences to the list so we can reblog and promote their use. The biggest use case i see is a LUG on campus offering free VMs with openstack
[17:25] <lazyPower> they want to do the full maas + juju  + openstack path
[17:29] <sarnold> oooo
[17:29] <sarnold> man, when I was at school, we had one "linux lab" of machines that were castoffs from the windows labs but worked great for us..
[17:39] <ppetraki> can we support bundles from GitHub?
[17:39] <ppetraki> err do we?
[17:42] <lazyPower> sarnold: make ingest faster
[17:43] <sarnold> lazyPower?
[17:43] <lazyPower> i'm waiting for a bundle i just published to ingest so i can deploy it using deployer... make ingest run faster!
[17:43] <lazyPower> i know you're secretly the wizard behind all of this
[17:44]  * sarnold waves his hands meaningfully
[17:44] <lazyPower> wooo juju genie powers
[17:47] <Kupo24z> Having issues with Ceilometer, HTTP connection exception: [Errno 111] ECONNREFUSED, with a fresh charm install. Is there anything special that needs to be done to get keystone to authorize it?
[17:48] <jose> Kupo24z: have you exposed the service?
[17:48] <Kupo24z> jose: No, not yet
[17:48] <jose> Kupo24z: can you try exposing it?
[17:49] <Kupo24z> Same issue after exposing
[17:50] <lazyPower> Kupo24z: what environment are you running on?
[17:50] <lazyPower> and can you reach the instance with juju ssh <unit #>?
[17:51] <Kupo24z> Yes
[17:52] <Kupo24z> juju ssh ceilometer/0
[17:52] <timrc> So the jenkins charm on trusty listens on ipv6 only.  We can add "-Djava.net.preferIPv4Stack=true" to JAVA_ARGS and restart the server to get it to listen on ipv4, but is this the preferred way of doing this?
[17:52] <timrc> restart the service*
[17:52] <Kupo24z> this is on ubuntu 12.04 LTS with openstack-origin: cloud:precise-havana
[17:53] <lazyPower> timrc: ideally it should be a boolean option in the config and the charm should handle adding that option
[17:53] <timrc> lazyPower, I can add that.  Sounds like a good idea
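A sketch of the boolean-option approach lazyPower suggests, as a config-changed hook fragment; the option name is hypothetical, and /etc/default/jenkins is where the Ubuntu package keeps JAVA_ARGS:

    # "prefer-ipv4" is a hypothetical charm option; boolean rendering may vary by juju version
    if [ "$(config-get prefer-ipv4)" = "true" ]; then
        # naive append for illustration; a real hook would edit the file idempotently
        echo 'JAVA_ARGS="$JAVA_ARGS -Djava.net.preferIPv4Stack=true"' >> /etc/default/jenkins
        service jenkins restart
    fi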
[17:54] <lazyPower> Kupo24z: ok, so it really is just ceilometer. i'm not 100% familiar with the charm, but I would check the service to ensure it's listening on the public address
[17:54] <lazyPower> i'm willing to bet it's only exposed to the private address
[17:55] <lazyPower> actually
[17:55] <lazyPower> since you're ssh'd in, try to curl the address on the private ip vs the public ip
[17:55] <lazyPower> and see if it responds on one or the other
[17:55] <Kupo24z> this is the only thing (non-sshd) that it's listening on
[17:55] <Kupo24z> tcp        0      0 0.0.0.0:8777            0.0.0.0:*               LISTEN      8118/python
[17:56] <Kupo24z> PID 8118 is /usr/bin/python /usr/bin/ceilometer-api --log-dir=/var/log/ceilometer
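The check lazyPower suggests, run from inside the unit; the addresses below are hypothetical placeholders, and port 8777 comes from the netstat output above:

    juju ssh ceilometer/0
    curl -v http://10.0.3.15:8777/      # hypothetical private address
    curl -v http://203.0.113.7:8777/    # hypothetical public address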
[17:57] <lazyPower> Kupo24z: i've asked in #ubuntu-server, waiting on a response. I've not managed openstack outside of using the horizon dashboard
[20:03] <Kupo24z> Can someone assist with the ceph charm? are OSD devices physical disks or partitions to use for the ceph storage cluster?
[20:05] <lazyPower> Kupo24z: block storage devices.
[20:05] <lazyPower> eg: /dev/sda2
[20:05] <Kupo24z> Do they need to be unpartitioned disks, or will anything that is a traditional block device work?
[20:06] <lazyPower> Kupo24z: i'm fairly certain it runs a format for you if it does not report as having a filesystem
[20:07] <Kupo24z> and I'm assuming all ceph nodes need to have the same block devices available if spawning multiple nodes, since they all rely on the same config file
[20:21] <lazyPower> correct, unless you want to deploy named ceph nodes
[20:23] <mbruzek> Kupo24z, If you use a more recent source with the ceph charm you are able to use a directory as the storage device.
[20:25] <mbruzek> Kupo24z, I ran into a problem where the block devices were difficult to create, so I set the "source" configuration option to "cloud:precise-updates/havana" which gives you a more recent ceph
[20:27] <mbruzek> Kupo24z, With that version of ceph I was able to set another config option, 'osd-devices', to a non-existent directory "/srv/osd/" and ceph created a block device there.
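A sketch of the two configurations discussed, in juju 1.x 'juju set' syntax; the values come from the comments above, and the rest of the ceph charm's options are left out:

    juju set ceph osd-devices="/dev/sdb"   # a plain block device, per lazyPower
    # or, with the newer source mbruzek describes, a directory:
    juju set ceph source="cloud:precise-updates/havana" osd-devices="/srv/osd"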
[20:31] <Kupo24z> mbruzek: wouldn't that create additional overhead if you are going through the filesystem for ceph storage?
[20:32] <Kupo24z> eg partition -> ext4 -> ceph vs partition -> ceph
[20:32] <mbruzek> yes but I thought you were asking how to create devices.  If you already have devices then you can safely ignore my comments.
[20:38] <lazyPower> mbruzek: thats good to know!
[20:38] <lazyPower> you told me that last week, however i forgot. Writing that down.
[21:26] <seepa> Hello, I'm trying to deploy juju-gui on the local provider (all in an lxc), but when I expose the service, agent-state-info returns: '(error: error executing "lxc-clone": lxc_container: failed mounting /var/lib/lxc/juju-precise-template/rootfs onto /var/lib/lxc/juju-precise-template/rootfs; lxc_container: Error copying storage; clone failed)'
[21:27] <seepa> I'm running juju 1.18-0-trusty-amd64
[21:28] <rick_h_> seepa: you shouldn't need to expose things in the lxc environments. They don't support it
[21:29] <seepa> rick_h_: oh, I see.
[21:34] <seepa> rick_h_: The error occurs after juju deploy juju-gui.
[21:36] <rick_h_> seepa: oh hmm, yea looks like your error is with creating the lxc machine. I'm not sure on that end
[21:40] <seepa> how can juju even mount /var/lib/lxc/juju-precise-template/rootfs? /var/lib/lxc is owned by root rwx------ ...
[21:41] <seepa> well it can't, but tries to mount it
[21:57] <Kupo24z> Seems I cannot destroy a service if the relation is still open. I've got Ceph as 'life: dying', however it just hangs there, probably because of an existing relation
[21:57] <Kupo24z> however, when I try to remove the relation nothing happens. Is there a force option?
[22:24] <jose> Kupo24z: what's the state of the service?
[22:28] <Kupo24z> jose: http://pastebin.ubuntu.com/7232718/
[23:05] <lazyPower> Kupo24z: show me the full output of your status
[23:05] <lazyPower> if you haven't resolved it
[23:06] <Kupo24z> lazyPower: I just destroyed the environment and started over
[23:07] <lazyPower> Ok. If a dependent service that it's related to is in an error state
[23:07] <Kupo24z> Nothing was in error state
[23:07] <lazyPower> that error will need to be resolved before that service will continue being destroyed
[23:07] <Kupo24z> It wasn't removing a relation for some reason, no errors at all in juju status
[23:08] <lazyPower> Ok, if you run into it again ship me the full output listing from juju status and we can investigate from there
[23:08] <lazyPower> like, ping me. I'm half in half out tonight on IRC.
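For reference, a sketch of the teardown order under discussion; the service and relation names are hypothetical stand-ins:

    juju remove-relation ceph glance   # drop relations first
    juju destroy-service ceph          # the service stays 'dying' while relations remain
    juju status --format yaml          # the full output lazyPower asks for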
[23:08] <lazyPower> jose: did you get that branch pushed?
[23:38] <jose> lazyPower: yep, and MP updated
[23:38] <lazyPower> jose: awesome. I'll take a look after i eat dinner.
[23:38] <jose> enjoy!
[23:38] <lazyPower> thanks for the quick turnaround and effort on that
[23:38] <jose> I hope it's something that actually works and follows the charm store policy
[23:39] <lazyPower> If it needs doctoring I'll be happy to doctor it up and submit a MP to your branch
[23:39] <lazyPower> then we'll poke matt or marco to take a look as confirmation
[23:39] <jose> awesome then :)