[00:51] <lazyPower> bradm: apologies. It was late when I lent a hand to that venture and had not looked at the bug tracker.
[00:52] <lazyPower> I'll leave a note to investigate first thing Tuesday morning. Just hopped on to see the ping :) Happy holiday o/
[04:50] <AskUbuntu> MaaS + JuJu on a single PC, MaaS as host for VM Nodes (LXC + KVM) | http://askubuntu.com/q/472230
[07:39] <cjohnston> wallyworld_: ping
[07:39] <wallyworld_> hi
[07:44] <wallyworld_> cjohnston: what can i do for you?
[07:46] <cjohnston> wallyworld_: could you please look at bug #1319947 and tell me what else you may need
[07:46] <_mup_> Bug #1319947: LXC local provider fails to provision precise instances from a trusty host - take 2 <juju-core:Confirmed> <https://launchpad.net/bugs/1319947>
[07:47] <wallyworld_> cjohnston: the cloud init log from the lxc container would be useful
[07:48] <cjohnston> wallyworld_: meaning like machine-1 or something?
[07:49] <wallyworld_> cjohnston: /var/log/cloud-init.log or similar
[07:49] <wallyworld_> from the container
[07:50] <cjohnston> wallyworld_: the containers are stuck in pending
[07:50] <wallyworld_> cjohnston: you should be able to ssh into them
[07:50] <cjohnston> wallyworld_: http://paste.ubuntu.com/7519544/
[07:51] <wallyworld_> cjohnston: pending normally means that the containers have come up but the juju processes inside haven't, and so the phone-home ping doesn't happen
[07:51] <wallyworld_> but you should be able to ssh in
[07:51] <cjohnston> juju ssh 1
[07:51] <cjohnston> ERROR machine "1" has no public address
[07:52] <wallyworld_> how about lxc-ls --fancy and trying to ssh into the ip address?
[07:53] <cjohnston> wallyworld_: no output
[07:53] <wallyworld_> does lxc-ls --fancy say the containers have started?
[07:53] <wallyworld_> oh, maybe not
[07:53] <wallyworld_> that implies perhaps that the precise image was not downloaded
[07:53] <wallyworld_> what are the contents of /var/lib/lxc?
[07:54] <cjohnston> jenkins  juju-precise-template  lpdev
[07:55] <wallyworld_> and juju-precise-template contains stuff like config, rootfs etc?
[07:56] <wallyworld_> is there any info in the lxc logs on the host to indicate why the container could not be started?
[07:58] <cjohnston> config  fstab  rootfs
[07:58] <wallyworld_> cjohnston: what version of lxc do you have installed?
[07:59] <wallyworld_> dpkg -s lxc ?
[07:59] <cjohnston> wallyworld_: would I be looking in something like lxc/lxc-monitord.log ?
[07:59] <cjohnston> Installed: 1.0.3-0ubuntu3
[07:59] <wallyworld_> maybe, not sure tbh
[07:59] <wallyworld_> that's the correct version i think
[08:00] <wallyworld_> cjohnston: is this your own machine or a cloud instance you are using?
[08:00] <cjohnston> wallyworld_: http://paste.ubuntu.com/7519597/
[08:00] <cjohnston> my desktop
[08:00] <wallyworld_> if it were a cloud instance we could perhaps ssh in to poke around
[08:01] <wallyworld_> you have juju-local installed?
[08:01] <cjohnston> juju-local:
[08:01] <cjohnston>   Installed: 1.18.3-0ubuntu1~14.04.1~juju1
[08:02] <wallyworld_> hmmm, i'm running out of ideas
[08:03] <wallyworld_> cjohnston: out of interest, does starting a trusty container work?
[08:04] <wallyworld_> ie set default-series: trusty in env config?
[08:04] <cjohnston> it did, I haven't tried it since 1.18.3
[08:07] <wallyworld_> hmm. so just precise
[08:08] <wallyworld_> cjohnston: what you can do is type in by hand the lxc commands that juju would use to start a precise container to see where any error might be
[08:09] <wallyworld_> juju just shells out and calls lxc-create etc
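For reference, a rough sketch of the sort of lxc 1.0 commands the local provider shells out to when creating a precise container. This is a hand-run approximation, not the provider's exact invocation; the container name is illustrative, and it needs a live host with the lxc tools installed:

```shell
# roughly what juju's local provider does for a precise container
# (container name is illustrative; requires lxc 1.0 and network access)
sudo lxc-create -t ubuntu-cloud -n precise-test -- -r precise   # build the precise rootfs
sudo lxc-start -n precise-test -d                               # start it detached
sudo lxc-wait -n precise-test -s RUNNING                        # block until it is running
sudo lxc-ls --fancy                                             # confirm state and IP
```

Running these by hand surfaces the template or cloud-image download error directly on the terminal instead of it being swallowed by the machine agent.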
[08:09] <cjohnston> ack.. I'll see if I can find time for that while I'm sprinting
[08:09] <cjohnston> coffee break.. ping me later if you get time.. thanks wallyworld_
[08:09] <wallyworld_> cjohnston: if you could, that would be great, since i'm out of obvious suggestions
[08:09] <wallyworld_> ok will do
[14:53] <sebas5384> hello charmers and juju'ers :)
[15:08] <sebas5384> awesome charm!! http://manage.jujucharms.com/~gnuoy/precise/content-fetcher
[15:16] <sarnold> wow looks neat :)
[15:17] <sebas5384> sarnold: yeah but doesn't have git clone :(
[15:18] <sarnold> sebas5384: it feels like something it'll grow eventually :)
[15:18] <sarnold> I was hoping to see something for .gpg or sha256 sum validation, but again, hopefully something it'll grow
[15:19] <sebas5384> yeah will see
[15:19] <mmelo> Hi guys , i need a little help.
[15:19] <mmelo> In my local environment, after I changed the dhcp.conf configuration to force specific IP addresses for the charms, juju doesn't refresh the new addresses to the charms.
[15:20] <sebas5384> mmelo: know nothing about dhcp :(
[15:20] <sarnold> mmelo: I'm no wizard here but that feels like one of the problems easiest solved with destroying and rebuilding the environment
[15:21] <mmelo> I lost my relation between wordpress and mysql, for example; I tried to recreate the relation but it doesn't work
[17:28] <hackedbellini> hi guys. So, some of my lxc changed ips, but the relation is still using the old one
[17:28] <hackedbellini> In the logs, I didn't see any call for 'db-relation-changed' after the ip changed. Is there a way that I can manually force it?
[17:28] <hackedbellini> note that I cannot destroy any unit/service/relation/environment as I'm on a production server
[17:48] <avoine> hackedbellini: I don't think Juju was designed for that. Can you trigger a relation-set from other hooks, like start, stop, upgrade, etc.?
[17:53] <hackedbellini> avoine: I tried triggering the upgrade hook by forcing it with the '--force' flag. The config file that contains the db ip is being overridden (I tried changing by hand but it changes when a hook is called)
[17:53] <hackedbellini> but for some reason, although juju knows the new ip of the db, it is not getting it right on the 'relation-get'. The strangest part comes now: I have 2 mediawiki installations (same charm but deployed with different aliases) connected to the same mysql unit. When the mysql unit changed ip, the change was propagated to one wiki, but not to the other
[17:54] <hackedbellini> I tried restarting juju-agent just now.... I saw in the logs that _a lot_ happened in the units. But the same is happening... one wiki has the new mysql ip and the other doesn't
[17:55] <sebas5384> are constraints on juju-local working?
[17:57] <avoine> hackedbellini: strange
[17:58] <avoine> hackedbellini: maybe this could help you: https://pypi.python.org/pypi/juju-dbinspect
[17:59] <hackedbellini> well, as someone just said to me, the mediawiki strange problem where one unit "changed" the ip and the other don't might have another reason. It seems a workmate here changed it by hand
[17:59] <hackedbellini> but well, for some other charms, the relation is still reporting the old ip
[17:59] <hackedbellini> avoine: thanks! I'll take a look
[18:09] <sebas5384> constraints with juju-local aren't working, or I'm doing something wrong hehe
[18:09] <sebas5384> juju set-constraints --service drupal mem=2G cpu-cores=2
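A quick sanity check for the constraints question above is to see whether they were at least recorded against the service (service name taken from the example; note that the local LXC provider of this era may simply not enforce mem/cpu-cores on containers, so recorded constraints having no visible effect would not be surprising):

```shell
# show the constraints stored for the service (what new units will request)
juju get-constraints drupal
# then inspect where the units actually landed
juju status drupal
```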
[18:16] <hackedbellini> avoine: I need the system administrator to install some packages for me so I'm able to use the tool you suggested
[18:16] <hackedbellini> in the meantime: I did some testing and I'm almost sure now. The lxc ip changed, but the "provider" of the db relation (mysql, postgresql, etc) is not calling 'relation-set' again
[18:16] <hackedbellini> On  the other charm, 'relation-get' is returning the old ip. So, if I made mysql/postgresql run relation-set again, it would probably make things right. Do you know how can I do that by hand?
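One possible way to re-publish the address by hand from the providing side is `juju run` (available in juju 1.18), which executes hook tools in a unit's hook context. This is a hedged sketch only: the relation name, relation id, settings key, and IP are all illustrative and must be checked against the real environment first:

```shell
# find the relation id on the mysql side (relation name 'db' is illustrative)
juju run --unit mysql/0 'relation-ids db'
# re-set the address on that relation; key and value here are illustrative,
# use whatever key the charm's relation-changed hook actually reads
juju run --unit mysql/0 'relation-set -r db:0 host=10.0.3.45'
# the consuming unit should then see a db-relation-changed hook fire
```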
[18:17] <avoine> hackedbellini: I think not, well not directly
[18:27] <avoine> hackedbellini: let me check, I think I have another way
[18:33] <hackedbellini> avoine: http://pastebin.ubuntu.com/7523369/
[18:33] <hackedbellini> this is what is happening when I try to open a shell on juju-db
[18:33] <hackedbellini> it tries to do a 'cat /var/lib/juju/agents/machine-0/agent.conf', but that file doesn't exist
[18:33] <hackedbellini> it exists actually, but not in that path
[18:33] <avoine> hackedbellini: that might be risky but you could check the relation cache under: /var/lib/juju/agents/unit-mediawiki-0/state/relations/
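Concretely, that check would look something like this on the mediawiki machine (unit name is illustrative; the layout of the cache files is whatever the unit agent wrote, so this only inspects, never edits):

```shell
# list the cached relations for the unit; each relation gets a numbered dir
sudo ls /var/lib/juju/agents/unit-mediawiki-0/state/relations/
# dump whatever settings the unit last saw from its remote counterparts
sudo grep -r . /var/lib/juju/agents/unit-mediawiki-0/state/relations/
```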
[18:34] <hackedbellini> the right path would be /var/lib/juju/.juju/local/agents/machine-0/agent.conf
[18:34] <avoine> ouch
[18:38] <avoine> that's a bug
[18:38] <hackedbellini> avoine: hrm, I can't find any unit-xxx directory
[18:38] <hackedbellini> I tried: find . -type d -name "*unit*" but it didn't find anything
[18:38] <hackedbellini> where more could this be?
[18:38] <avoine> hackedbellini: this is on your mediawiki machine
[18:38] <hackedbellini> I mean, that relations cache directory you said
[18:38] <hackedbellini> ahhh, ok
[18:40] <hackedbellini> avoine: found it. But this is the only information present there: http://pastebin.ubuntu.com/7523432/
[18:41] <avoine> hmm, not very useful
[18:44] <hackedbellini> avoine: yes :(
[18:44] <hackedbellini> I'm totally lost here
[18:54] <hackedbellini> avoine: is there anything I can do with the juju-db utility so I can connect to mongo?
[18:55] <avoine> hackedbellini: maybe by changing the wrong hard-coded path
[18:55] <avoine> but you won't be able to modify the database
[18:56] <avoine> only inspect it
[18:56] <avoine> so not very useful in your case
[18:56] <hackedbellini> avoine: hrmmm, didn't know that
[18:58] <avoine> I thought there was a cache somewhere that you could change but there is not
[19:00] <avoine> hackedbellini: you could comment here to ask for a raise in the priority of this: https://bugs.launchpad.net/juju-core/+bug/1256053
[19:04] <hackedbellini> avoine: very nice! I'll surely add myself to the "affected" people and see if I can make a useful comment there. Thank you!
[22:06] <AskUbuntu> Juju Ceilometer-Agent is not reporting about nova-compute | http://askubuntu.com/q/472646
[23:33] <bradm> lazyPower: no need to apologise, I was glad of the help :)  Plus, some of us didn't get the holiday, I'm in a slightly different timezone.