[00:18] <michael_tn> good day all :-)
[02:31] <_mup_> juju/purge-queued-hooks r459 committed by jim.baker@canonical.com
[02:31] <_mup_> Handle merging events in case of purge
[02:34] <jimbaker> still need to clean that up, but the logic seems sound. dinner!
[03:14] <_mup_> juju/purge-queued-hooks r460 committed by jim.baker@canonical.com
[03:14] <_mup_> Cleanup
[03:15] <_mup_> juju/purge-queued-hooks r461 committed by jim.baker@canonical.com
[03:15] <_mup_> Merged trunk
[13:15] <niemeyer> gary_poster: ping
[13:17] <gary_poster> niemeyer, on call, will pong in 10 or so
[13:24] <niemeyer> gary_poster: No worries.. it was mostly to warn you about this:
[13:24] <niemeyer> ----- lp:~yellow/charms/oneiric/buildbot-slave/trunk
[13:24] <niemeyer> error: charm publishing previously failed: symlink "hooks/helpers.py" links out of charm: "../../buildbot-master/hooks/helpers.py"
[13:24] <niemeyer> gary_poster: This works out of a bug only
[13:24] <gary_poster> niemeyer, yeah, we are aware, thanks.  We will fix before we declare it "done"
[13:25] <niemeyer> gary_poster: Super
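The fix gary_poster's team needs is to stop sharing `helpers.py` by symlink across charm boundaries. A minimal sketch of de-symlinking a charm: any `hooks/` symlink whose target resolves outside the charm directory gets replaced by a copy of its target, which is what the charm store's check requires. The function name and the example charm path are illustrative, not from the channel.

```shell
# Replace out-of-charm symlinks under hooks/ with real copies so the
# charm store's "links out of charm" check passes.
fix_charm_symlinks() {
    charm=$(readlink -f "$1")
    find "$charm/hooks" -type l | while read -r link; do
        target=$(readlink -f "$link")
        case "$target" in
            "$charm"/*) ;;                            # resolves inside the charm: fine
            *) rm "$link" && cp "$target" "$link" ;;  # outside: copy the file in
        esac
    done
}
# e.g. fix_charm_symlinks ./buildbot-slave
```

The cost is that the copy can drift from the master's version, which is presumably why the team deferred the fix until "done".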
[14:21] <jcastro> SpamapS: FYI the webinar is 45 minutes, and I'll heavily mention that people should watch the first one before they do this one, so we can just jump right into the meat.
[14:31] <frankban> hi everybody, I'm currently unable to bootstrap juju using the ec2 environment. I've found http://pastebin.ubuntu.com/841728/ on the zookeeper instance. It seems the problem may be us-east-1.ec2.archive.ubuntu.com returning a 403 error. Can you confirm my idea?
[16:28] <m_3> frankban: ec2 / us-east-1 is bootstrapping fine on this morning's oneiric packages (bzr454)... perhaps the problem was transient?
[16:29] <frankban> m_3: problem solved, thank you
[16:30] <m_3> np
[17:19] <hallyn> just curious, has anyone worked on a charm for diaspora pods?
[17:20] <hallyn> also, curious whether anyone is looking at bug 930430
[17:20] <_mup_> Bug #930430: lxc-ls requires root access after deploying an LXC instance <juju (Ubuntu):Confirmed> <lxc (Ubuntu):Confirmed> < https://launchpad.net/bugs/930430 >
[17:33] <m_3> hallyn: diaspora's on the list Bug 803538
[17:33] <_mup_> Bug #803538: Charm Needed: Diaspora <hot> <Juju Charms Collection:New> < https://launchpad.net/bugs/803538 >
[17:35] <m_3> it's a rocking one to get working though... great story for how individuals can put juju to good use... juju's not just for big stacks... etc etc
[19:27] <_mup_> Bug #932269 was filed: Juju should allow "service-destroyed" hooks <juju:New> < https://launchpad.net/bugs/932269 >
[20:03] <hallyn> m_3: awesome
[20:03] <hallyn> i don't trust other people's pods :)
[20:05] <m_3> hallyn: :)
[20:49] <bac> hi m_3, i am still having the LXC+SSH problems i mentioned yesterday but today am using a PPA built on r454.  any ideas on how to diagnose this problem?
[20:50] <m_3> bac: hey... hmmm
[20:51] <m_3> so let's back up a sec... what series are you running on your host machine?
[20:51] <bac> precise
[20:51] <m_3> ok, have you disabled lxcbr in /etc/default/lxc?
[20:52] <bac> nope.  first i've heard of that
[20:53] <m_3> ok, just want to make sure we don't have any conflicts... want only one dnsmasq running, bound to 192.168.122.1, on interface virbr0
[20:53] <bac> m_3, the problem i'm seeing is that the .ssh directory for the ubuntu user is not being created
[20:53] <m_3> bac: right... I'm backing up further... never seen that problem and suspect it's something more basic
[20:53] <bac> m_3, i *can* ssh to the unit, but my authorized_keys are not there, so it prompts me for a password
[20:54] <m_3> what address is it getting?
[20:54] <bac> m_3, see the end of my master-customize.log file at http://pastebin.ubuntu.com/842188/
[20:55] <bac> virbr0 is 192.168.122.1
[20:56] <m_3> the lxc instance is picking up a 122 address?
[20:57] <bac> yeah, 122.113
[20:59] <m_3> so the next thing to check out would be your environments.yaml file
[20:59] <m_3> make sure you've got a `juju-origin: ppa` line as part of your local environment (whatever you named it)
[20:59] <bac> m_3, so i should set USE_LXC_BRIDGE="false" ?
[20:59] <m_3> bac: I would, yes
[21:00] <bac> yes, environments.yaml is good.  this set up was working last week.
[21:00] <m_3> then later when you restart lxc (might have to reboot) that interface and corresponding dnsmasq will go away
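The edit m_3 is describing amounts to flipping one variable in `/etc/default/lxc`, assuming the oneiric/precise layout of that file. A sketch (the function takes the file path as an argument so it can be exercised on a copy; run it against the real file as root, then restart lxc or reboot so lxcbr0 and its dnsmasq go away):

```shell
# Disable the lxc-managed bridge so libvirt's dnsmasq on virbr0
# (192.168.122.1) is the only one running.
disable_lxc_bridge() {
    # $1: path to the lxc defaults file (normally /etc/default/lxc)
    sed -i 's/^USE_LXC_BRIDGE=.*/USE_LXC_BRIDGE="false"/' "$1"
}
# e.g. as root: disable_lxc_bridge /etc/default/lxc && reboot
```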
[21:00] <m_3> is the juju-origin in there?
[21:00] <m_3> (don't remember when that particular problem arose)
[21:01] <bac> yes, juju-origin: ppa
[21:01] <m_3> cool
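For reference, the local-environment stanza being checked here would look roughly like this; the environment name is whatever you chose, and only the `juju-origin: ppa` line is the one m_3 is asking about (other local-provider keys are elided):

```yaml
environments:
  local:                     # whatever you named it
    type: local
    juju-origin: ppa         # install juju from the PPA inside containers
    default-series: oneiric  # m_3 recommends staying on oneiric for now
    # ...plus your other local-provider settings
```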
[21:01] <m_3> ok, when was the last time you flushed your /var/cache/lxc?
[21:01] <m_3> BTW, does it have a 'precise' entry in there or is it still oneiric?
[21:01] <bac> yesterday, in response to this problem cropping up
[21:02] <bac> oneiric
[21:06] <m_3> bac: ok, so this is smelling a bit like a problem we'd be haunted with in lxc before
[21:06] <bac> m_3, just curious, did you look at the paste?
[21:07] <m_3> lxc instances sometimes couldn't mount filesystems (/proc is what I'm looking at here)
[21:07] <m_3> yeah, basing this guess on the paste
[21:07] <bac> ok
[21:07] <m_3> two possible things to try
[21:07] <m_3> 1.) wipe the cache again and see what happens
[21:07] <m_3> 2.) write your key into root's authorized_keys in the container
[21:08] <m_3> you should be able to write directly to /var/lib/lxc/.../rootfs/root/.ssh/authorized_keys
[21:08] <bac> oh, great.  didn't know that
[21:08] <m_3> I _think_ the running instance should pick that up
[21:08] <m_3> once you're in we can debug further
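Option 2 can be sketched as below. The rootfs path under `/var/lib/lxc` and the public-key filename are placeholders (the channel deliberately elides the container name); point them at your actual container and key.

```shell
# Copy a public key into a container's root authorized_keys so you can
# ssh in as root for debugging, writing straight through the rootfs.
inject_key() {
    rootfs=$1 pubkey=$2
    install -d -m 700 "$rootfs/root/.ssh"
    cat "$pubkey" >> "$rootfs/root/.ssh/authorized_keys"
    chmod 600 "$rootfs/root/.ssh/authorized_keys"
}
# e.g. inject_key /var/lib/lxc/CONTAINER/rootfs ~/.ssh/id_rsa.pub
```

As m_3 notes, a running instance should pick the file up on the next ssh attempt, since sshd reads authorized_keys per connection.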
[21:08] <bac> fwiw i've rebooted and the other dnsmasq is gone
[21:09] <m_3> cool
[21:09] <m_3> oh, if you've destroyed your environment, you might wanna flush the cache
[21:09] <bac> m_3, do you find it odd that ~ubuntu exists?
[21:09] <bac> will do
[21:09] <bac> i mean *before* setup_users is called
[21:10] <m_3> don't know... it's copying a cached image right?
[21:10] <m_3> it might be set up in the image (I totally haven't dug into when/how this gets created)
[21:10] <bac> dunno
[21:11] <m_3> so a similar problem surfaced every few weeks with the instances not being able to mount devpts and so we couldn't get into the machine
[21:11] <bac> m_3: should i change default-series to precise?  does that work now?
[21:11] <m_3> it's on my list to test today... haven't yet
[21:12] <m_3> I'd leave it at oneiric atm
[21:12] <bac> ok, i've changed to precise for the container, blew away /var/cache/lxc, and am trying.  this should take a while
[21:12] <bac> oops
[21:12] <m_3> ah, precise is fine
[21:12] <m_3> just multiple moving parts
[21:13] <bac> m_3, btw, i used the juju recipe to build a precise package into my PPA.  it had failed two days ago due to a networking glitch but worked today.  perhaps you should request a build into the ~juju ppa now.
[21:14] <m_3> bac: ha
[21:14] <m_3> that's what I was just going to ask you... how'd you add the ppa on precise
[21:16] <m_3> that's going to be a problem if we use default-series set to precise with juju-origin set to ppa
[21:16] <m_3> perhaps it'd be best to use oneiric/ppa to test
[21:18] <bac> m_3, ok i will.  unless you think it is hopeless i'd like to let this one continue coming up
[21:19] <m_3> sure... might give us some more info
[21:20] <m_3> but what'll happen is your local juju-454 will spin up precise lxc instances that get juju-447 (which is hard-coded to only use oneiric)
[21:20] <m_3> so we should expect it to at least be confused :)
[21:51] <_mup_> juju/purge-queued-hooks r462 committed by jim.baker@canonical.com
[21:51] <_mup_> Docstrings/comments/PEP8/PyFlakes
[21:54] <bac> m_3, just a heads up that i'm still here, waiting on the lxc instance
[21:55] <bac> m_3 and now it is up.  the problem is the same.
[21:56] <m_3> bac: ok, so manually inject your key and poke around
[21:57] <bac> m_3, well i am logged in via username/password.  is that sufficient?
[21:58] <bac> m_3, you mentioned the account possibly existing due to a cached copy.  where would that have come from given i blew away /var/cache/lxc?
[21:58] <m_3> bac: who are you logged in as?
[21:58] <bac> ubuntu
[21:58] <m_3> how'd you set a password?
[21:59] <bac> it is preset as ubuntu
[21:59] <m_3> ah, sorry
[21:59] <m_3> does the output of `mount` look normal?
[22:00] <bac> i'll paste it into a pm
[22:00] <m_3> looking for proc, devpts, etc
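The check m_3 asks for can be scripted; a small sketch that scans `/proc/mounts` for the filesystems whose absence caused the earlier lxc haunting (`/proc`, `devpts`):

```shell
# Inside the container: report whether proc and devpts are mounted.
# Missing mounts here explain broken logins and misbehaving tools.
check_mounts() {
    for fs in proc devpts; do
        if grep -qw "$fs" /proc/mounts; then
            echo "$fs: ok"
        else
            echo "$fs: MISSING"
        fi
    done
}
check_mounts
```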
[22:03] <m_3> bac: catching up with you on a test box
[22:03] <m_3> bac: so can you deploy a service?
[22:04] <bac> yes
[22:04] <bac> but i cannot debug it
[22:04] <jamesmitchell> while I am working on a charm, can I do something to push the update out to a running juju environment? At the moment I am doing a 'destroy-environment', then 'bootstrap' and 'deploy' again.
[22:04] <bac> jamesmitchell: did you try upgrade-charm?
[22:05] <m_3> so on oneiric, I can't 'juju ssh 0'... I get Permission denied (publickey)
[22:05] <m_3> but once a service is up, I can 'juju ssh bitlbee/0' no problem
[22:06] <m_3> bac: so you're trying to get into a deployed service once it's up and running right?
[22:06] <bac> m_3: yes
[22:06] <bac> m_3: but this is what kills me:
[22:06] <bac> + setup_users
[22:06] <bac> + id ubuntu
[22:06] <bac> uid=1000(ubuntu) gid=1000(ubuntu) groups=1000(ubuntu),999(admin),105(libvirtd)
[22:06] <bac> + '[' 0 == 0 ']'
[22:06] <bac> + return 0
[22:07] <bac> we don't know where that user got created
[22:08] <jamesmitchell> thanks. upgrade-charm is going to bump the revision as well?
[22:08] <bac> jamesmitchell: yes
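jamesmitchell's edit/test loop, sketched with pyjuju-era commands (charm name and repository path are examples; no bootstrap teardown needed):

```shell
# After editing the charm in place, push the new version to the running
# service; upgrade-charm bumps the revision for local repositories.
juju upgrade-charm --repository ~/charms mycharm
juju status mycharm   # watch the units pick up the new charm revision
```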
[22:08] <m_3> bac: yeah, I'm getting the opposite...
[22:09] <m_3> E: Sub-process /usr/bin/dpkg returned an error code (1)
[22:09] <m_3> + setup_users
[22:09] <m_3> + id ubuntu
[22:09] <m_3> id: ubuntu: No such user
[22:09] <m_3> + '[' 1 == 0 ']'
[22:09] <m_3> + adduser ubuntu --disabled-password --shell /bin/bash --gecos ''
[22:11] <m_3> bac: ok, so the base image or lxc templating
[22:11]  * m_3 digging thru /var/{lib,cache}/lxc
[22:15] <m_3> bac: what's `grep ubuntu /var/cache/lxc/precise/rootfs-amd64/etc/passwd`
[22:15] <m_3> bac: and /var/lib/lxc/bac-local-0-template/config show?
[22:16] <bac> m_3, odd -- i specified precise in environments.yaml but it created an oneiric cache.
[22:16] <m_3> grrrr
[22:16] <m_3> 454?
[22:17] <bac> ii  juju                   0.5+bzr454-1juju2~prec next generation service orchestration system
[22:17] <bac> m_3, not ubuntu in cached /etc/passwd
[22:17] <m_3> there were several places in the code where args were defaulted to oneiric... perhaps one was missed
[22:18] <m_3> bac: dunno man... I'm thinking the short story is... still broken
[22:19] <bac> boo
[22:19] <m_3> I triggered a ppa build
[22:19] <bac> oh, good
[22:20] <m_3> and I bugged clint to update the archive... although that might wait until we've sorted this out
[22:20] <m_3> oh, BTW, while it's up... `dpkg -l | grep juju` on the instance
[22:20] <m_3> 447?
[22:20] <m_3> or 453
[22:21] <bac> ii  juju                       0.5+bzr454-1juju2~oneiric1 next generation service orchestration system
[22:22] <m_3> wow!
[22:22] <m_3> oh, nevermind
[22:22] <m_3> it's oneiric
[22:22] <m_3> the precise ppa build just failed again
[22:23] <m_3> I'll test off of head and see where I get... I think I can specify something like 'juju-origin: lp:juju'
[22:24] <m_3> this is in the way of testing charms on precise so we'll need it worked out sooner rather than later
[22:25] <m_3> bac: thanks... sorry we didn't get anywhere.  I'd recommend oneiric running oneiric lxc if you just need to get stuff done atm
[22:30] <bac> m_3: thanks for your help
[22:30] <bac> sadly i don't have an oneiric machine around
[22:31] <m_3> lxc works inside of an openstack or ec2 instance just fine
[22:31] <bac> oh, right
[22:31] <m_3> works inside of a libvirt VM too... just watch network conflicts
[22:32] <m_3> I use ec2/large so I can put /var/lib/lxc on a decent-sized tmpfs
[22:38] <m_3> ok, precise ppa build is good now
[22:39]  * m_3 going to wait in line at the florist