[10:24] <frankban> hi there gmb: have you read the email from Gary?
[10:26] <gmb> Hi frankban. Yes, I have. Want to pair on it?
[10:26] <frankban> gmb: sure
[10:27] <gmb> frankban, Cool. I just need to finish doing the update/upgrade/reboot cycle and then I'll be ready. I'll ping you shortly.
[10:27] <frankban> ok gmb
[10:53] <gmb> frankban, I'm in https://plus.google.com/hangouts/extras/canonical.com/lxc-start-whut#
[10:57] <gmb> (export OTHER=gmb REMOTE=chinstrap.canonical.com; trap reset 0 1 2 3 15; stty raw -echo isig && ssh chinstrap.canonical.com "cat /tmp/$OTHER-term")
[11:07] <frankban> sudo mount -t overlayfs -oupperdir=$LXC_DIR/ephemeral-binding,lowerdir=$LXC_BIND none $LXC_DIR/rootfs$LXC_BIND
[11:35] <frankban> gmb:  sed -i '$ d' filename
[13:04]  * benji waits until after the call to apply updates... just in case
[13:08] <gary_poster> benji frankban gmb call in 2
[13:08] <gmb> ack
[13:43] <benji> I can see it now: all of our machines die and we have to resort to e-cannibalism.
[14:03] <gary_poster> heh
[14:15] <gary_poster> benji, I'm going to futz with bug 609986 while we are waiting on the ephemeral script changes.
[14:15] <_mup_> Bug #609986: layer setup failures don't output a failure message (they spew straight to console) <lp-foundations> <paralleltest> <Launchpad itself:Triaged> < https://launchpad.net/bugs/609986 >
[14:16] <gmb> frankban, I should be ready to pair again in a moment. Just need to reboot.
[14:16] <benji> gary_poster: do you want a spectator/heckler?
[14:16] <gary_poster> benji, sure :-)
[14:16] <gmb> benji, Incidentally, we used your term-sharing-via-chinstrap thing this morning; worked very well, though it was disconcerting to not be able to see what frankban was seeing.
[14:17] <gmb> (i.e. I knew what I was sending, but not how it was ending up)
[14:17] <benji> gmb: cool
[14:18] <benji> gmb: something Gary has done when we were debugging the hack is to use the hangout screen sharing to then share the window back with me so I can see how it's ending up on his side
[14:19] <gmb> benji, Yes, that thought occurred to me, but the hangout was pretty sluggish (probably because of my VM)
[14:19] <gary_poster> benji, I'm in https://talkgadget.google.com/hangouts/extras/canonical.com/goldenhorde
[14:19] <benji> gmb (and everyone): earlier this morning I made a slight change to the terminal sharing page that should make it a little more reliable
[14:19] <gary_poster> oh, cool
[14:20] <benji> gary_poster: I'll be there in a moment.  I need to run out to the car to get my headset.
[14:20] <gary_poster> cool
[14:26]  * gmb reboots
[14:46] <gmb> frankban, I'm finally back up and running, and I'm in https://plus.google.com/hangouts/extras/canonical.com/lxc-start-whut
[14:47] <frankban> gmb: ok
[14:55] <frankban> gmb: sudo lxc-create -t ubuntu -n lp -f /etc/lxc/local.conf -- -r lucid -a amd64 -b graham
[15:07] <gmb> frankban, lp:~gmb/+junk/lxc-start-ephemeral.scratch/
[15:18] <gmb> frankban, sed -i '/graham/d' /tmp/fstab
[15:20] <gmb> frankban, sed -i '/'$USER'/d' $LXC_DIR/fstab
[15:49] <gmb> frankban, https://pastebin.canonical.com/62032/
[16:05] <gmb> benji, We have questions about lxc-start-ephemeral that you might be able to help us with. Are you free?
[16:05] <benji> gmb: sure
[16:06] <benji> gmb: are you hanging out?
[16:06] <gmb> benji, Cool, we're in https://plus.google.com/hangouts/extras/canonical.com/lxc-start-whut
[16:32] <benji> gary_poster: I'm done with them and am starving so I think I'll do lunch now unless you're in desperate need of heckling.
[16:32] <gary_poster> benji, :-) no, s'ok
[16:40]  * gary_poster about to babysit
[17:04] <gmb> gary_poster: Are you free for a brief chat about lxc-start-ephemeral and all its joys?
[17:27]  * benji is back.
[17:45] <benji> gmb/frankban: did you guys get the second ephemeral issue figured out?
[17:45] <gmb> benji: yeah, it's a bug in lxc-wait.
[17:45] <benji> heh
[17:45] <gmb> Not our fault, and it's a problem we can hack around, probably.
[17:46] <gmb> Basically lxc-wait can only be used for one lxc instance at a time.
[17:46] <gmb> Because it always uses the same Unix socket.
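[Editor's note: since lxc-wait binds a single shared Unix socket and so can't serve two containers at once, one hack-around is to poll the container state yourself via `lxc-info`. This is a minimal sketch, assuming the older `lxc-info` output format (`state: RUNNING`); the function name `wait_for_state` is hypothetical, not part of the lxc-start-ephemeral script.]

```shell
# Hypothetical replacement for "lxc-wait -n NAME -s STATE": poll lxc-info
# instead of lxc-wait, so concurrent containers don't fight over one socket.
wait_for_state() {
    # $1 = container name, $2 = desired state (RUNNING/STOPPED),
    # $3 = timeout in seconds (default 30)
    local name=$1 state=$2 timeout=${3:-30} i current
    for i in $(seq "$timeout"); do
        # Parse the "state: XXX" line from lxc-info's output.
        current=$(lxc-info -n "$name" 2>/dev/null | awk '/^state/ {print $2}')
        [ "$current" = "$state" ] && return 0
        sleep 1
    done
    return 1
}
```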
[17:46] <benji> gmb: I guess you're about to EOD, in which case Gary and I can take over if you can transfer enough state to us.
[17:46] <gmb> benji: I've just emailed the list with a patch and details of the problem.
[17:46] <gmb> Everything else works.
[17:46] <benji> perfect
[17:47] <gmb> benji: There's only one place where lxc-wait is really an issue, and that's where it's used to block until the ephemeral instance stops.
[17:47] <gmb> lxc-start-ephemeral:204 in lp:~yellow/+junk/lxc-start-ephemeral
[17:48] <gmb> Because it just errors it means that the ephemeral instance starts and then (potentially) gets torn down again almost immediately.
[17:48] <gmb> Which I imagine would make testr a sad little panda.
[17:52] <gmb> Anyway, a relatively productive day.
[17:53] <gmb> (Said in much the same way as one would talk about a relatively productive cough)
[17:53] <gmb> (It's good that stuff's coming out, but no-one likes having to deal with it very much)
[17:59] <gary_poster> gmb, I am now, but I imagine this is your perfect time to escape, yeah?
[17:59] <gary_poster> heh
[18:00] <gary_poster> cool sounds good gmb & frankban.   Have a great weekend
[18:01] <frankban> you too gary_poster
[18:02] <benji> gary_poster: who shall pick up their banner?
[18:02] <gary_poster> benji, you definitely; me maybe.
[18:03] <benji> k
[18:03] <gary_poster> benji, I don't think the lxc-wait issue is a problem for our usage of lxc-start-ephemeral; do you agree?
[18:04] <gary_poster> benji, I will file a bug for this right now
[18:04] <benji> gary_poster: I don't /think/ so but I get the feeling that you have some deeper thought that I'm not following.
[18:06] <gary_poster> benji, the script only uses lxc-wait twice.  The second one is not an issue because it only comes into play if you do not pass a command to lxc-start-ephemeral.  We always do for the case of the parallel tests.
[18:07] <gary_poster> The first time we use lxc-wait, we are waiting on RUNNING, and as long as we proceed if that fails (as I am pretty sure we do) we will be ok because we just try to ssh in, and don't do anything until it is ready.
[18:07] <gary_poster> So neither instance is an issue for us AFAICT
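[Editor's note: the "just try to ssh in, and don't do anything until it is ready" approach above amounts to a retry loop. A minimal sketch, assuming a generic helper; the function name `retry` and the example arguments are illustrative, not taken from the script.]

```shell
# Hypothetical helper: retry a command until it succeeds or attempts run out.
retry() {
    # $1 = number of attempts; remaining args = command to try
    local attempts=$1; shift
    local i
    for i in $(seq "$attempts"); do
        "$@" && return 0   # success: stop retrying
        sleep 1            # wait a beat before the next attempt
    done
    return 1               # all attempts failed
}
# e.g. retry 30 ssh -o BatchMode=yes ubuntu@"$CONTAINER_IP" true
```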
[18:08] <benji> gary_poster: right
[18:10] <benji> gary_poster: if the g&f team fixed the overlayfs issue, then we should be ready to try running the tests again; right?
[18:10] <gary_poster> yes, benji.
[18:10] <benji> sweet
[18:42] <gary_poster> benji, call?  I would like to talk about the ephemeral changes
[18:43] <benji> gary_poster: sure, one second
[18:43] <gary_poster> cool, I'm in the horde
[19:33] <gary_poster> benji, I'm in https://talkgadget.google.com/hangouts/extras/canonical.com/goldenhorde but stepping away.  back in 5
[19:33] <benji> gary_poster: ok