[01:17] <slangasek> so I'm wondering if we should revert the whoopsie NM integration
[01:17] <infinity> slangasek: I've only been half-heartedly following the whoopsie drama.
[01:18] <infinity> slangasek: Is it pretty much past the point of "we should just effin' fix it" and deep down some "ngh, don't know how" rabbit hole?
[01:18] <infinity> slangasek: If so, reverting seems sane.
[01:18] <slangasek> it's clearly still not working right, as it's causing 20s boot speed regressions on some (but not all) systems
[01:18] <infinity> Oh, so it's basically mountall, part 2. :P
[01:19] <infinity> (I still get that /tmp thing, you need to step me through some debugging/logging sometime)
[01:19] <infinity> Of course, when we try to debug it, it won't happen, just to spite me.
[01:19] <slangasek> there's a workaround in place for ubiquity-dm, but that basically stops whoopsie from running at all in the live env, which isn't what we want either
[01:19] <infinity> Right.
[01:20] <slangasek> infinity: do you have a particularly full /tmp, which is a subdir of / rather than a separate mount point?
[01:20] <infinity> Is there a clear "this version was fine", and no rdep issues with a straight revert?  If so, let's just do it.
[01:20] <infinity> slangasek: My /tmp isn't a tmpfs, but my / isn't remotely full.
[01:20] <slangasek> infinity: I think you have the same mountall issue as xnox, which we've tracked down to "wrong message presented when the system is actually busy cleaning /tmp"
[01:20] <slangasek> full as in "full of stuff", not "out of disk"
[01:21] <infinity> slangasek: And it's not ever all that full.  Maybe a few unpacked trees from aborted debdiffs here and there.  But then it does the "waiting 30s" thing, which is excessive, if the cleaning only takes 5.
[01:21] <slangasek> infinity: there definitely is a "this version was fine" whoopsie - though given that there are other bug fixes intertwined, I'm inclined to revert just the libnm part
[01:21] <slangasek> waiting 30s?
[01:21] <slangasek> I don't know anything about that one
[01:21] <infinity> slangasek: If the feature revert is clear and obvious to you, go for it. :)
[01:22] <infinity> slangasek: As for mountall, yeah, it seems to just go into a timeout loop.  But this is all unscientific, I've not bootcharted or logged in any meaningful way, just watched it sit there for a $very_long_time that feels like an artificial delay.
[01:23] <slangasek> hmm
[01:23] <infinity> Cause if it was just waiting on an rm -rf, I'd expect it to flash the message for a few seconds, then carry on.
[01:23] <infinity> And it's much, much longer than that.
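[A rough timing sketch for the point infinity is making: if the "cleaning /tmp" message were only covering an `rm -rf` of a few unpacked trees, the delay should be a few seconds at most. The scratch directory below is a stand-in for a cluttered /tmp, not mountall's actual cleanup logic.]

```shell
# Populate a scratch tree roughly the size of a few aborted
# debdiff unpacks (hypothetical workload, ~1000 small files),
# then time its removal to compare against the ~30s wait.
scratch=$(mktemp -d)
for i in $(seq 1 5); do
  mkdir -p "$scratch/tree$i"
  for j in $(seq 1 200); do
    head -c 4096 /dev/zero > "$scratch/tree$i/file$j"
  done
done
start=$(date +%s)
rm -rf "$scratch"
end=$(date +%s)
echo "cleanup took $((end - start))s"
```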
[01:24] <slangasek> infinity: well, you could try building from lp:ubuntu/mountall and see if the message goes away; that's the fix for xnox's bug
[01:24] <slangasek> (committed, not yet uploaded)
[01:24] <infinity> Sure.  I'll have to do a few reboots here first and make sure it's still reproducible, and see how often.
[01:24] <infinity> So I have some data going into the test.
[01:24] <infinity> I reboot, like, once a month, so my data's a bit suspect.
[01:25] <infinity> (Uptime on laptops has become ridiculous since we actually started suspending and resuming properly...)
[01:28] <infinity> slangasek: Anyhow, sorry to sidetrack.  If the feature revert is clean, clear, and obvious, JFDI, IMO.  If not, a full version revert might be sane to at least have things not borked over the weekend, and it can be revisited on Monday.
[01:29] <slangasek> ack
[01:30] <slangasek> I'm having a quick scan over the whole diff to make sure it is a severable change
[01:33] <infinity> slangasek: Before I go deep into trying to sort out this mountall business, it wouldn't be weirdly confused by this bit in my fstab, would it?
[01:33] <infinity> schroot        /var/lib/schroot/union/overlay/            tmpfs   size=75%          0       0
[01:34] <infinity> Like, it doesn't seem tmpfses and just have a crazy?
[01:34] <slangasek> it doesn't just have a crazy
[01:34] <infinity> s/seem/see/
[01:34] <infinity> Check. :)
[01:34] <slangasek> it does automount them, and consider the "virtual filesystem" stage not done until they're all mounted
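[To illustrate slangasek's point: mountall classifies fstab entries by filesystem type, and a tmpfs line like infinity's schroot overlay counts toward the "virtual filesystem" stage. A minimal way to see which entries fall in that bucket is to filter fstab on the type field; the inline sample line here mirrors infinity's paste rather than reading the real /etc/fstab, and the classification shown (tmpfs only) is a simplification of what mountall's sources actually do.]

```shell
# Sample fstab line (copied from the discussion above, collapsed
# whitespace); field 3 is the filesystem type mountall keys on.
fstab='schroot /var/lib/schroot/union/overlay/ tmpfs size=75% 0 0'
# Print the mount points of tmpfs entries -- the ones mountall
# would automount and wait on before declaring virtual-fs done.
echo "$fstab" | awk '$3 == "tmpfs" { print $2 }'
```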
[09:11] <cjwatson> fixed cdimage.germinate, rebuilding the images that failed due to that
[09:24] <ogra_> heh
[09:24]  * ogra_ just discovered it 
[10:10] <cjwatson> ogra_: cdimage is up to nearly 3000 lines of Python now (and another >3000 of tests); it'd have been surprising if none of it was broken ...
[10:10] <cjwatson> I think I'm ready to tackle build-image-set next