[11:25] <tai271828> hi, I note 14.04.3 iso is not in old-releases url http://old-releases.ubuntu.com/releases/trusty/
[11:25] <tai271828> and still in http://releases.ubuntu.com/14.04/
[11:26] <tai271828> could someone fix this url link? thanks
[11:36] <apw> tai271828, thanks for the report ... i am sure someone will sort that out by-and-by
[11:38] <tai271828> apw, cool, many thanks!
[12:11] <knome> could somebody bump the xubuntu ISO size limit up to 1.46GB (the capacity of an 8cm single-sided DVD, if anybody decides to use one)?
[12:40] <cjwatson> knome: do you have a source for the capacity, preferably one that gives it in bytes?
[12:45]  * ogra_ sighs ... who is sitting on the arm64 builders today ... they seem to operate at half speed (rootfs builds that took 30min yesterday take 1h today)
[12:45] <cjwatson> knome: hm, looks like the real specs are paywalled, I guess I'll go with 1.46 * 10^9
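The limit cjwatson settles on is decimal, 1.46 * 10^9 bytes. A minimal sketch of how an image could be checked against it; `check_size` and the image path are hypothetical, not cdimage's actual mechanism, and it assumes GNU `stat`:

```shell
#!/bin/sh
# Decimal capacity of an 8cm single-sided DVD, per the discussion above.
LIMIT=1460000000   # 1.46 * 10^9 bytes

# check_size FILE -> 0 if FILE fits on the medium, 1 otherwise
check_size() {
    size=$(stat -c %s "$1")    # GNU stat: file size in bytes
    if [ "$size" -gt "$LIMIT" ]; then
        echo "oversized: $1 is $size bytes (limit $LIMIT)"
        return 1
    fi
    echo "ok: $1 is $size bytes"
}
```

e.g. `check_size xubuntu.iso` after a build, failing the run if the image has grown past the medium.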
[12:47] <cjwatson> ogra_: you got a bare-metal builder yesterday by chance.  mostly the ones that are scalingstack instances are just fine and often faster, but perhaps livefs builds are a special case due to the amount of I/O involved
[12:47] <ogra_> ah, k
[12:48] <cjwatson> I'm afraid we're likely to be decommissioning those bare-metal ones in the near future; we can't do sandboxing there and we're unlikely to ever be able to get more of them
[12:49] <cjwatson> scalingstack is the future even if there are certain cases that go slower
[12:49] <ogra_> i know the build will fail, but was hesitating to hit the cancel button since that also takes 5 min to make it time out anyway and the build was already 30min in ...
[12:49] <cjwatson> if it's having to time out, then something is wrong
[12:49] <ogra_> well, with the plans to do one image build per snappy commit a build time of 1h+ might kind of kill that idea again
[12:50] <cjwatson> maybe it's a bad idea :)
[12:50] <cjwatson> but meh, it doesn't bother us from the build farm perspective
[12:50] <ogra_> heh, well, but from a queue perspective perhaps ... after 100 builds piled up there :)
[12:51] <cjwatson> given 36 builders (once they're all consolidated), surely not
[12:51] <ogra_> yay, finally failed ...
[12:51] <cjwatson> but of course it depends on how frequent commits are
[12:51]  * ogra_ re-kicks
[12:52] <cjwatson> a good strategy is often to try to queue a build on each commit but never have more than one build queued
[12:52] <ogra_> well, $mgmt asked for that, i suggested to go for 1/day for now while keeping the 1/commit in mind for the future
[12:53] <cjwatson> given that it will use the current versions of things at the point when the build starts anyway
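cjwatson's suggested strategy (trigger on every commit, but never keep more than one build queued) can be sketched as a coalescing loop: requests set a marker, and a single locked worker drains markers one build at a time. `do_build` and the file paths are hypothetical stand-ins, not the real cdimage machinery; it assumes util-linux `flock`:

```shell
#!/bin/sh
# Coalesce build requests: many commits, at most one queued build.
PENDING=/tmp/build.pending
LOCK=/tmp/build.lock

do_build() { echo "building at $(date)"; }   # stand-in for a livefs build kick

request_build() {
    touch "$PENDING"                  # remember that a commit arrived
    (
        flock -n 9 || exit 0          # another invocation already runs the loop
        while [ -e "$PENDING" ]; do
            rm -f "$PENDING"          # claim all requests received so far
            do_build                  # commits landing during the build just
        done                          # re-create the marker; we loop once more
    ) 9>"$LOCK"
}
```

Calling `request_build` from a commit hook gives exactly the behaviour described: N commits during a build collapse into one follow-up build, and each build picks up the current versions of everything at the point it starts.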
[12:53] <ogra_> if we move to slower disks i guess we can drop the future target though
[12:54] <ogra_> (i found even the 20min we had before too long, though there is still a lot of overhead due to still producing tarballs in parallel to the snaps)
[12:54] <cjwatson> I doubt that it's about slower disks, it's just that it's running on cloud instances and there'll be other stuff happening on the same compute nodes
[12:55] <cjwatson> you'll probably find that variability is a bit higher
[12:55] <cjwatson> measure across more than one build, separate into those that hit bos01-arm64-* and those that didn't, and take a longer-term average
[12:55] <ogra_> right, though for rootfs i mostly care about disk I/O
[12:55] <cjwatson> no point worrying about what could be outliers
[12:55] <ogra_> true
[12:56] <ogra_> heh, now i'm on "magic", let's see the difference (before was bos-*)
[12:57] <cjwatson> auburn, beebe, magic, templar, twombly are bare-metal
[13:03] <cjwatson> ogra_: will do some stats for you after this call
[13:03] <ogra_> wow, thanks
[13:04]  * ogra_ holds his breath hoping there will be kernel snaps now ... 2min to go on amd64
[13:06] <ogra_> heh, and i386 was faster
[13:06] <ogra_> YAY !
[13:07]  * ogra_ sees livecd.ubuntu-core.kernel.snap and livecd.ubuntu-core.os.snap at https://launchpad.net/~ubuntu-cdimage/+livefs/ubuntu/xenial/ubuntu-core-system-image/+build/54633
[13:08] <ogra_> now to wait for armhf which has special cases for raspi2
[13:09] <ogra_> bah
[13:09] <ogra_> + cp -a canonical-pc-linux_4.4.0-11-generic.efi.signed-20160309-13-06_amd64.snap livecd.ubuntu-core.kernel.snap
[13:09] <ogra_> i guess i need to fix that version string
[13:29] <ogra_> + ln -s vmlinuz-4.4.0-1003-raspi2 vmlinuz-4.4.0-11-generic vmlinuz
[13:29] <ogra_> ln: target 'vmlinuz' is not a directory
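The `ln` error above is the documented multi-target form: with more than one TARGET argument, the final argument must be an existing directory. The fix is to link exactly one kernel per invocation. A sketch with the file names from the log recreated as empty placeholders:

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"
touch vmlinuz-4.4.0-1003-raspi2 vmlinuz-4.4.0-11-generic

# broken: two TARGETs, so 'vmlinuz' would have to be a directory -> ln errors out
ln -s vmlinuz-4.4.0-1003-raspi2 vmlinuz-4.4.0-11-generic vmlinuz 2>/dev/null || \
    echo "ln refused the multi-target form"

# fixed: pick the one kernel matching this build and link it alone
ln -s vmlinuz-4.4.0-1003-raspi2 vmlinuz
readlink vmlinuz
```

In the livecd script this means selecting the right kernel (raspi2 vs generic) before the `ln`, rather than passing both names at once.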
[13:29] <ogra_> GRRRR !
[13:31]  * ogra_ fixes ...
[14:30] <cjwatson> ogra_: so, at least if I consider only successful builds (on the basis that failures may well have had something arbitrarily weird going on), scalingstack is consistently quite a bit *quicker*, not slower
[14:30] <tgm4883> Is there any way to get dailies from a few days ago? Maybe March 4, 5, 6 or 7?
[14:31] <ogra_> cjwatson, heh, well, that doesnt really go with what i'm seeing then ... at least for the two arm64 builds we talked about
[14:31] <cjwatson> ogra_: this is a bit "baby's first matplotlib", but http://people.canonical.com/~cjwatson/tmp/arm64-stats.py -> http://people.canonical.com/~cjwatson/tmp/arm64-metal.png, http://people.canonical.com/~cjwatson/tmp/arm64-scalingstack.png
[14:31] <cjwatson> ogra_: if you want to argue with the aggregated data, show me where that's wrong :-)
[14:31] <cjwatson> ogra_: data > single anecdotes
[14:32] <ogra_> indeed
[14:32] <cjwatson> (I may have made a mistake, I just don't see it)
[14:32] <ogra_> is that data for image/rootfs builds ? or just a general aggregation ?
[14:32] <cjwatson> tgm4883: once it's expired off cdimage, it's gone unless somebody happens to have mirrored it
[14:32] <cjwatson> ogra_: livefs build times
[14:32] <ogra_> ok
[14:32] <cjwatson> ogra_: the script is there, you can see the methodology for yourself
[14:33] <ogra_> yeah (only looked at the pngs yet)
[14:33] <tgm4883> cjwatson: that's what I figured :/  Trying to figure out why our installs are hanging on mysql now but everything looks the same between the two dailies I have (Mar 3rd and Mar 8th)
[14:33] <cjwatson> requires python-{launchpadlib,matplotlib,pytimeparse}
[14:33] <tgm4883> err. *everything being the stuff that is either mythbuntu or mysql packaging
[14:33] <cjwatson> tgm4883: the build logs are kept essentially indefinitely though
[14:34] <tgm4883> cjwatson: That sounds like something I can dig through. Where do those build logs live?
[14:34] <cjwatson> tgm4883: http://people.canonical.com/~ubuntu-archive/cd-build-logs/
[14:34] <cjwatson> there are links from each of those to the livefs build log as well where relevant
[14:35] <tgm4883> cjwatson: cool thanks
[14:36] <cjwatson> ogra_: of course, if you're systematically cancelling "slow" builds, then that will have skewed the data in ways I can't do much about.  but at any rate, the main trend line of successful builds on scalingstack is faster than any of the successful builds on metal
[14:36] <ogra_> cjwatson, i havent canceled one in 6 months or so ...
[14:37] <ogra_> last time i did that there was still a 5min timeout in place too
[14:37] <ogra_> (which often enough is longer than waiting for the build to fail if i know it will fail, so i just let it run usually)
[14:40] <cjwatson> There's no five-minute timeout relevant to cancellation.  The only thing I can think of that you might be talking about is a three-minute timeout given to the builder to finish cancelling for itself before buildd-manager gives up on it.  That only applies if the builder somehow fails to terminate when told to, which would indicate that it's stuck at quite a low level.
[14:42] <cjwatson> In general cancellation goes around sending kill -9 to everything in the chroot, and a timeout would only be hit if that is not successful at killing everything.
[14:42] <ogra_> weird, i remember you telling me that there would be a delay ... though that was probably about ctrl-C'ing the job on nusakan
[14:43] <cjwatson> I think you're talking about something utterly different.
[14:43] <ogra_> (it was in the context of arm64 build failures not vanishing from the LP page)
[14:43] <cjwatson> The above has been the case since I implemented proper cancellation support in launchpad-buildd in 2013.
[14:43] <ogra_> (which has since long been fixed)
[14:44] <cjwatson> ogra_: That sounds like you're perhaps referring to bug 1424672, but that wasn't about a delay.
[14:44] <ogra_> well, if there is no delay then all is fine i guess :)
[14:45] <ogra_> i doubt it is worth doing forensics for my sieve memory :)
[14:45] <ogra_> yeah, it was in that context
[14:45] <cjwatson> If you SIGINT the job on nusakan, then there will be a delay before everything gets cleaned up, but it's not a fixed timeout; it's simply that nothing arranges to stop the LP build, so it will run to completion unless separately cancelled.
[14:46] <cjwatson> I find it worth explaining how things work rather than people learning myths from IRC logs :-)
[14:46] <ogra_> heh, indeed
[14:47] <ogra_> but yeah, i think it was about the nusakan side ...
[14:53] <cjwatson> I might know what you mean.  After a job finishes, cdimage waits for up to five minutes for Launchpad to fetch the build log from the builder.  If the builder has died sufficiently hard then fetching the build log might fail and in that case we would hit that timeout before cdimage gives up on the build.  However, that won't be the case in a normal case of cancellation.
[14:53] <cjwatson> A normal cancel still fetches the build log and cdimage will notice the next time it polls, which it does every 15 seconds.
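The pattern cjwatson describes (poll every 15 seconds, give up after five minutes) is a generic poll-with-deadline loop. A sketch with a made-up helper name, not cdimage's actual code:

```shell
#!/bin/sh
# poll_until DEADLINE INTERVAL CMD...
# Run CMD every INTERVAL seconds until it succeeds or DEADLINE seconds pass,
# mirroring the 15s poll / 5min give-up pattern described above.
poll_until() {
    deadline=$1; interval=$2; shift 2
    waited=0
    while [ "$waited" -lt "$deadline" ]; do
        "$@" && return 0            # condition met
        sleep "$interval"
        waited=$((waited + interval))
    done
    return 1                        # timed out
}
```

e.g. `poll_until 300 15 test -e build.log` waits up to five minutes, checking every 15 seconds, and hits the timeout only if the log never appears.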
[14:54] <ogra_> depends what you call "normal" ... but yeah, back then i couldnt look at the logs immediately due to the arm64 builds eating up the LP page and the mirror only running via cron
[14:55] <cjwatson> Which mirror are you talking about now?
[14:55] <cjwatson> Perhaps you mean cd-build-logs, which is only mirrored hourly.
[14:55] <ogra_> livefs build logs to people.c.c
[14:55] <ogra_> right
[14:55] <cjwatson> Actually, no, every 15 minutes, not hourly.
[14:55] <ogra_> i resorted back then to look on nusakan directly
[14:56] <cjwatson> We no longer mirror livefs build logs, and they never lived on nusakan in any case.
[14:56] <ogra_> but in that context the 5min thing kept sticking in my head :)
[14:56] <cjwatson> Also, even back then you could have got at the logs via the API :-)
[14:57] <ogra_> yeah
[15:02] <cjwatson> knome: done, belatedly
[20:13] <knome> cjwatson, thanks!
[20:43] <wxl> infinity: do you think lubuntu could get a separate lxqt image for y-cycle?
[23:42] <xnox> stgraber, ^