=== lamont` is now known as lamont
=== dax is now known as daxcat
=== lordieva1er is now known as lordievader
=== lordievader is now known as Guest45991
[11:25] hi, I note the 14.04.3 iso is not at the old-releases url http://old-releases.ubuntu.com/releases/trusty/
[11:25] and is still at http://releases.ubuntu.com/14.04/
[11:26] could someone fix this url? thanks
[11:36] tai271828, thanks for the report ... i am sure someone will sort that out by-and-by
[11:38] apw, cool, many thanks!
=== Guest45991 is now known as lordievader
[12:11] could somebody bump the xubuntu ISO size limit up to 1.46GB (the capacity of an 8cm single-sided DVD, if anybody decides to use one)?
=== flocculant_ is now known as flocculant
[12:40] knome: do you have a source for the capacity, preferably one that gives it in bytes?
[12:45] * ogra_ sighs ... who is sitting on the arm64 builders today ... they seem to operate at half speed (rootfs builds that took 30min yesterday take 1h today)
[12:45] knome: hm, looks like the real specs are paywalled, I guess I'll go with 1.46 * 10^9
[12:47] ogra_: you got a bare-metal builder yesterday by chance. mostly the ones that are scalingstack instances are just fine and often faster, but perhaps livefs builds are a special case due to the amount of I/O involved
[12:47] ah, k
[12:48] I'm afraid we're likely to be decommissioning those bare-metal ones in the near future; we can't do sandboxing there and we're unlikely to ever be able to get more of them
[12:49] scalingstack is the future even if there are certain cases that go slower
[12:49] i know the build will fail, but was hesitating to hit the cancel button since that also takes 5 min to make it time out anyway and the build was already 30min in ...
[12:49] if it's having to time out, then something is wrong
[12:49] well, with the plans to do one image build per snappy commit, a build time of 1h+ might kind of kill that idea again
[12:50] maybe it's a bad idea :)
[12:50] but meh, it doesn't bother us from the build farm perspective
[12:50] heh, well, but from a queue perspective perhaps ... after 100 builds piled up there :)
[12:51] given 36 builders (once they're all consolidated), surely not
[12:51] yay, finally failed ...
[12:51] but of course it depends on how frequent commits are
[12:51] * ogra_ re-kicks
[12:52] a good strategy is often to try to queue a build on each commit but never have more than one build queued
[12:52] well, $mgmt asked for that, i suggested going for 1/day for now but keeping the 1/commit in mind for the future
[12:53] given that it will use the current versions of things at the point when the build starts anyway
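The "queue a build on each commit but never have more than one build queued" approach described above is essentially commit coalescing. A minimal sketch, assuming a hypothetical trigger_build() callable standing in for whatever actually kicks off a Launchpad livefs build; this is not the real infrastructure code:

    import threading

    class BuildCoalescer:
        """Start at most one build at a time; coalesce commits that arrive
        while a build is running into a single follow-up build."""

        def __init__(self, trigger_build):
            self.trigger_build = trigger_build  # hypothetical: starts one build
            self.lock = threading.Lock()
            self.building = False
            self.pending = False

        def on_commit(self):
            with self.lock:
                if self.building:
                    # A build is already running; remember that we owe one more.
                    # Any number of commits collapses into a single pending flag,
                    # since the next build picks up the current versions of
                    # everything at the point when it starts anyway.
                    self.pending = True
                    return
                self.building = True
            self.trigger_build()

        def on_build_finished(self):
            with self.lock:
                if not self.pending:
                    self.building = False
                    return
                self.pending = False
            self.trigger_build()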
[12:53] if we move to slower disks i guess we can drop the future target though
[12:54] (i found even the 20min we had before too long, though there is still a lot of overhead due to still producing tarballs in parallel to the snaps)
[12:54] I doubt that it's about slower disks, it's just that it's running on cloud instances and there'll be other stuff happening on the same compute nodes
[12:55] you'll probably find that variability is a bit higher
[12:55] measure across more than one build, separate into those that hit bos01-arm64-* and those that didn't, and take a longer-term average
[12:55] right, though for rootfs i mostly care about disk I/O
[12:55] no point worrying about what could be outliers
[12:55] true
[12:56] heh, now i'm on "magic", let's see the difference (before it was bos-*)
[12:57] auburn, beebe, magic, templar, twombly are bare-metal
[13:03] ogra_: will do some stats for you after this call
[13:03] wow, thanks
[13:04] * ogra_ holds his breath hoping there will be kernel snaps now ... 2min to go on amd64
[13:06] heh, and i386 was faster
[13:06] YAY !
[13:07] * ogra_ sees livecd.ubuntu-core.kernel.snap and livecd.ubuntu-core.os.snap at https://launchpad.net/~ubuntu-cdimage/+livefs/ubuntu/xenial/ubuntu-core-system-image/+build/54633
[13:08] now to wait for armhf which has special cases for raspi2
[13:09] bah
[13:09] + cp -a canonical-pc-linux_4.4.0-11-generic.efi.signed-20160309-13-06_amd64.snap livecd.ubuntu-core.kernel.snap
[13:09] i guess i need to fix that version string
[13:27] Test
[13:29] + ln -s vmlinuz-4.4.0-1003-raspi2 vmlinuz-4.4.0-11-generic vmlinuz
[13:29] ln: target 'vmlinuz' is not a directory
[13:29] GRRRR !
[13:31] * ogra_ fixes ...
[14:30] ogra_: so, at least if I consider only successful builds (on the basis that failures may well have had something arbitrarily weird going on), scalingstack is consistently quite a bit *quicker*, not slower
[14:30] Is there any way to get dailies from a few days ago? Maybe March 4, 5, 6 or 7?
[14:31] cjwatson, heh, well, that doesn't really go with what i'm seeing then ... at least for the two arm64 builds we talked about
[14:31] ogra_: this is a bit "baby's first matplotlib", but http://people.canonical.com/~cjwatson/tmp/arm64-stats.py -> http://people.canonical.com/~cjwatson/tmp/arm64-metal.png, http://people.canonical.com/~cjwatson/tmp/arm64-scalingstack.png
[14:31] ogra_: if you want to argue with the aggregated data, show me where that's wrong :-)
[14:31] ogra_: data > single anecdotes
[14:32] indeed
[14:32] (I may have made a mistake, I just don't see it)
[14:32] is that data for image/rootfs builds ? or just a general aggregation ?
[14:32] tgm4883: once it's expired off cdimage, it's gone unless somebody happens to have mirrored it
[14:32] ogra_: livefs build times
[14:32] ok
[14:32] ogra_: the script is there, you can see the methodology for yourself
[14:33] yeah (only looked at the pngs yet)
[14:33] cjwatson: that's what I figured :/ Trying to figure out why our installs are hanging on mysql now but everything looks the same between the two dailies I have (Mar 3rd and Mar 8th)
[14:33] requires python-{launchpadlib,matplotlib,pytimeparse}
[14:33] err, *everything being the stuff that is either mythbuntu or mysql packaging
[14:33] tgm4883: the build logs are kept essentially indefinitely though
[14:34] cjwatson: That sounds like something I can dig through. Where do those build logs live?
[14:34] tgm4883: http://people.canonical.com/~ubuntu-archive/cd-build-logs/
[14:34] there are links from each of those to the livefs build log as well where relevant
[14:35] cjwatson: cool thanks
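For context, a rough sketch of the kind of analysis a script like arm64-stats.py could perform, reconstructed only from what is said above (successful livefs builds only, split by the five named bare-metal builders versus scalingstack, durations parsed with pytimeparse). The attribute names (completed_builds, buildstate, duration, builder.name) are ordinary Launchpad API usage but this is a guess at the methodology, not the actual script, which lives at the people.canonical.com URL above:

    from launchpadlib.launchpad import Launchpad
    from pytimeparse import parse as parse_duration

    METAL = {'auburn', 'beebe', 'magic', 'templar', 'twombly'}

    lp = Launchpad.login_anonymously('livefs-stats', 'production', version='devel')
    # The livefs whose builds are being discussed, from the URL quoted above.
    livefs = lp.load('/~ubuntu-cdimage/+livefs/ubuntu/xenial/ubuntu-core-system-image')

    metal_times, cloud_times = [], []
    for build in livefs.completed_builds:
        # Consider only successful builds: failures may well have had
        # something arbitrarily weird going on, as noted above.
        if build.buildstate != 'Successfully built':
            continue
        seconds = parse_duration(build.duration)
        if seconds is None:
            continue
        (metal_times if build.builder.name in METAL else cloud_times).append(seconds)

    for label, times in (('bare metal', metal_times), ('scalingstack', cloud_times)):
        if times:
            print('%s: %d builds, mean %.0f s' % (label, len(times), sum(times) / len(times)))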
[14:36] ogra_: of course, if you're systematically cancelling "slow" builds, then that will have skewed the data in ways I can't do much about. but at any rate, the main trend line of successful builds on scalingstack is faster than any of the successful builds on metal
[14:36] cjwatson, i haven't cancelled one in 6 months or so ...
[14:37] last time i did that there was still a 5min timeout in place too
[14:37] (which often enough is longer than waiting for the build to fail if i know it will fail, so i just let it run usually)
[14:40] There's no five-minute timeout relevant to cancellation. The only thing I can think of that you might be talking about is a three-minute timeout given to the builder to finish cancelling for itself before buildd-manager gives up on it. That only applies if the builder somehow fails to terminate when told to, which would indicate that it's stuck at quite a low level.
[14:42] In general cancellation goes around sending kill -9 to everything in the chroot, and a timeout would only be hit if that is not successful at killing everything.
[14:42] weird, i remember you telling me that there would be a delay ... though that was probably about ctrl-C'ing the job on nusakan
[14:43] I think you're talking about something utterly different.
[14:43] (it was in the context of arm64 build failures not vanishing from the LP page)
[14:43] The above has been the case since I implemented proper cancellation support in launchpad-buildd in 2013.
[14:43] (which has long since been fixed)
[14:44] ogra_: That sounds like you're perhaps referring to bug 1424672, but that wasn't about a delay.
[14:44] bug 1424672 in Launchpad itself "LiveFS builds cancelled before they start sort above other builds in history" [Low,Fix released] https://launchpad.net/bugs/1424672
[14:44] well, if there is no delay then all is fine i guess :)
[14:45] i doubt it is worth doing forensics for my sieve memory :)
[14:45] yeah, it was in that context
[14:45] If you SIGINT the job on nusakan, then there will be a delay before everything gets cleaned up, but it's not a fixed timeout; it's simply that nothing arranges to stop the LP build, so it will run to completion unless separately cancelled.
[14:46] I find it worth explaining how things work rather than people learning myths from IRC logs :-)
[14:46] heh, indeed
[14:47] but yeah, i think it was about the nusakan side ...
[14:53] I might know what you mean. After a job finishes, cdimage waits for up to five minutes for Launchpad to fetch the build log from the builder. If the builder has died sufficiently hard then fetching the build log might fail, and in that case we would hit that timeout before cdimage gives up on the build. However, that won't be the case in a normal case of cancellation.
[14:53] A normal cancel still fetches the build log and cdimage will notice the next time it polls, which it does every 15 seconds.
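A simplified sketch of the polling behaviour described above: check every 15 seconds, and give up after five minutes if the build log never appears. fetch_build_log() is a hypothetical stand-in for whatever call cdimage makes to Launchpad; this is not the actual cdimage code:

    import time

    POLL_INTERVAL = 15    # seconds between polls
    LOG_TIMEOUT = 5 * 60  # give up after five minutes

    def wait_for_build_log(fetch_build_log):
        """Poll until the build log is available or the timeout expires."""
        deadline = time.time() + LOG_TIMEOUT
        while time.time() < deadline:
            log = fetch_build_log()
            if log is not None:
                # The normal case, including ordinary cancellation: the log
                # is fetched and we notice on the next poll.
                return log
            time.sleep(POLL_INTERVAL)
        # Only reached if the builder died hard enough that Launchpad never
        # managed to fetch the log from it.
        return None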
[14:54] depends what you call "normal" ... but yeah, back then i couldn't look at the logs immediately due to the arm64 builds eating up the LP page and the mirror only running via cron
[14:55] Which mirror are you talking about now?
[14:55] Perhaps you mean cd-build-logs, which is only mirrored hourly.
[14:55] livefs build logs to people.c.c
[14:55] right
[14:55] Actually, no, every 15 minutes, not hourly.
[14:55] i resorted back then to looking on nusakan directly
[14:56] We no longer mirror livefs build logs, and they never lived on nusakan in any case.
[14:56] but in that context the 5min thing kept sticking in my head :)
[14:56] Also, even back then you could have got at the logs via the API :-)
[14:57] yeah
[15:02] knome: done, belatedly
=== utlemmin` is now known as utlemming
=== daxcat is now known as ezri
[20:13] cjwatson, thanks!
[20:43] infinity: do you think lubuntu could get a separate lxqt image for y-cycle?
=== ezri is now known as daxcat
=== michi is now known as Guest78920
[23:42] stgraber, ^