/srv/irclogs.ubuntu.com/2016/03/09/#ubuntu-release.txt

=== lamont` is now known as lamont
=== dax is now known as daxcat
=== lordieva1er is now known as lordievader
=== lordievader is now known as Guest45991
[11:25] <tai271828> hi, I note the 14.04.3 iso is not at the old-releases url http://old-releases.ubuntu.com/releases/trusty/
[11:25] <tai271828> and is still at http://releases.ubuntu.com/14.04/
[11:26] <tai271828> could someone fix this url link? thanks
[11:36] <apw> tai271828, thanks for the report ... i am sure someone will sort that out by-and-by
[11:38] <tai271828> apw, cool, many thanks!
=== Guest45991 is now known as lordievader
[12:11] <knome> could somebody bump the xubuntu ISO size limit up to 1.46GB (the capacity of an 8cm single-sided DVD, if anybody decides to use one)?
=== flocculant_ is now known as flocculant
[12:40] <cjwatson> knome: do you have a source for the capacity, preferably one that gives it in bytes?
[12:45] * ogra_ sighs ... who is sitting on the arm64 builders today ... they seem to operate at half speed (rootfs builds that took 30min yesterday take 1h today)
[12:45] <cjwatson> knome: hm, looks like the real specs are paywalled, I guess I'll go with 1.46 * 10^9
[12:47] <cjwatson> ogra_: you got a bare-metal builder yesterday by chance.  mostly the ones that are scalingstack instances are just fine and often faster, but perhaps livefs builds are a special case due to the amount of I/O involved
[12:47] <ogra_> ah, k
[12:48] <cjwatson> I'm afraid we're likely to be decommissioning those bare-metal ones in the near future; we can't do sandboxing there and we're unlikely to ever be able to get more of them
[12:49] <cjwatson> scalingstack is the future even if there are certain cases that go slower
[12:49] <ogra_> i know the build will fail, but was hesitating to hit the cancel button since that also takes 5 min to make it time out anyway and the build was already 30min in ...
[12:49] <cjwatson> if it's having to time out, then something is wrong
[12:49] <ogra_> well, with the plans to do one image build per snappy commit a build time of 1h+ might kind of kill that idea again
[12:50] <cjwatson> maybe it's a bad idea :)
[12:50] <cjwatson> but meh, it doesn't bother us from the build farm perspective
[12:50] <ogra_> heh, well, but from a queue perspective perhaps ... after 100 builds piled up there :)
[12:51] <cjwatson> given 36 builders (once they're all consolidated), surely not
[12:51] <ogra_> yay, finally failed ...
[12:51] <cjwatson> but of course it depends on how frequent commits are
[12:51] * ogra_ re-kicks
[12:52] <cjwatson> a good strategy is often to try to queue a build on each commit but never have more than one build queued
[12:52] <ogra_> well, $mgmt asked for that, i suggested to go for 1/day for now but with keeping the 1/commit in mind for the future
[12:53] <cjwatson> given that it will use the current versions of things at the point when the build starts anyway
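
A minimal sketch of the "build per commit, but never more than one queued" strategy cjwatson describes above; the class and method names are hypothetical, not taken from any actual cdimage or Launchpad code:

    import threading

    class BuildCoalescer:
        """Kick off a build per commit, but never keep more than one queued."""

        def __init__(self, start_build):
            self._start_build = start_build  # callable that requests one image build
            self._lock = threading.Lock()
            self._building = False
            self._dirty = False

        def on_commit(self):
            with self._lock:
                if self._building:
                    self._dirty = True       # remember that one follow-up build is needed
                    return
                self._building = True
            self._start_build()

        def on_build_finished(self):
            with self._lock:
                run_again, self._dirty = self._dirty, False
                self._building = run_again   # a single follow-up build absorbs all newer commits
            if run_again:
                self._start_build()

However many commits land while a build is running, the one queued follow-up build picks up the current versions of everything at the moment it starts, which is the point made in the line above.
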
[12:53] <ogra_> if we move to slower disks i guess we can drop the future target though
[12:54] <ogra_> (i found even the 20min we had before too long, though there is still a lot of overhead due to still producing tarballs in parallel to the snaps)
[12:54] <cjwatson> I doubt that it's about slower disks, it's just that it's running on cloud instances and there'll be other stuff happening on the same compute nodes
[12:55] <cjwatson> you'll probably find that variability is a bit higher
[12:55] <cjwatson> measure across more than one build, separate into those that hit bos01-arm64-* and those that didn't, and take a longer-term average
[12:55] <ogra_> right, though for rootfs i mostly care about disk I/O
[12:55] <cjwatson> no point worrying about what could be outliers
[12:55] <ogra_> true
[12:56] <ogra_> heh, now i'm on "magic", let's see the difference (before was bos-*)
[12:57] <cjwatson> auburn, beebe, magic, templar, twombly are bare-metal
[13:03] <cjwatson> ogra_: will do some stats for you after this call
[13:03] <ogra_> wow, thanks
[13:04] * ogra_ holds his breath hoping there will be kernel snaps now ... 2min to go on amd64
[13:06] <ogra_> heh, and i386 was faster
[13:06] <ogra_> YAY !
[13:07] * ogra_ sees livecd.ubuntu-core.kernel.snap and livecd.ubuntu-core.os.snap at https://launchpad.net/~ubuntu-cdimage/+livefs/ubuntu/xenial/ubuntu-core-system-image/+build/54633
[13:08] <ogra_> now to wait for armhf which has special cases for raspi2
[13:09] <ogra_> bah
[13:09] <ogra_> + cp -a canonical-pc-linux_4.4.0-11-generic.efi.signed-20160309-13-06_amd64.snap livecd.ubuntu-core.kernel.snap
[13:09] <ogra_> i guess i need to fix that version string
[13:27] <marlinc> Test
[13:29] <ogra_> + ln -s vmlinuz-4.4.0-1003-raspi2 vmlinuz-4.4.0-11-generic vmlinuz
[13:29] <ogra_> ln: target 'vmlinuz' is not a directory
[13:29] <ogra_> GRRRR !
[13:31] * ogra_ fixes ...
[14:30] <cjwatson> ogra_: so, at least if I consider only successful builds (on the basis that failures may well have had something arbitrarily weird going on), scalingstack is consistently quite a bit *quicker*, not slower
[14:30] <tgm4883> Is there any way to get dailies from a few days ago? Maybe March 4, 5, 6 or 7?
[14:31] <ogra_> cjwatson, heh, well, that doesn't really go with what i'm seeing then ... at least for the two arm64 builds we talked about
[14:31] <cjwatson> ogra_: this is a bit "baby's first matplotlib", but http://people.canonical.com/~cjwatson/tmp/arm64-stats.py -> http://people.canonical.com/~cjwatson/tmp/arm64-metal.png, http://people.canonical.com/~cjwatson/tmp/arm64-scalingstack.png
[14:31] <cjwatson> ogra_: if you want to argue with the aggregated data, show me where that's wrong :-)
[14:31] <cjwatson> ogra_: data > single anecdotes
[14:32] <ogra_> indeed
[14:32] <cjwatson> (I may have made a mistake, I just don't see it)
[14:32] <ogra_> is that data for image/rootfs builds ? or just a general aggregation ?
[14:32] <cjwatson> tgm4883: once it's expired off cdimage, it's gone unless somebody happens to have mirrored it
[14:32] <cjwatson> ogra_: livefs build times
[14:32] <ogra_> ok
[14:32] <cjwatson> ogra_: the script is there, you can see the methodology for yourself
[14:33] <ogra_> yeah (only looked at the pngs yet)
[14:33] <tgm4883> cjwatson: that's what I figured :/  Trying to figure out why our installs are hanging on mysql now but everything looks the same between the two dailies I have (Mar 3rd and Mar 8th)
[14:33] <cjwatson> requires python-{launchpadlib,matplotlib,pytimeparse}
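
A rough sketch of what such a stats script could look like; the attribute names (completed_builds, buildstate, duration, builder) are assumptions about the Launchpad livefs API rather than a copy of arm64-stats.py, the plotting step with matplotlib is omitted, and cjwatson's script at the URL above remains the authoritative version:

    # Sketch only -- livefs/build attribute names are assumed, not verified.
    from launchpadlib.launchpad import Launchpad
    import pytimeparse

    lp = Launchpad.login_anonymously('arm64-stats-sketch', 'production', version='devel')
    livefs = lp.load('https://api.launchpad.net/devel/'
                     '~ubuntu-cdimage/+livefs/ubuntu/xenial/ubuntu-core-system-image')

    metal, scalingstack = [], []
    for build in livefs.completed_builds:
        if build.buildstate != 'Successfully built':
            continue  # ignore failures; they may have had something arbitrarily weird going on
        seconds = pytimeparse.parse(str(build.duration))
        # bos01-arm64-* builders are scalingstack instances; auburn, beebe, magic,
        # templar and twombly are the bare-metal machines
        if build.builder.name.startswith('bos01-arm64-'):
            scalingstack.append(seconds)
        else:
            metal.append(seconds)

    for name, times in [('metal', metal), ('scalingstack', scalingstack)]:
        if times:
            print('%s: %d successful builds, average %.0f s'
                  % (name, len(times), sum(times) / len(times)))
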
[14:33] <tgm4883> err. *everything being the stuff that is either mythbuntu or mysql packaging
[14:33] <cjwatson> tgm4883: the build logs are kept essentially indefinitely though
[14:34] <tgm4883> cjwatson: That sounds like something I can dig through. Where do those build logs live?
[14:34] <cjwatson> tgm4883: http://people.canonical.com/~ubuntu-archive/cd-build-logs/
[14:34] <cjwatson> there are links from each of those to the livefs build log as well where relevant
[14:35] <tgm4883> cjwatson: cool thanks
[14:36] <cjwatson> ogra_: of course, if you're systematically cancelling "slow" builds, then that will have skewed the data in ways I can't do much about.  but at any rate, the main trend line of successful builds on scalingstack is faster than any of the successful builds on metal
[14:36] <ogra_> cjwatson, i haven't cancelled one in 6 months or so ...
[14:37] <ogra_> last time i did that there was still a 5min timeout in place too
[14:37] <ogra_> (which often enough is longer than waiting for the build to fail if i know it will fail, so i just let it run usually)
[14:40] <cjwatson> There's no five-minute timeout relevant to cancellation.  The only thing I can think of that you might be talking about is a three-minute timeout given to the builder to finish cancelling for itself before buildd-manager gives up on it.  That only applies if the builder somehow fails to terminate when told to, which would indicate that it's stuck at quite a low level.
[14:42] <cjwatson> In general cancellation goes around sending kill -9 to everything in the chroot, and a timeout would only be hit if that is not successful at killing everything.
[14:42] <ogra_> weird, i remember you telling me that there would be a delay ... though that was probably about ctrl-C'ing the job on nusakan
[14:43] <cjwatson> I think you're talking about something utterly different.
[14:43] <ogra_> (it was in the context of arm64 build failures not vanishing from the LP page)
[14:43] <cjwatson> The above has been the case since I implemented proper cancellation support in launchpad-buildd in 2013.
[14:43] <ogra_> (which has long since been fixed)
[14:44] <cjwatson> ogra_: That sounds like you're perhaps referring to bug 1424672, but that wasn't about a delay.
[14:44] <ubot5> bug 1424672 in Launchpad itself "LiveFS builds cancelled before they start sort above other builds in history" [Low,Fix released] https://launchpad.net/bugs/1424672
[14:44] <ogra_> well, if there is no delay then all is fine i guess :)
[14:45] <ogra_> i doubt it is worth doing forensics for my sieve memory :)
[14:45] <ogra_> yeah, it was in that context
[14:45] <cjwatson> If you SIGINT the job on nusakan, then there will be a delay before everything gets cleaned up, but it's not a fixed timeout; it's simply that nothing arranges to stop the LP build, so it will run to completion unless separately cancelled.
[14:46] <cjwatson> I find it worth explaining how things work rather than people learning myths from IRC logs :-)
[14:46] <ogra_> heh, indeed
[14:47] <ogra_> but yeah, i think it was about the nusakan side ...
[14:53] <cjwatson> I might know what you mean.  After a job finishes, cdimage waits for up to five minutes for Launchpad to fetch the build log from the builder.  If the builder has died sufficiently hard then fetching the build log might fail and in that case we would hit that timeout before cdimage gives up on the build.  However, that won't be the case in a normal case of cancellation.
[14:53] <cjwatson> A normal cancel still fetches the build log and cdimage will notice the next time it polls, which it does every 15 seconds.
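
As a small illustration of the polling pattern described above (a hypothetical sketch, not cdimage's actual code), assuming a launchpadlib build entry that exposes buildstate, lp_refresh() and build_log_url:

    import time

    POLL_INTERVAL = 15  # seconds, as described above
    FINAL_STATES = {'Successfully built', 'Failed to build', 'Cancelled build'}

    def wait_for_build(build):
        """Poll a Launchpad livefs build until it finishes, then return its log URL."""
        while build.buildstate not in FINAL_STATES:
            time.sleep(POLL_INTERVAL)
            build.lp_refresh()  # re-fetch the entry so buildstate is current
        return build.build_log_url
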
[14:54] <ogra_> depends what you call "normal" ... but yeah, back then i couldn't look at the logs immediately due to the arm64 builds eating up the LP page and the mirror only running via cron
[14:55] <cjwatson> Which mirror are you talking about now?
[14:55] <cjwatson> Perhaps you mean cd-build-logs, which is only mirrored hourly.
[14:55] <ogra_> livefs build logs to people.c.c
[14:55] <ogra_> right
[14:55] <cjwatson> Actually, no, every 15 minutes, not hourly.
[14:55] <ogra_> i resorted back then to looking on nusakan directly
[14:56] <cjwatson> We no longer mirror livefs build logs, and they never lived on nusakan in any case.
[14:56] <ogra_> but in that context the 5min thing kept sticking in my head :)
[14:56] <cjwatson> Also, even back then you could have got at the logs via the API :-)
[14:57] <ogra_> yeah
[15:02] <cjwatson> knome: done, belatedly
=== utlemmin` is now known as utlemming
=== daxcat is now known as ezri
[20:13] <knome> cjwatson, thanks!
[20:43] <wxl> infinity: do you think lubuntu could get a separate lxqt image for y-cycle?
=== ezri is now known as daxcat
=== michi is now known as Guest78920
[23:42] <xnox> stgraber, ^
