[05:34] <pitti> bdmurray: we could surely; but can't we search for HookError_* attachments?
[05:35] <pitti> sarnold: great to hear!
[05:36] <sarnold> good morning pitti :)
[05:37] <sarnold> pitti: do you mean to find bugs like 1582992 ?
[05:38] <pitti> yes
[05:38] <sarnold> does that mean I get to skip reporting them when I find them? :)
[05:39] <pitti> depends, if we want to track SRUs of broken hooks, or not assume that everyone looks for this tag, we should probably continue to have proper bugs for them
[05:40] <sarnold> that makes sense
[05:56] <karstensrage> it takes me about 20 minutes to test out my modules, they kind of go together so you can test them both at the same time
[05:56] <karstensrage> so round to 1 hour, could i offer 4 hours of my time testing another backport request or w/e in trade for getting my backport requests looked at?
[05:57] <karstensrage> michagogo if you are micahg would that be something you would consider?
[06:00] <karstensrage> the four hours is just something to consider; nothing like tomcat or java, but something in the same ballpark as my modules in terms of effort and cognitive load, and ill offer to do two or something
[06:01] <sarnold> karstensrage: btw, what does the backports offer that e.g. a ppa doesn't? either method requires administrator effort to install, right?
[06:01] <karstensrage> yes
[06:02] <karstensrage> either method should be done by an administrator
[06:02] <estan> hi, sorry if this is obvious, but what's the policy for updating packages in 16.04? will only security fixes go in? because 16.04 was released, quite unfortunately, with Octave 4.0.0, which has several bugs. 4.0.2 (which is now in Debian stretch / sid) is much better. any chance of it being updated?
[06:03] <sarnold> estan: you can SRU updates, it takes some paperwork and testing, but ought to be tolerable: https://wiki.ubuntu.com/StableReleaseUpdates
[06:11] <estan> sarnold: thanks! we're basing our product on 16.04, and compiling Octave is such a chore.
[06:13] <cpaelzer> good morning
[06:15] <michagogo> karstensrage: eh? Don't know who that is
[07:13] <estan> sarnold: i'm a bit new to this, do you know if the procedure for requesting SRUs is special in any way when the updated package is in stretch?
[07:16] <estan> sarnold: should i just refer to the debian package and suggest they pull it, with a motivation why the bug is grave? (in this case, it's a regression with the risk of data loss).
[07:17] <sarnold> estan: a sync from debian may indeed be useful.. the 4.0.1 changelog was -huge-.. I didn't spot a 4.0.2 changelog, is it similar?
[07:17] <estan> but i realize now that not all changes in octave 4.0.1 and 4.0.2 are okay for SRU, so maybe just that fix should be included?
[07:17] <estan> hold on, i'll have a look.
[07:18] <sarnold> back in the day there used to be a "MicroReleaseException" process that packages could apply for...
[07:18] <sarnold> it's been mostly extended to anything with high-quality release processes... I hope that includes octave, but it may not.
[07:19] <estan> yea. i'm not sure. i would hope it does :)
[07:19] <RAOF> High quality release process or significant in-archive testing.
[07:19] <RAOF> (ie: If you've got an excellent set of DEP-8 tests you may also win)
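For reference, the DEP-8 hook-up RAOF mentions lives in debian/tests/control; a minimal sketch (hypothetical test name, assuming the package in question is octave):

```
Tests: smoke
Depends: octave
```

The debian/tests/smoke script just has to exit non-zero on failure, e.g. by running octave --eval on a known-good expression.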
[07:22] <estan> hm okay.
[07:22] <estan> the changelog for 4.0.2 was smaller (had to look in the archive for it: ftp://ftp.gnu.org/gnu/octave/octave-4.0.2.tar.gz).
[07:24] <estan> but considering the many changes, perhaps it's better to request the 4.0.0 package be patched with the fix i'm thinking of? (it's a bug in the reading of HDF5 format using Octave's open(..): http://savannah.gnu.org/bugs/?45225).
[07:25] <estan> err, load()/save().
[07:26] <sarnold> that would probably be easier
[07:26] <sarnold> but that changelog.. heh
[07:26] <estan> yea.. i don't feel like doing an SRU for all that now :)
[07:27] <estan> this is the fix: http://hg.savannah.gnu.org/hgweb/octave/rev/d54aa96abadf , and they added a test case for it.
[07:28] <estan> i'll try to make a patched package then (never done it before). in the middle of a dist upgrade atm, so it'll have to be a little later.
[07:28] <sarnold> yeah that patch looks way easier to sru :)
[07:32] <estan> the existing 4.0.0 package seems to do "make check" as part of the package building, and the tests seem fairly thorough, so that's good.
[08:27] <farblue> Hi all :) I think I know the reason for the problems I’m having with my fan network setup and it’s the SNAT rule added to iptables by fanctl. I need the ‘to:’ to be a different IP address. does anyone know a sensible way to configure this?
[08:32] <TJ-> cyphermox: could you tell me if bug 1582899 (live-installer / expert mode) is one of your areas of interest still?
[09:17] <estan> hm, could someone help me with the bzr command to get the xenial-proposed of the octave package? i'm trying to make my first patch.
[09:21] <caribou_> estan: I don't see any octave in xenial-proposed but there is one in yakkety-proposed
[09:21] <caribou_> estan: you can get it with pull-lp-source octave yakkety (from the ubuntu-dev-tools package)
[09:22] <pitti> estan: there are no more automatic (UDD) bzr branches for xenial, just use apt-get source or pull-lp-source ^
[09:25] <estan> caribou_ pitti: alright. my end goal is to create a bug report + SRU for http://savannah.gnu.org/bugs/?45225#attached , it's just that i've never done any Debian packaging, and never used bzr, so i'm a little lost :p
[09:26] <estan> i started with the Getting Set Up guide to get a launchpad account, GPG key etc, and now working my way through http://packaging.ubuntu.com/html/fixing-a-bug.html
[09:30] <caribou_> estan: Traditional packaging might be more helpful in your case : http://packaging.ubuntu.com/html/traditional-packaging.html
[09:37] <FourDollars> happyaron: Could you help to sponsor my patch for https://bugs.launchpad.net/ubuntu/+source/network-manager/+bug/1582301?
[10:19] <cking> is there any reason why libjson0-dev is not in yakkety?
[10:23] <cjwatson> cking: replaced by libjson-c-dev
[10:23] <cking> cjwatson, ok, thanks
[10:23] <cjwatson> cking: (it was a transitional package in xenial)
[10:23] <cking> cjwatson, ah, I should have spotted that
[10:25] <TJ-> cjwatson: do you have anything to do with live-installer package any more?
[10:25] <cjwatson> TJ-: no
[10:25] <TJ-> cjwatson: thought so but worth the ask :)
[10:27] <TJ-> cjwatson: 2nd (and last!) Q: Do you know if with the ubuntu-server ISO installer, it selects between base-installer and live-installer depending on whether boot-time 'expert mode' is selected?
[10:30] <cjwatson> TJ-: err I don't think so, IIRC that's purely about how the image was built (i.e. it uses live-installer if it's present); if you need to disable live-installer then you can preseed live-installer/enable=false
[10:31] <cjwatson> TJ-: but the ISO image won't have the debs required for base-installer to work so that probably isn't very useful :)
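cjwatson's preseed hint, spelled out: at the boot prompt it would be live-installer/enable=false on the kernel command line, or in a preseed file something like this illustrative fragment:

```
d-i live-installer/enable boolean false
```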
[10:31] <TJ-> cjwatson: OK, the reason for asking is a user reported a weird bug in expert mode that boils down to live-installer not mounting devtmpfs in the /target/ before doing update-initramfs, resulting in failing to locate the device for /
[10:32] <TJ-> cjwatson: I noticed the ISO's pool/ does have base-installer but I failed to understand if it was used in any circumstance.
[10:32] <cjwatson> TJ-: base-installer has common code used by both the bootstrap-base and live-installer frontends
[10:32] <TJ-> cjwatson: the bug seems to be that the live-installer postinst script has the crucial "waypoint 1 setup_dev" commented out
[10:33] <cjwatson> TJ-: so it's actually about switching between bootstrap-base and live-installer, not switching between base-installer and live-installer
[10:33] <TJ-> cjwatson: I'm failing to understand why it only fails in expert mode
[10:33] <cjwatson> pass
[10:33] <TJ-> cjwatson: ahhh, bootstrap-base, yes, that was the code I was reading originally before I spotted live-installer running
[10:35] <TJ-> I'll bug cyphermox about it when he's around :)
[11:07] <happyaron> FourDollars: is it accepted by upstream? if so no problem
[11:17] <FourDollars> happyaron: It is not accepted by the upstream yet, but seb128 asks me to work with you.
[11:18] <seb128> FourDollars, happyaron, it was discussed on the upstream list, but yeah we should maybe wait for them to review/ack to backport/upload
[13:45] <rharper> Hi, I've a debdiff that fixes 3 bugs I've filed against bridge-utils package;  should I attach the same debdiff against each bug? Or create separate patches/debdiffs for each one individually?
[13:49] <cyphermox> rharper: you can attach it to just one, I'll review it now?
[13:49] <rharper> cyphermox: sure
[13:51] <rharper> cyphermox: https://bugs.launchpad.net/ubuntu/+source/bridge-utils/+bug/1576858
[13:52] <rharper> the other two are: https://bugs.launchpad.net/ubuntu/+source/bridge-utils/+bug/1576876 and https://bugs.launchpad.net/ubuntu/+source/bridge-utils/+bug/1576870
[13:54] <TJ-> cyphermox: could you at some point look at bug 1582899  - not sure if you're still covering the package
[14:01] <pitti> cjwatson: hey Colin, how are you?
[14:01] <bdmurray> pitti: Easily search LP for bugs with an attachment name? Not as far as I know.
[14:02] <pitti> cjwatson: do you still remember how http://people.canonical.com/~ubuntu-archive/proposed-migration/update_output.txt is produced?
[14:02] <pitti> cjwatson: I find a reference in britney1-ubuntu run_b1(), which calls ./update_out.py; but neither run-proposed-migration nor britney1-ubuntu actually seem to call that
[14:02] <sinzui>  rbasak: is there anything I need to do regarding the backport of juju 1.25.5 to wily and trusty per bug 1556981 . Do I poke someone on the SRU team?
[14:03] <pitti> cjwatson: and I don't see another invocation of update_out.py (and it looks like update_output.txt is what this script produced)
[14:12] <cyphermox> rharper: sorry, got distracted by half my coffee going in my cup, and the other half oozing from all sides of the machine onto the countertop
[14:12] <cyphermox> rharper: -can have a fractional part.
[14:12] <cyphermox> +can have a fractional part.  Available on kernel versions < TBD ?
[14:13] <rharper> cyphermox: thx;  right;  I'll update with the data;
[14:15] <cyphermox> other than that it seems alright
[14:15] <cyphermox> it's for xenial?
[14:15] <rharper> cyphermox: it's not available on any of the supported kernels we have;  I just need to find which kernel release dropped that
[14:15] <rharper> cyphermox: yakkety and it ideally should go back to X and T and P IMO;
[14:16] <rharper> the big issue is the per-port settings
[14:16] <cyphermox> ok. your changelog reads xenial
[14:16] <rharper> that's where I tested it
[14:16] <cyphermox> cool cool
[14:16] <rharper> since I didn't have a yakkety cloud-image yet
[14:16] <rharper> also, thoughts on whether the bridge script should be -e ?
[14:16] <rharper> set -e ;
[14:16] <cyphermox> I'd be tempted to say no
[14:17] <rharper> when you apply values to things like port priority and it fails
[14:17] <cyphermox> would it be better to not have a bridge or to have a partial bridge with maybe missing options?
[14:17] <rharper> I suppose it depends on the configurer of the bridge
[14:17] <rharper> if you left stp on for example
[14:17] <cyphermox> oh, yeah
[14:17] <rharper> that might cause trouble for networks with loops
[14:18] <cyphermox> it definitely would break in fun ways in that case
[14:18] <rharper> I'm not sure about the SRU impact
[14:18] <rharper> I would expect folks would have already complained
[14:18] <cyphermox> for SRU it probably should not be changed
[14:18] <rharper> but it's a pretty clear bug that if someone on trusty did want to apply settings, they can't
[14:19] <rharper> right, the set -e; agreed;  what about the current patch for per-port settings ?
[14:19] <cyphermox> it makes sense to me; I see no issue with it
[14:19] <rharper> ok; let me clean up the manpage sentence on gc_timer/gc_int
[14:20] <cyphermox> I'm not sure if it's a very common use-case, but this would work
[14:20] <rharper> it isn't I think
[14:20] <rharper> but I was extending our bridge config testing in curtin
[14:20] <rharper> and wrote a testcase for validating bridge settings
[14:20] <rharper> noticed the b0rkage and thus the bugs
[14:23] <cjwatson> pitti: in britney1: britney:    cp $DATA_B2/output/$SERIES/output.txt $HTML/$SERIES/update_output.txt
[14:23] <cjwatson> pitti: so then grep for just output.txt in britney2 and you'll find it
[14:24] <cjwatson> or indeed case-insensitively for upgrade_output
[14:24] <pitti> cjwatson: ah, thanks; so we don't use update_out.py at all?
[14:24] <cjwatson> pitti: not the one in britney1, no.  I just didn't delete it to avoid making merges even harder
[14:24] <pitti> cjwatson: background is, I'd like to add something like update_output_notest.txt which ignores autopkgtests, so that it's easier to untangle transitions
[14:25] <pitti> cjwatson: thanks, that clarifies a lot
[14:27] <cjwatson> pitti: Probably simplest to do a separate britney2 run with different configuration.
[14:27] <pitti> cjwatson: right, that was the idea (with --dry-run or so, to not spit out the list of packages to migrate)
[14:28] <pitti> (that was HeidiResult or something similar, no?)
[14:28] <pitti> with ADT_ENABLE=no
[14:29] <cjwatson> something like that, yeah
[14:29] <cjwatson> you mostly just need to take care to avoid overwriting the delta from another run; britney2 doesn't do the actual copies itself
[14:30] <cjwatson> I'd recommend generating the config dynamically in britney1, in order to avoid future problems with config drift
[14:30] <cjwatson> i.e. sed over britney.conf
[14:33] <pitti> cjwatson: *nod*; is HEIDI_OUTPUT that delta? or is it taken from output.txt?
[14:35] <cjwatson> it's not taken from output.txt.  IIRC HEIDI_OUTPUT or something very similar is the delta
[14:36] <cjwatson> yeah, HEIDI_OUTPUT sounds right.  (heidi is the old name for dak copy-suite, I think)
[14:37] <cjwatson> dak control-suite rather
[15:00] <cyphermox> rharper: done?
[15:03] <rharper> yeah, lemme update the patch
[15:04] <cyphermox> ok
[15:05] <rharper> ok, updated
[15:06] <rharper> for 2.6.0 or older, you can have gc_int, and port priority has a max of 255, default 128; newer kernels have no gc_int and a max port priority of 63, default 32.
[15:06]  * rharper downloaded 2.6.0.tar.bz2 for the first time in like a decade 
[15:06] <cyphermox> oh wow, that is kind of old
[15:07] <rharper> yeah, Linux git didn't have anything older than 2.6.12
[15:24] <nacc> rharper: i think there is a historical linux tree
[15:24] <nacc> rharper: but its accuracy is ... imperfect :)
[15:25] <nacc> rharper: i think it was a side-import from bk, not endorsed by Linus (I think technically he might not have been allowed to endorse it, random politics)
[15:40] <rharper> nacc: hehe
[16:23] <karstensrage> is there a better way to find a backport tester (42 members) than randomly pinging nick in irc?
[16:23] <nacc> karstensrage: i would think a 'backport tester' is anyone ... a backports team member is distinct?
[16:23] <nacc> karstensrage: meaning, if you have a backported package in a PPA, anyone could test it
[16:24] <karstensrage> https://launchpad.net/~ubuntu-backports-testers
[16:24] <karstensrage> nacc, how would i go about finding this "anyone"
[16:26] <nacc> karstensrage: well, i mean, if you're backporting something, there must be a use-case or bug or something. So you find people affected by that bug and they test it... ?
[16:26] <karstensrage> nacc, this is just a new thing
[16:27] <karstensrage> it made it into debian-testing, got pulled from unstable to xenial-proposed and is now in xenial
[16:27] <karstensrage> would also like it in trusty and precise
[16:27] <infinity> What makes you think you need a third party tester?
[16:27] <karstensrage> i dont know what i need, i cant get any traction on the backport, seems like the backport list is huge and no one is doing them
[16:28] <infinity> The backports team might be a bit understaffed right now.
[16:28] <infinity> But the testing stuff can be done entirely by you.
[16:28] <karstensrage> ok done and done
[16:30] <karstensrage> so if thats the case, and since they're understaffed, and its tested and working, can someone else just push the button?
[16:41] <karstensrage> infinity, you might have been the one that pulled it into xenial
[16:42] <infinity> karstensrage: I was, but I don't do backports stuff.
[16:42] <karstensrage> i thank you for that, that was unexpectedly seamless
[16:45] <karstensrage> no freaking way
[16:45] <karstensrage> Laney, i could hug you
[16:46] <estan> how long does it usually take between uploading a ppa package before i can see it there?
[16:46] <karstensrage> estan, usually very fast, or you get an email saying it was rejected
[16:46] <karstensrage> like 5 minutes?
[16:47] <estan> karstensrage: hm ok. weird. i uploaded quite a while ago and havent gotten a mail. the upload command was successful.
[16:47] <karstensrage> check spam?
[16:48] <karstensrage> the upload can be successful but it still be rejected for all kinds of reasons esp if you dont change the version at all
[16:49] <karstensrage> Laney, if you have any questions please feel free
[16:49] <karstensrage> if you want my direct number i can provide that
[16:49] <Laney> karstensrage: questions about what?
[16:50] <Laney> what's the best way to retire at 35?
[16:50] <karstensrage> sell meth
[16:50] <karstensrage> :P
[16:50] <estan> yea, just thought i would have gotten a rejection by now in case something was wrong. nothing in spam.
[16:50] <estan> would it come from launchpad.net?
[16:51] <infinity> estan: You won't get an email if you forgot to sign it.
[16:51] <karstensrage> Laney, didnt you just handle one of my backports?
[16:51] <karstensrage> i meant questions about the other one
[16:51] <Laney> don't know
[16:52] <estan> infinity: it refused to upload first since i forgot, then i signed and it succeeded.
[16:52] <Laney> I handled some backports
[16:52] <Laney> don't know if any of them were yours
[16:52] <karstensrage> oic
[16:52] <Laney> but if you're happy, that's nice
[16:52] <karstensrage> well one was
[16:52] <Laney> :)
[16:52] <karstensrage> so thank you
[16:52] <estan> ill look closer when im home, on the bus now.
[16:55] <karstensrage> Laney, would you consider doing the other one?
[16:55] <karstensrage> please?
[16:55] <Laney> It depends on the first
[16:55] <Laney> So I can't build it until that one is done
[16:55] <karstensrage> yes
[16:55] <karstensrage> it seems done?
[16:56] <karstensrage> i see it in precise
[16:56] <Laney> Takes a while to get to the mirrors
[16:56] <karstensrage> ill check trusty
[16:56] <karstensrage> ok
[17:09] <estan> infinity: seems i signed with the wrong key. i thought it'd pick the one i set GPGKEY to in the environment, but i should have set DEBSIGN_KEYID (or passed -k).
[17:14] <estan> infinity: meh: "Package has already been uploaded to ppa on ppa.launchpad.net". i uploaded with the wrong key. now i get that after trying to upload the debsign --re-sign'ed package. do you happen to know what i can do now?
[17:15] <estan> infinity: ah, -f.
[17:15] <estan> working now \o/
[18:20] <jynik> I see that the releases linked from
[18:21] <jynik> https://wiki.ubuntu.com/Core all 404. Any chance someone could point me to the associated build scripts?
[18:37] <rbasak> sinzui: yeah it's waiting on the SRU team. See https://launchpad.net/ubuntu/trusty/+queue?queue_state=1 and equivalent for Wily.
[18:38] <sinzui> rbasak: oh, "unapproved", thank you. Looked at "new"
[18:44] <sarnold> jynik: odd, try these instead http://cdimage.ubuntu.com/ubuntu-core/xenial/daily-preinstalled/
[18:45] <jynik> sarnold: Yeah, I did see those. Trying to lend someone a hand who was using the 14.04 LTS and not quite done getting ready to jump to the 16.04 LTS.
[18:46] <jynik> Was hoping I could at least find whatever build scripts/tools/etc. to rebuild the 14.04.x LTS Core image until they finish transitioning
[18:47] <nacc> jynik: well, LTS -> LTS upgrades aren't offered until july anyways
[18:48] <jynik> nacc: I see. Even more of a reason for me to track down where the 14.04 Core images my colleague needs went then.
[18:49] <jynik> I'm hunting around the dev wiki - just having some trouble finding how these core images are built.
[18:49] <dobey> there were 14.04 core images?
[18:50] <ogra_> yes
[18:50] <jynik> https://web.archive.org/web/*/http://cdimage.ubuntu.com/ubuntu-core/releases/14.04/release/
[18:50] <ogra_> they were renamed to ubuntu-base
[18:50] <dobey> oh
[18:51] <jynik> ogra_: Ah, to avoid confusion with Snappy Core?
[18:51] <ogra_> look at http://cdimage.ubuntu.com/ubuntu-base/releases/ or some such
[18:51] <ogra_> yeah
[18:51] <dobey> they are the pre-snappy minimal ubuntu installs?
[18:51] <ogra_> yeah
[18:51] <dobey> ah ok, that makes sense
[18:51] <ogra_> and they still get built, just under a new name
[18:51] <jynik> ogra_: Thank you. While I have your attention - any chance you could point me to any docs on how these are being built?
[18:52] <dobey> i guess someone should update the wiki page, and maybe add a "Base" wiki page and point people to it for old stuff
[18:52] <ogra_> they are built on a launchpad-buildd using live-build ... the config for it lives in the livecd-rootfs package
[18:53] <jynik> ogra_: Excellent, many thanks.
[18:55] <ogra_> dobey, well, perhaps there are reasons that the page was removed completely ... slangasek would know
[18:56] <dobey> ogra_: ? https://wiki.ubuntu.com/Core is very much still there
[18:56] <ogra_> oh
[20:01] <hallyn> pitti: hey - i have a tiny reproducer package-set for bug 1579922 (left the code links in the last comment)
[20:01] <hallyn> solving that is the key to fixing the libvirt upgrade problem...
[20:06] <hallyn> arges: bug 1583009 , is the fix you uploaded for libvirt?  (not seeing it in rmadison)
[20:20] <infinity> hallyn: https://launchpad.net/ubuntu/+source/libvirt/1.3.4-1ubuntu2
[20:20] <infinity> hallyn: You're just impatient. ;)
[20:21] <hallyn> tis true.  thx
[20:24] <hallyn> arges: libvirt in sid doesn't have that, i wonder if they have a reason.
[20:25] <pitti> hallyn: cheers, I'll look at that
[20:25] <patcable> what's the usual length of time it takes for a package to go from being in -proposed to being something people will get when they apt-get upgrade?
[20:25] <patcable> (provided verification happened)
[20:26] <infinity> patcable: About a week.
[20:27] <patcable> exciting, thanks
[20:34] <Bluefoxicy> for point releases on Xenial, do you add new software packages to universe?
[20:38] <infinity> Bluefoxicy: We sometimes add new packages, yes.
[20:41] <Bluefoxicy> infinity:  cool.  What happens if a package is main and installed by default in a newer release, but also gets added to a prior LTS?  Does it go to universe?
[20:42] <infinity> Bluefoxicy: It wouldn't be added to an older release without reason to do so, we don't just pull packages in willy-nilly.  So, "depends".
[20:43] <Bluefoxicy> nod
[20:43] <Bluefoxicy> I'm trying to write a zram manager and just curious what I should target as a long-term roadmap
[20:44] <infinity> A zram manager?  Does it really need management?
[20:45] <infinity> Or do you mean "enhancing zram-config to take a conffile"?
[20:45] <Bluefoxicy> No, I mean an active manager.
[20:45] <Bluefoxicy> I can pull statistics on how much data is stored in zram and how much RAM it takes up; it's trivial to write a few simple rules to manage it, i.e. to determine whether there's less than $Xmin or more than $Xmax of available swap, whether the amount of RAM consumed by zram swap is below the $Y maximum, and so forth
[20:46] <infinity> Yeah, that doesn't sound like a thing we'd pull back into a stable release.
[20:46] <Bluefoxicy> so you can say, "Use up to 50% of my RAM as zram", and zram will create swap space as it populates, and remove swap space as it depopulates
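The rule engine Bluefoxicy describes might look something like this toy sketch (all thresholds and inputs are made-up example numbers, not a real zram-config interface; a real daemon would read the figures from /sys/block/zram*/ and /proc/meminfo):

```shell
#!/bin/sh
# Hypothetical thresholds for the $Xmin / $Xmax / $Y rules described above.
MIN_FREE_SWAP_MB=256     # Xmin: grow zram swap when free swap drops below this
MAX_FREE_SWAP_MB=2048    # Xmax: shrink when free swap exceeds this
MAX_ZRAM_RAM_MB=4096     # Y: cap on RAM actually consumed by zram

# decide FREE_SWAP_MB ZRAM_RAM_MB -> grow | shrink | hold
decide() {
    free_swap_mb=$1; zram_ram_mb=$2
    if [ "$free_swap_mb" -lt "$MIN_FREE_SWAP_MB" ] && \
       [ "$zram_ram_mb" -lt "$MAX_ZRAM_RAM_MB" ]; then
        echo grow
    elif [ "$free_swap_mb" -gt "$MAX_FREE_SWAP_MB" ]; then
        echo shrink
    else
        echo hold
    fi
}

decide 128 1024    # low free swap, RAM budget left -> grow
decide 4096 1024   # plenty of free swap           -> shrink
decide 512 1024    # within bounds                  -> hold
```

The point of keying on actual RAM consumed (not provisioned swap size) is exactly the dice-roll problem with static sizing: compressibility decides the real footprint.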
[20:47] <jtaylor> sounds a bit like you want to use zswap instead
[20:47] <Bluefoxicy> the problem with zram-config is it just says, "Make X amount of swap available"; if you fill it with almost-uncompressible data, it uses nearly that much RAM.  if you fill it with highly-compressible data, it uses almost no RAM.  You're rolling some dice and hoping for the best
[20:47] <jtaylor> its config is max percent of ram and doesn't reserve any ram
[20:48] <Bluefoxicy> jtaylor:  I'm not interested in writing back to disk as a cop-out to say "stop eating my RAM with this"; more importantly, I'm not interested in overprovisioning disk-based swap in an estimate of what's likely to achieve goals I can target directly.
[20:48] <Bluefoxicy> last I checked, zswap cached a disk swap partition into compressed memory
[20:48] <Bluefoxicy> that's interesting in its own right
[20:49] <jtaylor> usually your disk is much larger than ram so its not really much overprovisioning
[20:49] <Bluefoxicy> I've long-abandoned disk-backed swap as any significant portion of swap space, though, because while 8MB of swap space is reasonable in a world where 16MB is a lot of RAM, 12GB of swap space is not usable in a world where you have 24GB of actual RAM
[20:50] <sarnold> the installer created a 120 gigabyte swap partition for me because I had 128 gigs of ram. this left about 800 megabytes for /.
[20:50] <jtaylor> hehe
[20:50] <Bluefoxicy> sarnold: lol what
[20:51] <jtaylor> and then it still creates a 1gb /boot which fills up with like 5 kernels ;)
[20:51] <Bluefoxicy> sarnold: a 5tb disk is like $110 now, goway
[20:51] <sarnold> jtaylor: bingo :)
[20:51] <Bluefoxicy> jtaylor: RHEL6 still creates a 99MB /boot and then freaks out every time you try to upgrade if you already have 2 kernels installed
[20:51] <sarnold> Bluefoxicy: heh, I've got nine 3tb drives in this machine, plan on adding another six soon... the 120 gig SSDs are just for the OS :)
[20:52] <Bluefoxicy> infinity:  anyway I figured it'd be useful as an available software package, but I'm trying to get away from the installer defaulting to making a swap partition--it would at least close bug #-99999Sarnold
[20:53] <sarnold> oops. I broke Bluefoxicy :)
[20:53] <Bluefoxicy> negative overflow :)
[20:54] <infinity> Some use cases still need real swap, no matter how much we wish they didn't.  The first step to get away from swap partitions (by default) is to acknowledge that we can't create RAM out of thin air, but swap partitions suck, and switch to swap files.
[20:54] <Bluefoxicy> there's a swapd that creates swap files on-the-fly similar to what I described
[20:54] <infinity> If you mix zram and swap files together, the fact that swap files are a bit slower and icker is made up for by zram giving you some breathing room, and everyone wins.  Ish.
[20:55] <Bluefoxicy> why do some things need real swap?  Memory is just too tight?
[20:55] <infinity> Not everyone has gobs of RAM.
[20:55] <infinity> Such is life.
[20:55] <infinity> Not all computers are created equal.
[20:55] <infinity> And, on the other end of the spectrum, some computers are HUGE, but run even more huge working sets.
[20:56] <infinity> And extending to disk beats not being able to compute your weird working set.
[20:56] <pitti> xnox: FYI, upstart autopkgtests have been looping/breaking testbeds since yesterday, I'll blacklist them now (i. e. they'll stay in progress, but they stop DoSing the workers)
[20:56] <Bluefoxicy> generally the ones with minimal RAM are things like phones and tablets
[20:56] <pitti> xnox: as the test workers are still KDE/Qt-DoSed, need to make some room
[20:56] <infinity> Bluefoxicy: Or old computers that suck.
[20:56] <Bluefoxicy> on those architectures, it's traditional to not use swap because of some odd fear of destroying the nand
[20:56] <infinity> Bluefoxicy: But, as I note, there's also the high end.  Some workloads just really need more RAM than it's feasible to buy, and slow beats not running.
[20:57] <Bluefoxicy> infinity:  I think high-end HPC and scientific computing are special cases and we shouldn't just partition out 9000 gigs of swap space because you might be building a nuclear bomb simulator :P
[20:57] <infinity> To be fair, the installer can't really target those high end users, because we have no idea how much RAM they need.
[20:57] <Bluefoxicy> right
[20:58] <infinity> Still, "old computers suck" is a totally valid installation target.
[20:58] <infinity> Just because my laptop has 20G of RAM doesn't mean they all do.
[20:58] <Bluefoxicy> it's trivial to add functionality to create/use swap files at any path based on any rules
[20:58] <sarnold> and it's amazing how quickly "new computer" turns into "sucky old computer".
[20:58] <Bluefoxicy> also, in terms of old computers sucking, I don't think you can revive them by adding 2G more disk RAM to a 1G system
[20:58] <Bluefoxicy> I tried that
[20:59] <Bluefoxicy> what I got was "CHROME found another flash video that locks up my computer and requires me to unplug it"
[20:59] <Bluefoxicy> that has actually happened to me with 24GB of RAM installed and 4GB of swap, which is why I disabled disk swap entirely
[20:59] <Bluefoxicy> (it will ALSO happen with large amounts of lz4 zram, or zswap, of course)
[21:00] <Bluefoxicy> (...less bad, maybe, but it'll still happen)
[21:00] <infinity> sarnold: Yeah, I have a Nehalem i7 with 6G of RAM that may as well be a 486 for how crap it looks next to my laptop.
[21:00] <Bluefoxicy> hey man
[21:00] <Bluefoxicy> the 486 was a solid processor
[21:00] <Bluefoxicy> Quake 1 had full, real-time, software-rendered 3D, but required a 486SX 33mhz with 8 megabytes of RAM
[21:00] <Bluefoxicy> not even a math coprocessor
[21:00] <infinity> If by 486, you mean the AMD 5x86 486 clone, I agree.
[21:00] <Bluefoxicy> which, mostly, worked because John Carmack is a freaking wizard from another dimension
[21:01] <infinity> Either way, my "beastly desktop gaming machine" is now "an old crap computer", just proving sarnold's point. :P
[21:01] <Bluefoxicy> ha
[21:01] <Bluefoxicy> I dispose of my machines after 10 years
[21:01] <infinity> The Nehalem is a lot less than 10 years old. :P
[21:02] <infinity> Hrm.  2008.  Well, not "a lot".
[21:02] <infinity> That computer is a bit newer than that, though.
[21:02] <Bluefoxicy> I got rid of my AMD64 X2 1.9GHz Barton core when a fan controller physically burned out from electrical wear due to being older than modern computing.
[21:03] <Bluefoxicy> I then bought a Core i5 at 3GHz :)
[21:04] <Bluefoxicy> Exciting times.  The AMD64 would overheat and shut down the system at 1.9GHz, but not at 1.8GHz, so I had a CPU power management daemon throttle it to 1.8GHz if the CPU core temperature reached 78C, and unthrottle it at 75C
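Bluefoxicy's throttling daemon is a classic hysteresis loop: throttle once the core hits 78C, and only unthrottle after it has cooled back to 75C, so it doesn't flap between states. A sketch with a simulated temperature trace (a real daemon would read the sensor from sysfs/hwmon and write the cpufreq limit):

```shell
#!/bin/sh
# Hysteresis sketch: hot threshold 78C, cool threshold 75C (simulated trace).
simulate() {
    throttled=0
    for temp in 70 76 79 77 76 74 79; do
        if [ "$throttled" -eq 0 ] && [ "$temp" -ge 78 ]; then
            throttled=1            # reached 78C: drop to the lower clock
        elif [ "$throttled" -eq 1 ] && [ "$temp" -le 75 ]; then
            throttled=0            # cooled to 75C: restore full speed
        fi
        echo "${temp}C throttled=$throttled"
    done
}
simulate
```

Note that 77C and 76C while throttled keep the throttle on: the gap between the two thresholds is what prevents oscillation right at the trip point.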
[21:06]  * Bluefoxicy gets back to work
[21:17] <JanC> swap would be useful for moving leaked (and thus unused) memory out of RAM, except usually the leaked/unused memory is fragmented & mixed with used memory, meaning it gets swapped in all the time  :)
[21:17] <Bluefoxicy> there are ways to fix that
[21:18] <JanC> in theory, yes
[21:18] <JanC> not leaking memory would be a good start, not fragmenting memory would be good too  ;)
[21:19] <Bluefoxicy> things like Mono use compacting memory managers and physically colocate highly-used memory.  For non-managed memory allocators, it's a notably good strategy to place allocations from the same thread close together, group similar-sized allocations together, and make large allocations as anonymous mappings
[21:20] <Bluefoxicy> some 10 years ago I tried to write a ptmalloc replacement on Hoary, but I'm not a programmer and got myself in all kinds of trouble I didn't know how to resolve :P
[21:21] <Bluefoxicy> (I still say brk() is outdated)
[22:15] <arges> hallyn: it fixes the issue. yea wondering why they don't have it
[23:13] <dannf> hallyn: do you mind if i upload this to y? https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1566564/comments/4
[23:15] <cjwatson> s390x builders will be going down in about 25 minutes for maintenance; expected downtime about an hour