=== Ukikie_ is now known as Ukikie | ||
-queuebot:#ubuntu-release- New: accepted linux-signed-lts-xenial [amd64] (trusty-proposed) [4.4.0-65.86~14.04.1] | 06:46 | |
-queuebot:#ubuntu-release- Unapproved: binutils (yakkety-proposed/main) [2.27-8ubuntu2 => 2.27-8ubuntu2.1] (core) | 07:45 | |
-queuebot:#ubuntu-release- New: accepted uwsgi [amd64] (zesty-proposed) [2.0.14+20170111-4ubuntu1] | 07:52 | |
-queuebot:#ubuntu-release- New: accepted uwsgi [ppc64el] (zesty-proposed) [2.0.14+20170111-4ubuntu1] | 07:52 | |
-queuebot:#ubuntu-release- New: accepted uwsgi [i386] (zesty-proposed) [2.0.14+20170111-4ubuntu1] | 07:52 | |
-queuebot:#ubuntu-release- New: accepted uwsgi [s390x] (zesty-proposed) [2.0.14+20170111-4ubuntu1] | 07:52 | |
=== caribou_ is now known as caribou | ||
-queuebot:#ubuntu-release- Unapproved: sssd (trusty-proposed/main) [1.11.8-0ubuntu0.4 => 1.11.8-0ubuntu0.5] (ubuntu-desktop) | 09:21 | |
caribou | This upload ^^ is meant to fix the FTBFS which is holding -ubuntu0.4 in trusty-proposed. Does anything need to be done about the 0.4, or will the new upload just override it? | 09:24
-queuebot:#ubuntu-release- Unapproved: coreutils (xenial-proposed/main) [8.25-2ubuntu2 => 8.25-2ubuntu3~16.04] (core) | 09:25 | |
-queuebot:#ubuntu-release- Unapproved: coreutils (yakkety-proposed/main) [8.25-2ubuntu2 => 8.25-2ubuntu3~16.10] (core) | 09:26 | |
-queuebot:#ubuntu-release- Unapproved: simple-image-reducer (xenial-proposed/universe) [1.0.2-3 => 1.0.2-3ubuntu0.16.04.1] (no packageset) | 11:10 | |
-queuebot:#ubuntu-release- Unapproved: exiv2 (xenial-proposed/main) [0.25-2.1 => 0.25-2.1ubuntu16.04.1] (kubuntu, ubuntu-desktop) | 11:25 | |
-queuebot:#ubuntu-release- Unapproved: gnome-weather (xenial-proposed/universe) [3.18.1-1 => 3.18.1-1ubuntu1.16.04.1] (desktop-extra, ubuntugnome) | 11:44 | |
-queuebot:#ubuntu-release- Unapproved: imagej (xenial-proposed/universe) [1.50d+dfsg-1 => 1.50d+dfsg-1ubuntu1.16.04.1] (no packageset) | 11:54 | |
=== tedg_ is now known as tedg | ||
nacc | i wonder if i could get some AA help to remove two src packages from zesty? LP: #1667834 to remove src:php7.1 and LP: #1666566 to remove src:postgresql-9.6. Both are transitions we will pursue in z+1, but we don't want the former to confuse users and we don't want the latter to block transitions. | 16:15 |
ubot5 | Launchpad bug 1667834 in php7.1 (Ubuntu) "[FFe] Please remove php7.1 from zesty" [Undecided,New] https://launchpad.net/bugs/1667834 | 16:15 |
ubot5 | Launchpad bug 1666566 in postgresql-9.6 (Ubuntu) "Please remove postgresql-9.6 from zesty-proposed" [Undecided,New] https://launchpad.net/bugs/1666566 | 16:15 |
jbicha | nacc: wouldn't it be easier to just finish the pg 9.6 transition? | 16:16 |
nacc | jbicha: no, it hasn't started yet | 16:17 |
nacc | jbicha: it's 100x easier to remove it :) already discussed with pitti as well | 16:17 |
jbicha | nacc: actually, it's almost done | 16:17 |
jbicha | if you look at https://people.canonical.com/~ubuntu-archive/proposed-migration/zesty/update_excuses.html | 16:18 |
jbicha | postgresql-plproxy's autopkgtests should be ignored since they almost always fail | 16:18 |
nacc | jbicha: not in the ubuntu packages afaict | 16:18 |
nacc | jbicha: we have to rebuild a number of packages | 16:18 |
nacc | jbicha: to pick up new deps | 16:18 |
nacc | e.g., libpq5 -> libpq6 | 16:18 |
mapreri | considering debian stretch has pg 9.6 it seems weird to me that zesty still keeps 9.5 | 16:19 |
nacc | mapreri: time and a transition of maintainership in Ubuntu | 16:19 |
mapreri | nacc: "transition of maintainership"? | 16:19 |
nacc | mapreri: pitti was doing the work, now cpaelzer and I are | 16:20 |
mapreri | ok, but still… | 16:20 |
jbicha | the only 2 pkgs that need more investigation are the autopkgtests for postgresql-filedump and postgresql-multicorn | 16:20 |
jbicha | all the pg9.6 rebuilds were done several weeks ago | 16:20 |
mapreri | and rebuilding packages is not so time-expensive. | 16:20 |
nacc | jbicha: `reverse-depends src:postgresql-9.5` implies otherwise | 16:21
nacc | mapreri: look, i'm not arguing about it. If you want to do it, do it. | 16:21 |
jbicha | nacc: I think that sees all the stuff in zesty release too | 16:21 |
jbicha | if you look at excuses you'll see that there are a lot of packages in zesty that have already been rebuilt | 16:21 |
jbicha | and if you look at https://people.canonical.com/~ubuntu-archive/proposed-migration/zesty/update_output_notest.txt | 16:22 |
jbicha | you'll see that pg9.6 is ready for transition once the few autopkgtests are taken care of | 16:22 |
nacc | jbicha: yes, but we also want to transition pg9.5 out | 16:22 |
nacc | jbicha: which takes time | 16:23 |
jbicha | (look for "apparently successful") | 16:23
cpaelzer | and as an extra "benefit", fewer active versions in the field to maintain | 16:24
nacc | jbicha: e.g., a quick glance shows zabbix-proxy-pgsql is dependent on pg9.5 | 16:24
nacc | jbicha: it just means more work, to verify they all get fixed/work | 16:24 |
jbicha | I did most of those rebuilds | 16:25 |
cpaelzer | nacc: we might forward the mail that we had with pitti to ubuntu-release ML if that would help to provide some reasoning | 16:25 |
jbicha | it looks to me like it'll be more work to try to undo the transition when it's what 90+% done | 16:25 |
nacc | jbicha: why? it's not published yet | 16:25
nacc | jbicha: in the release pocket | 16:25 |
nacc | jbicha: so nothing can have migrated in that depends explicitly on postgresql-9.6 | 16:26
jbicha | nacc: while I advise against it, there are people that run zesty-proposed before release | 16:26 |
nacc | afaik, that's not something that needs to be supported | 16:26 |
nacc | inasmuch as it's a known-broken world | 16:26 |
nacc | things can get removed from z-p, and changed, etc. | 16:27 |
jbicha | I'm frustrated because I spent time doing the rebuilds and rerunning autopkgtests to get it down to maybe 3 autopkgtests that need to be ignored | 16:29 |
nacc | jbicha: but why aren't they ignored yet, then? | 16:30 |
jbicha | after talking with pitti in December | 16:30 |
nacc | jbicha: i'm sorry, pitti didn't mention it at all to us | 16:30 |
cpaelzer | yeah sorry++ ^^ | 16:31 |
nacc | jbicha: if you're going to vouch for the ignoring of the postgresql-plproxy tests, please update that bug (I think) with that comment | 16:31 |
jbicha | pitti seemed to want to stay with 9.5 for zesty but autosync or whatever was causing problems so 9.6 was let in | 16:31
nacc | cpaelzer: presuming the above --^, are you ok with the transition? it's late, but it's already in z-p, might just flow through | 16:31 |
cpaelzer | nacc: if we both can at least dedicate 1-2 days this week to make sure it does - I'm ok | 16:32 |
nacc | cpaelzer: yeah, i have the cycles, at least | 16:32 |
jbicha | http://autopkgtest.ubuntu.com/packages/p/postgresql-plproxy/zesty/amd64 | 16:32 |
cpaelzer | I can shuffle things around this week, but will very likely be locked up the next 2 | 16:32
nacc | cpaelzer: and once it does, i can start working on removing src:postgresql-9.5 | 16:32 |
jbicha | I don't know about the other 2 pkgs though, postgresql-filedump and postgresql-multicorn | 16:33 |
nacc | jbicha: i'm not the one to convince :/ | 16:33 |
nacc | jbicha: you need to get the release team to ignore it, afaict | 16:33 |
nacc | jbicha: this is the error? | 16:34 |
nacc | Error: port 5432 is already used by cluster 9.6/main | 16:34 |
nacc | BUG: Cluster 9.6/regress was not created by me (PGSYSCONFDIR=/etc/postgresql-common, PG_CLUSTER_CONF_ROOT=) | 16:34 |
nacc | https://ci.debian.net/packages/p/postgresql-plproxy/unstable/amd64/ | 16:35 |
nacc | would be good to fix debian | 16:35
jbicha | nacc: that was already broken with 9.5 on yakkety: http://autopkgtest.ubuntu.com/packages/p/postgresql-plproxy/yakkety/amd64 | 16:36 |
jbicha | good to fix but not a new regression | 16:36 |
nacc | never said it was -- just feels like it's not something we should just ignore, if we can fix it | 16:37 |
nacc | i would much rather be sure we aren't breaking plproxy | 16:37 |
cpaelzer | nacc: since I can't predict how far you get today, would you leave me a breadcrumb mail at EOD saying where to take over analyzing tests or whatever? | 16:37
nacc | cpaelzer: ack | 16:37 |
cpaelzer | nacc: and I'd like to track this somewhere - should we refurbish the removal bug into a "kill the remaining blockers" bug? | 16:38
nacc | cpaelzer: yeah, i'll update that bug regardless | 16:38 |
cpaelzer | starting with jbicha stating why we won't drop it but will go on as discussed before | 16:38
nacc | sbeattie: do we want to transition src:mariadb-10.0 out eventually? is someone working on that / bug filed? it seems like its revdeps are all from the metapackages (and none are versioned)? | 16:40
powersj | slangasek: can I get you to review this (or tell me who should): https://code.launchpad.net/~powersj/ubuntu-cdimage/server-size-limit/+merge/317648 | 16:50
-queuebot:#ubuntu-release- Unapproved: debian-installer (yakkety-proposed/main) [20101020ubuntu483.1 => 20101020ubuntu483.2] (core) | 16:51 | |
Laney | Looks to me like postgresql-filedump is putting files in 9.5/ and not 9.6/ which is breaking things | 16:51 |
Laney | I should think a no-change rebuild will fix that one | 16:52 |
slangasek | powersj: laney, infinity, sil2100 are also good for a merge review on this kind of thing; but I can look at this | 16:53 |
slangasek | powersj: you intend to retroactively raise the limit for all LTS series, correct? | 16:53 |
powersj | slangasek: yes | 16:54 |
jbicha | ok, I'll try a rebuild of postgresql-filedump | 16:55 |
slangasek | powersj: ok. FWIW I'm going to move this up under one of the other cases that are already sized to 2^30, for legibility | 16:55 |
powersj | slangasek: ok | 16:55 |
powersj | thanks for looking at it | 16:56 |
nacc | Laney: good catch | 16:58 |
slangasek | powersj: one thing I will note is that by raising the limit for all releases, if the image does grow in the development series, you won't get any early warning if it gets big enough that a point release w/ hwe kernel is going to put it over the new limit | 16:58 |
slangasek | powersj: but that's not a reason for me not to merge this, just something for you to think about | 16:58 |
Laney | nacc: the other ones look more difficult ;-) | 16:58 |
Laney | but "oh they've failed for a while, let's skip them" doesn't leave me with a good feeling | 16:59 |
Laney | in general I'd like to see some analysis | 16:59 |
nacc | Laney: right, i'm importing plproxy to try and fix it | 16:59 |
Laney | even if that's "this is clearly not a problem due to X Y and Z" | 16:59 |
Laney | something more than "that's a lot of red crosses, must just be a crappy test" | 16:59 |
Laney | :) | 16:59 |
nacc | Laney: ack, I think I will use the removal bug to help track the various affected packages | 17:00
powersj | slangasek: I agree, however the daily email telling me is something that is now getting ignored as no action can be taken. I do think monitoring the dev release somehow makes sense so we don't get into panic mode. | 17:00
nacc | jbicha: looks like multicorn's regression is from sqlalchemy | 17:00 |
Laney | nacc: ack, thanks! | 17:00 |
infinity | slangasek: I think that could do with a dist comparison to <= xenial, for that very reason. | 17:00 |
infinity | slangasek: If 18.04 needs to grow for the release version, we can examine that at the time, but if it only needs to grow for the point release, the knob to twiddle at the time is obvious. | 17:01 |
infinity | If we let server goldfish to 1G, then the 18.04 point release (if we continue this dual-stack thing) will obviously be >> 1G, which is less than ideal. | 17:02 |
powersj | slangasek: if you want to tell me where to move those lines I can resubmit with the dist check. infinity's proposal makes sense and solves the problem. | 17:03
slangasek | powersj: I would just do something like lines 1723-1726, but fall through all the way to line 1780 for the devel series case | 17:06 |
powersj | slangasek: ok will make change | 17:07 |
powersj | slangasek: care if I limit it only to xenial then (e.g. == xenial)? | 17:10 |
slangasek | powersj: fine with me, I can argue myself into thinking either way is correct | 17:11 |
powersj | slangasek: updated | 17:14 |
slangasek | powersj: I think I'm going to get a linting error for the length of the line (yep!) - ./run-tests is the pre-commit test suite; I'll fix up here and commit | 17:17 |
powersj | slangasek: doh... sorry | 17:19 |
slangasek | powersj: no worries - merged | 17:19 |
powersj | slangasek, infinity thanks again | 17:19 |
infinity | rbasak: https://lists.ubuntu.com/archives/technical-board/2017-February/002284.html | 17:22 |
infinity | rbasak: ^-- You can't manipulate the NEW queue without ~ubuntu-archive, so the question is somewhat moot. | 17:23 |
rbasak | infinity: ah. Some people thought we could for a stable release. | 17:23
rbasak | But yeah. If we can't, then it's moot. | 17:23 |
infinity | You definitely can't. | 17:23 |
rbasak | I asked the question because it felt like we were being bogged down by talking about it instead of JFDI. | 17:23 |
infinity | You can certainly review, say "yeah, this is a straight backport that doesn't require AA review, please accept" and turn one of us into a monkey. But then we still have to trust you know how debdiff works. | 17:24 |
jbicha | but there are SRU Team members of ~ubuntu-archive who felt they couldn't process new packages because they weren't full AAs | 17:24 |
infinity | (We probably trust that) | 17:24 |
slangasek | infinity, rbasak: procedurally, my position is that I'm happy for any member of the SRU team to do the actual review and use me as an AA button pusher to accept | 17:24 |
Laney | I think you can | 17:25 |
apw | it seems reasonable to make recommendations. | 17:25
infinity | jbicha: Yes, there are some who are AAs only for broken permissions reasons (mostly kernel updates). | 17:25 |
Laney | I think I tried this once when sitting with apw | 17:25 |
Laney | As ~ubuntu-release admittedly | 17:25 |
rbasak | infinity: it would still probably be useful to state "~ubuntu-archive still needs to push the button due to the technicality, but in principle yes" | 17:26 |
rbasak | Because then any ~ubuntu-archive member knows how to respond to a request. | 17:26
infinity | rbasak: Well, to be fair, it's less about policy and more about personal trust. Much like I'd sign your GPG key without a passport because I know you, that's not a policy everyone will have. :P | 17:27 |
rbasak | OK that's fair. | 17:28 |
infinity | rbasak: The technicality is there for a reason. NEW *should* have a review. But in practice, most AAs should trust that the SRU member in question isn't a derp and isn't lying about their review. | 17:28 |
apw | as we should never have them, i like the second set of eyes that implies | 17:30 |
* rbasak notes that infinity doesn't appear to have signed his key :-) | 17:30 | |
* tsimonq2 -> moves here | 17:30 | |
tsimonq2 | infinity: I think I might want to stand my ground on not installing recommends for now, it's part of what makes Lubuntu lightweight, but I'm open to change if you have some stats for me to look at. ;) | 17:31 |
infinity | tsimonq2: It wasn't done to be "lightweight", it was done because of accidentally pulling in half of ubuntu or GNOME, if I recall. :P | 17:32 |
infinity | tsimonq2: MATE had the same issues, but managed to fix it in packages (sadly, after xenial, but better late than never) | 17:32 |
tsimonq2 | infinity (cc flexiondotorg): I would be interested to see how MATE fixed this issue. | 17:33 |
slangasek | yes, I would expect any new package going into a stable release to be reviewed against the standard of "is this package already in devel? if not why not? are there deltas from the devel version? if so, why?" | 17:33 |
tsimonq2 | infinity: Also, why is it painful for us to do this? | 17:33 |
tsimonq2 | infinity: Does it cause some unnecessary overhead for you archive folks? | 17:34 |
slangasek | tsimonq2: because the set of things that are included in recommends down the stack from you is quite extensive | 17:34 |
* Laney is uploading a package to NEW to test if he can reject it | 17:34 | |
infinity | tsimonq2: There's a bunch of special-casing in image build infra and such for no-follow-recommends, and it kinda sometimes works, and kinda sometimes doesn't. | 17:34 |
slangasek | that too | 17:34 |
slangasek | both infra overhead, and the unexpected behavior | 17:35 |
infinity | Plus, I sometimes have to fix bugs in lubuntu found because we add a recommends that you *don't* pick up, but really should. | 17:35 |
infinity | (The obvious one I can think of is thermald) | 17:35 |
tsimonq2 | infinity: That makes sense. | 17:35 |
infinity | But I think you also recently saw pain because of xorg-input-synaptics. | 17:36 |
infinity | Same reason. | 17:36 |
tsimonq2 | And I can see why it was done in the first place too, but like I said, I'm open to change. :) | 17:36 |
infinity | Recommends was meant to make sure it was on installation media, but not *required* post-install; no-follow-recommends breaks that assumption. | 17:36
wxl | historically, we've had issues trying to keep the installs CD-sized but i think we can all agree we've given up on that one :) | 17:37
tsimonq2 | infinity, slangasek: Any way we could do some sort of testing with removing no-follow-recommends? I'd like to look into the stats on that and see if it's worth doing. | 17:37 |
tsimonq2 | wxl: Pretty much. ;) | 17:37 |
infinity | tsimonq2: I can help you do that in a PPA with some special builds next week. | 17:38 |
infinity | tsimonq2: THough a first step would be to remove it from a local branch of your seeds, update your meta against that local branch, and see how scary the result is. | 17:38 |
infinity | tsimonq2: The first pass will be... Rough. You'll probably end up with half of unity and half of GNOME, and all of it will be redundant stuff you don't want. :P | 17:39 |
Laney | Yeah, I rejected it from zesty NEW and it worked | 17:39 |
tsimonq2 | infinity: So then I go through and blacklist everything? XD | 17:39 |
tsimonq2 | infinity: No, but seriously, from there, I just blacklist some of the super high level things? | 17:40 |
infinity | tsimonq2: No. Then you start proposing fixes to packaging, ie: when something needs indicator-whosit, you might need to add some alternate deps, etc. flexiondotorg might have helpful pointers there, but no point asking him until you have an idea of the damage. | 17:40 |
tsimonq2 | infinity: Ok, I see. | 17:40 |
infinity | tsimonq2: Anyhow, the ongoing pain of no-follow-recommends is something I've lived with for years, so if you don't fix it until AFTER LXQt is a thing, that's fine by me. | 17:41 |
infinity | tsimonq2: But I figure if you're tearing out 200 packages and replacing them with 200 different ones, that's an ideal time to also look at conforming with how other flavours do things. | 17:41 |
infinity | tsimonq2: Wasting time fixing something that's going to be removed from the archive in 6 months is, well, wasting time. | 17:42 |
tsimonq2 | infinity: Alright, that makes sense. I'll think about it, and I'll want to involve lubuntu-devel etc. for sure once I have some stats to work with. | 17:43 |
-queuebot:#ubuntu-release- Unapproved: ktp-text-ui (xenial-proposed/universe) [4:15.12.3-0ubuntu1 => 4:15.12.3-0ubuntu2] (kubuntu) | 17:45 | |
tsimonq2 | infinity: About LXQt... at the moment I have a little tiny Launchpad team with a few PPAs and some hacky tooling on a VPS that makes some images. I plan on working on Lubuntu Next more, but is there something we can set up so that Lubuntu Next images are built with those PPAs enabled in the tooling, alongside the rest of the flavors and the rest of our images? The problem right now is, I need to work on some | 17:47
tsimonq2 | various packages to get some major work done on these packages and test some things, but I don't like piling things up for patch pilots to the point of insanity, and I want to test things fairly quickly. | 17:47
tsimonq2 | s/some various/some/ | 17:48 |
infinity | tsimonq2: We can make the images build using a PPA, yes. | 17:53 |
-queuebot:#ubuntu-release- Unapproved: mimedefang (xenial-proposed/universe) [2.78-1ubuntu1 => 2.78-1ubuntu1.1] (no packageset) | 17:54 | |
tsimonq2 | infinity: Is there something I need to propose a PR against, or can I just tell you what PPA? | 17:57 |
-queuebot:#ubuntu-release- Unapproved: debian-installer (trusty-proposed/main) [20101020ubuntu318.41 => 20101020ubuntu318.42] (core) | 17:57 | |
infinity | tsimonq2: We can just change the definition of the lubuntu-next livefs, or mangle it in crontab, but maybe ask me again in a few days when I have time to help you test that it's working after making the change? | 17:58 |
tsimonq2 | infinity: Sure, when are you available? | 18:00 |
infinity | tsimonq2: Likely later this week. Just not today. :P | 18:04 |
tsimonq2 | infinity: I'm available pretty much all tomorrow except for the early afternoon... :P | 18:06 |
-queuebot:#ubuntu-release- Unapproved: nagios-plugins-contrib (yakkety-proposed/universe) [16.20151226 => 16.20151226ubuntu0.16.10.1] (no packageset) | 18:21 | |
-queuebot:#ubuntu-release- Unapproved: nagios-plugins-contrib (xenial-proposed/universe) [16.20151226 => 16.20151226ubuntu0.16.04.1] (no packageset) | 18:22 | |
slangasek | I wonder if I'm the only one who cares that sha256sum should be able to spawn multiple threads in order to checksum files in parallel | 20:16 |
tsimonq2 | slangasek: That could be useful ;) | 20:22 |
=== Guest99451 is now known as santa_ | ||
xnox | slangasek, yes | 20:51 |
slangasek | xnox: yes I'm the only one or yes you'd find it useful? :) | 20:51 |
xnox | i do use gnu parallel with checksums | 20:51 |
tsimonq2 | xnox: But why not bake it in? :P | 20:51 |
xnox | but the output is ugly; as it effectively results in loads of "not found" messages and one correct message per spawn. | 20:52
xnox | slangasek, yes | 20:52 |
infinity | Surely most sha256sum calls are I/O bound, not CPU bound. | 20:52 |
infinity | Well, at least where I care about speed (*cough* nusakan's SAN *cough*) | 20:52 |
xnox | infinity, asyncio for the win; my NVMe drive has multiple I/O queues one can use | 20:52
tsimonq2 | infinity: What if you're doing a whole load of them, and you're verifying like 2000 files? | 20:53 |
tsimonq2 | I wouldn't be surprised if slangasek hits that usecase. | 20:53 |
tsimonq2 | (not like I know much about the internals of sha256sum :P) | 20:53 |
-queuebot:#ubuntu-release- Unapproved: accepted debian-installer [source] (yakkety-proposed) [20101020ubuntu483.2] | 20:54 | |
slangasek | xnox: right, so a 'sha256sum -j' that can launch a suitable number of threads | 20:54 |
tsimonq2 | xnox: But that doesn't work on HFS, does it? I thought that old Mac filesystem can only read OR write one file at a time? :P | 20:55 |
slangasek | and collate the results | 20:55 |
tsimonq2 | slangasek: So are you making that a thing? Then figuring out a way to make it work on HFS? XD | 20:55 |
slangasek | tsimonq2: I don't care at all about HFS, and I'm not committing to working on this, just gathering feedback to figure out if it might be worth it | 20:56 |
slangasek | right now I'm doing a number of local image builds, where the checksumming is a noticeable percentage of the build time | 20:56 |
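[Editor's note: a minimal sketch of the "sha256sum -j"-style tool being wished for here, assuming a plain Python 3 script; the name and structure are invented for illustration (coreutils' sha256sum has no such flag). hashlib releases the GIL while digesting large buffers, so a thread pool does buy real parallelism across files, and results are printed collated in input order as discussed below.]

```python
#!/usr/bin/env python3
"""Hypothetical 'parsha256sum' sketch: hash files in worker threads,
printing results collated in input order, like sha256sum's output."""
import hashlib
import sys
from concurrent.futures import ThreadPoolExecutor

def sha256_file(path, chunk_size=1 << 20):
    # Read in 1 MiB chunks so multi-GB images never sit fully in RAM.
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def main(paths, jobs=4):
    # map() yields results in submission order, so output stays collated.
    with ThreadPoolExecutor(max_workers=jobs) as pool:
        for path, digest in zip(paths, pool.map(sha256_file, paths)):
            print(f"{digest}  {path}")

if __name__ == "__main__":
    main(sys.argv[1:])
```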
tsimonq2 | slangasek: And I was joking (:P), but in all seriousness, I think it's a good idea to make things parallel that aren't already. | 20:56 |
tsimonq2 | infinity: What's the story behind Nusakan's SAN? | 20:57 |
tsimonq2 | (Nukasansan? Nuka sansan? :P) | 20:58 |
xnox | imho trivial parallelisation like that should be the default and out of the box, even if it results in higher total cpu time usage | 20:58
xnox | e.g. git and cmake do that | 20:59 |
infinity | Even if it results in higher real time as well, due to disk thrash? | 20:59 |
xnox | i'd even go for that. | 20:59 |
slangasek | xnox: if it's by default, then you should have a flag to disable it? | 20:59 |
xnox | sure. | 20:59 |
slangasek | I'd still rather make it opt-in with -j | 20:59 |
xnox | slangasek, how many times have you disabled git parallelism? | 21:00 |
-queuebot:#ubuntu-release- Unapproved: accepted debian-installer [source] (trusty-proposed) [20101020ubuntu318.42] | 21:00 | |
* xnox did it once server side, because it would hit OOM kill since it used too much RAM for repacking | 21:00 |
infinity | My git usage doesn't grind my machine for three hours like I did to nusakan a couple of weeks ago with hashing. | 21:00 |
infinity | And in cases where it can (ie: dists snapshots on snakefruit), we absolutely tweak it to cope. | 21:01 |
slangasek | xnox: never, but my CPU thread to disk ratio is different than others' | 21:01 |
xnox | imho in the nusakan case one should be tweaking it from -j4 -> -j0, or from -j0 -> -j4, anyway | 21:02
xnox | the sensible default imho should be optimised for the most likely case of nvme/ssd disks; and my current hypothesis is that parallel by default will win wall clock time most of the time. | 21:02
tsimonq2 | slangasek> I'd still rather make it opt-in with -j - +1 fwiw | 21:03 |
infinity | Sure, but I would contend that the "normal" use-case for hashing multiple files will be slower, wall-clock-wise, in parallel than serial. | 21:03 |
xnox | ack | 21:03 |
jgrimm | slangasek, the current draft for Curtin SRU Exception is at -> https://wiki.ubuntu.com/CurtinUpdates | 21:03 |
infinity | xnox: I don't think nvme/ssd is the "most likely case" for hashing large sets of files, that's where we disagree. ;) | 21:03 |
xnox | infinity, wouldn't in-sync RAID1 have faster read speed too? cause it's meant to balance and be able to read files off either drives? | 21:04 |
tsimonq2 | I think I have to agree with infinity, I'm still on a 1 TB HDD :P | 21:04 |
infinity | xnox: Yeah, but it does it in chunks, not per-file, so if you assume a reasonably unfragmented FS, you're still going to lose to thrash if you pull stripes from both disks for two files at once. | 21:05 |
xnox | =( | 21:05 |
xnox | FAKE NEWS | 21:05 |
infinity | Hahaha. | 21:05 |
* xnox giggles | 21:05 | |
infinity | And anything involving a network (a SAN, nbd, nfs, any number of network-backed VM storage solutions) will lose horribly when you try to stream multiple files at once. | 21:07 |
xnox | which is all of cloud | 21:07 |
* tsimonq2 thinks Juju something or other | 21:07 | |
infinity | Usually a non-issue for small files, which is why git's default parallelism is generally okay, but thrashing between several large files hurts. | 21:07 |
slangasek | so arguably, a good implementation would have a parent process that streams the files from disk one chunk at a time, then dispatches to a set of worker threads | 21:08 |
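[Editor's note: one possible reading of that design, sketched under the assumption that the point is to keep all reads on a single parent thread while hashing runs in parallel, one worker per file; every name here is a hypothetical helper, not slangasek's actual plan.]

```python
# Hypothetical helper: one reader thread, one hashing worker per file.
import hashlib
import queue
import threading

CHUNK = 1 << 20  # 1 MiB per read

def _hash_worker(chunks, results, path):
    digest = hashlib.sha256()
    for chunk in iter(chunks.get, None):       # None marks end-of-file
        digest.update(chunk)
    results[path] = digest.hexdigest()

def hash_files_single_reader(paths):
    results, queues, workers = {}, {}, []
    for path in paths:
        q = queue.Queue(maxsize=4)             # bound buffered chunks per file
        t = threading.Thread(target=_hash_worker, args=(q, results, path))
        t.start()
        queues[path] = q
        workers.append(t)
    # Only this parent thread ever touches the disk: it pulls one chunk
    # per file in turn and hands it to that file's hashing worker.
    handles = {path: open(path, "rb") for path in paths}
    pending = list(paths)
    while pending:
        for path in list(pending):
            chunk = handles[path].read(CHUNK)
            if chunk:
                queues[path].put(chunk)
            else:
                queues[path].put(None)
                handles[path].close()
                pending.remove(path)
    for t in workers:
        t.join()
    return results
```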
infinity | It might also just have arbitrary cutoffs for parallelism based on size of target files. | 21:08 |
xnox | slangasek, do we still pointlessly produce SHA1 checksums?! | 21:09 |
infinity | If you can slurp down 20 files quickly and then munch on them, you win, if you're abusing your I/O with 4 1G ISOs, you lose. | 21:09 |
slangasek | xnox: in some places, probably. | 21:10 |
xnox | nice that one can slurp a chunk and feed it to multiple checksum algos simultaneously; cause i guess we still need to produce multiple checksums | 21:10
xnox | http://unix.stackexchange.com/a/163797 | 21:10 |
xnox | cause i guess not reading everything 3 times over beats anything | 21:11
infinity | Yes, reading each stream only once would be a greater win. Especially if one can spit out md5/sha1/sha256/sha512 in parallel. | 21:12 |
infinity | And that would definitely keep your CPU cores busy. | 21:12 |
xnox | should be easy enough to write with python3 and hashlib | 21:13 |
xnox | and include sha3 as well | 21:13 |
infinity | That sort of happens naturally (ish) if you do one file at a time, but only if you have enough RAM to keep the cache hot. | 21:13 |
infinity | (Well, not the parallel bit, but the "read once" bit) | 21:14 |
infinity | But yeah, operating on the single read with multiple hashes would be nice. | 21:14 |
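[Editor's note: a rough sketch of the "read each stream once, hash it several ways" idea with python3 and hashlib, as xnox suggests; the algorithm list and function name are placeholders. The per-chunk updates could additionally be farmed out to threads, since hashlib releases the GIL for large buffers, but even this serial version already removes the repeated reads.]

```python
# Hypothetical multihash(): every chunk is read from disk exactly once and
# fed to each digest, so extra algorithms cost CPU rather than extra I/O.
import hashlib

ALGOS = ("md5", "sha1", "sha256", "sha512", "sha3_256")  # placeholder list

def multihash(path, chunk_size=1 << 20):
    digests = {name: hashlib.new(name) for name in ALGOS}
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            for digest in digests.values():
                digest.update(chunk)
    return {name: d.hexdigest() for name, d in digests.items()}
```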
infinity | We could also stop producing some of those hashes. | 21:14 |
xnox | so where is this supposed to happen? on nusakan? launchpad livebuilders? somewhere else? | 21:15
xnox | cdimage code? | 21:15 |
infinity | But I don't want to do that without being able to give clear instructions to users of all OSes on how to validate better hashes. | 21:15 |
tsimonq2 | Grrrrr, we've had a new sbuild in zesty-proposed for a while now, but I think it requires a new dpkg to pass through... | 21:17 |
tsimonq2 | And the only justification for having a new dpkg is to have a new sbuild. | 21:17
tsimonq2 | I don't think an FFe is justified, is it? :/ | 21:17 |
slangasek | xnox, infinity: the bit that's prompting me to look at it right now is entirely in livefs builds, nothing to do with nusakan | 21:17 |
slangasek | but obviously we create checksums there also | 21:18 |
infinity | tsimonq2: Filing an FFe for dpkg would get you nowhere, as I wouldn't let you upload it. :P | 21:18 |
tsimonq2 | infinity: Why not? :P | 21:19 |
tsimonq2 | infinity: And what's a solution to this problem then? | 21:19 |
infinity | tsimonq2: "Not worrying about it" would be a solution. | 21:19 |
infinity | Is the new sbuild somehow necessary? | 21:19 |
* tsimonq2 searches for conversations with lisandro about how the new sbuild fixes this one annoying bug... | 21:20 |
infinity | (I'm looking at some dpkg cherry-picks this week, and perhaps that might allow me to relax the sbuild dependency, but if not, meh?) | 21:20 |
tsimonq2 | infinity: tl;dr: lisandro> but sbuild -sd should not try to install B-D-I | 21:21 |
tsimonq2 | infinity: Basically I was trying to build qtxmlpatterns-opensource-src 5.8.0 for Experimental on sbuild on my Zesty system, but I kept getting a stupid dep loop. | 21:22 |
tsimonq2 | infinity: He told me to install Debian because they have a newer sbuild, I told him no. :P | 21:23 |
infinity | Or don't use sbuild to build your source package. | 21:23 |
tsimonq2 | But it's the only thing I know how to use... :P | 21:23 |
infinity | To build your *source* package? | 21:23 |
infinity | I can count the number of times I needed a clean chroot to build a source package on one hand. | 21:24 |
infinity | (Obviously, I use sbuild to build binaries) | 21:24 |
tsimonq2 | If you can't do anything, then *shrug* but expect a bug report as soon as Zesty+1 opens for development ;) | 21:24 |
tsimonq2 | infinity: Binaries | 21:24 |
wxl | tsimonq2: it's called [y | 21:24 |
infinity | tsimonq2: But "-s" builds a source package. | 21:24 |
tsimonq2 | infinity: Ok so then yeah, I obviously need to talk to lisandro and re-look at this one more time, I can read manpages /o\ | 21:25 |
tsimonq2 | wxl: Hm? | 21:26 |
-queuebot:#ubuntu-release- Unapproved: php7.0 (xenial-proposed/main) [7.0.15-0ubuntu0.16.04.2 => 7.0.15-0ubuntu0.16.04.3] (kubuntu, ubuntu-desktop, ubuntu-server) | 21:26 | |
tsimonq2 | wxl: ^ why is php7.0 in the Kubuntu packageset? :P | 21:27 |
wxl | tsimonq2: the next character in ascii after z is [ | 21:27 |
-queuebot:#ubuntu-release- Unapproved: php7.0 (yakkety-proposed/main) [7.0.15-0ubuntu0.16.10.2 => 7.0.15-0ubuntu0.16.10.3] (kubuntu, ubuntu-desktop, ubuntu-server) | 21:27 | |
wxl | tsimonq2: and regarding php, don't ask me! | 21:27 |
nacc | mdeslaur: --^ fyi uploaded the fix, you'll want to take that into security too, i believe | 21:28 |
mdeslaur | thanks nacc | 21:28 |
wxl | oops i'm wrong. it's { | 21:28 |
tsimonq2 | infinity: Apologies | 21:28 |
tsimonq2 | wxl: Hahahahahahaha | 21:28 |
wxl | [ is the next after *Z* not *z* | 21:28 |
xnox | { | 21:32 |
xnox | ascii is weird | 21:33 |
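[Editor's note: the character arithmetic the two of them are working out, as a one-line demonstration.]

```python
# '{' (123) follows lowercase 'z' (122); '[' (91) follows uppercase 'Z' (90).
print(chr(ord("z") + 1), chr(ord("Z") + 1))  # prints: { [
```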
tsimonq2 | wxl, xnox: Let's write a whole DE in Brainf*** :P | 21:33 |
xnox | brainf*** is too high level; it should run directly on z/VM CMS | 21:34 |
tsimonq2 | xnox: Hah :P | 21:35 |
nacc | jbicha: finally figured out a working setup | 21:44 |
nacc | jbicha: it was a half-baked change by Debian | 21:44 |
-queuebot:#ubuntu-release- Unapproved: cifs-utils (yakkety-proposed/main) [2:6.5-2ubuntu1 => 2:6.5-2ubuntu2] (desktop-core, ubuntu-server) | 22:00 | |
-queuebot:#ubuntu-release- Unapproved: cifs-utils (xenial-proposed/main) [2:6.4-1ubuntu1 => 2:6.4-1ubuntu1.1] (desktop-core, ubuntu-server) | 22:03 | |
bdmurray | Do any SRU team members have an opinion on adding uploader to the pending-sru report? It occurred to me I might not remember what I sponsored / uploaded and could / should verify. | 23:00 |
rbasak | Sounds reasonable. | 23:13 |
apw | bdmurray: sounds useful for poking people to verify | 23:29
RAOF | Trevinho: It would be helpful if you left a comment as to what exactly you tested when marking bugs as verification-done, *particularly* when there are multiple packages being verified on the bug. | 23:40 |
Trevinho | RAOF: well... I've just redone what's in the SRU test case in a clean install + proposed... For all the packages... | 23:51
RAOF | ...and if you'd said that in a comment when you flipped it to verification-done, I wouldn't have pinged you :) | 23:51 |
Trevinho | RAOF: true... But I thought it was quite the norm :-) | 23:52 |
Trevinho | RAOF: but thanks for asking anyway | 23:52 |
RAOF | Hm, no? People usually say what/how they tested when marking verification-done. | 23:53 |
RAOF | And I usually ask if they didn't. :) | 23:53 |