=== freeflying__ is now known as freeflying
=== semiosis_ is now known as semiosis
=== coreycb_ is now known as coreycb
=== balkamos_ is now known as balkamos
=== plars_ is now known as plars
=== michagogo_ is now known as michagogo
=== pitti is now known as Guest63419
=== Guest63419 is now known as pitti
=== pitti is now known as Guest55341
=== Guest55341 is now known as pitti
=== maclin1 is now known as maclin
=== dupondje_ is now known as dupondje
[08:57] Laney: the systemd debs from that PR test fine locally; I can't reproduce the eternal hang. If I start a few tests manually with some changes, could I bother you with killing them again?
[08:58] Laney: the log included in the tarball was just a random interruption; it didn't show the actual tmpfail, so I would turn off the reboot smoke test at first and see how far that gets
[09:04] jamespage, I'm doing a sync of glusterfs
[09:08] pitti: sure
[09:08] can you please double-check the systemd files?
[09:08] I wonder why this isn't reproducible elsewhere though
[09:08] given that it's extremely consistent in scalingstack
[09:11] Laney: ok, thanks; I started with disabling the "upstream" and "boot-smoke" tests, to establish a baseline (https://github.com/systemd/systemd/pull/6392)
[09:12] pitti: you're triggering these manually, right?
[09:12] :)
[09:12] Laney: queues have drained; did lgw01 come back and go through them all, or was some more surgery involved? :-)
[09:12] it came back
[09:12] Laney: yes, I won't add them to webhooks for now, just selected manual tests on a packaging branch
[09:12] they just had to turn it (nova-compute) off and on again
[09:12] haha
[09:12] It'll hopefully be redeployed soonish, so it'll be more reliable.
[09:13] I'm this evening bringing up staging s390x vbuilders on bos02, which is deployed in a slightly more sensible way that will eventually be rolled out everywhere.
[09:13] icehouse is getting a bit old.
[09:26] I've been hearing rumours about that
=== maclin1 is now known as maclin
[10:09] cpaelzer, pleeeeease can you merge multipath-tools? I would like to, but I failed to understand some of the changes
[10:28] Laney: just 6 amd64 tests running in parallel, still cloud woes?
[10:30] pitti: quota; things backed off
[10:38] LocutusOfBorg: hehe
[10:38] LocutusOfBorg: I was waiting on cyphermox, but you know what, if time permits I can prepare a merge and hand it to both of you for review
[10:38] I'm PTO soon, so I'd rather not upload before leaving for a while
[10:39] will see if I can squeeze some of that in this week and let you know, LocutusOfBorg, cyphermox
[10:39] thanks
[10:40] it should be a matter of grab-merging it; the merge failures should be trivial for a person who has done it previously
[10:40] but I don't want to touch it
=== ahasenac` is now known as ahasenack
=== ahasenack is now known as Guest49405
[12:56] LocutusOfBorg: I have the merge complete; it was a bit ugly since some Debian changes conflicted with ours, but I think I was able to resolve them
[12:57] LocutusOfBorg: will build and test now; once that is done I'll throw it at you and cyphermox for review and upload
[13:06] just ping and I'll have a look, lovely thanks!
=== JanC is now known as Guest76357
=== JanC_ is now known as JanC
[13:32] LocutusOfBorg: btw I had already done liburcu rebuilds except for netsniff-ng
[13:33] so liburcu could have migrated now, but it has to wait for the autopkgtests to be redone :|
[13:52] jbicha, sorry, for some reason they were listed in the bad list
[13:59] what bad list?
=== persia__ is now known as persia
=== zyga-ubu1tu is now known as zyga-ubuntu
[14:14] I crafted a list of stuff to rebuild, probably in some wrong way
[14:17] LocutusOfBorg: https://people.canonical.com/~ubuntu-archive/proposed-migration/artful/update_output.txt and update_output_notest.txt provide a sort of tracker, if you can read it
[14:18] the notest one would have done the trick probably, thanks!
[14:18] I still prefer rdepends, but I sometimes fail because I run it against artful; I never found a way to make it look at artful-proposed
[14:18] I think the autopkgtests were good by now though
[14:19] not by this morning :)
[14:20] not any more, but they were good except for netsniff-ng
[14:33] LocutusOfBorg: cyphermox: I set you both as reviewers on the multipath-tools merge; it should be in your inbox from LP
[14:35] I prefer cyphermox, and to wait a bit
[14:35] to let liburcu migrate
[14:36] As I said, I'm soon away for a bit, so time will pass :-)
[14:39] cpaelzer, debian/tests/control gone?
[14:41] from the delta maybe; it should be all in Debian now
[14:42] I upstreamed all we had to Debian so we can drop the delta
[14:42] it is still there, just checked
[14:42] I see it has been upstreamed differently, but ok
[15:01] apt 1.4.7 just entered that apt PPA for zesty (https://launchpad.net/~deity/+archive/ubuntu/apt). I rebuilt the changes file for zesty, but kept the .dsc and .tar.xz unchanged from the Debian stretch update. For the SRU (bug 1702326), we can either pick the update like this, change the distribution info in debian/changelog, or change distro + version to something like 1.4.7+16.10.1.
[15:01] bug 1702326 in apt (Ubuntu Zesty) "New microrelease 1.4.7 for zesty" [Medium,Triaged] https://launchpad.net/bugs/1702326
[15:02] I can also update the SRU bug then to handle any other stuff that might have been forgotten in the 1.4.{1,2,3,4,5,6} SRUs
[15:03] Not sure if anyone else has been sharing a branch between a Debian stable and an Ubuntu stable release yet
[15:04] Optimally, I'd syncpackage it, but (a) Launchpad did not pick it up from stretch-proposed, and (b) no diffs for SRU reviewers :(
[15:04] So currently, it's more of a fake sync
[15:27] cpaelzer: I reviewed multipath-tools; feel free
[15:28] I'll do another run of testing with everything together (i.e. from d-i) once it's in the archive.
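The rdepends complaint above (checking reverse dependencies against artful rather than artful-proposed) can be worked around by scanning a Packages index from the -proposed pocket directly. Below is a rough stdlib-only sketch; the parsing is deliberately simplistic (it ignores alternatives after `|` and arch qualifiers) and the sample data in the usage note is made up, not taken from the archive.

```python
# Sketch: find reverse dependencies of a binary package by scanning the
# text of a dists/<suite>/.../Packages index (e.g. one downloaded from
# artful-proposed).  Stanzas in such an index are separated by blank lines.
import re

def rdepends(packages_index: str, target: str) -> list:
    """Return names of packages whose Depends field mentions `target`."""
    result = []
    for stanza in packages_index.split("\n\n"):
        name = None
        depends = ""
        for line in stanza.splitlines():
            if line.startswith("Package: "):
                name = line[len("Package: "):].strip()
            elif line.startswith("Depends: "):
                depends = line[len("Depends: "):]
        # Depends looks like "libfoo1 (>= 1.2), bar"; only the first
        # alternative of each comma-separated entry is inspected here.
        if name and any(re.match(re.escape(target) + r"(\s|:|$)", d.strip())
                        for d in depends.split(",")):
            result.append(name)
    return sorted(result)
```

Feeding it a decompressed Packages file from an artful-proposed mirror URL would then answer "who in -proposed depends on liburcu6" without chdist gymnastics.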
[15:32] thanks cyphermox
[15:32] I'll address whatever review comes up tmrw morning, and if nothing is blocking I'll upload
[15:33] e.g. a few of nacc's automated nacks don't count on this, but I want to check them still
[15:33] thanks for the offer of the extra round of testing
[15:49] grr. i want to close *all* segv-and-friends bugs in apt.
[15:58] stgraber, kees, infinity, mdeslaur: TB meeting in 2?
=== Guest49405 is now known as ahasenack
[15:58] ack
[15:58] (if stgraber is around to chair, I guess I should have checked with him when I was face-to-face with him 10 minutes ago :P)
[16:25] slangasek: http://paste.ubuntu.com/25119707/
[16:26] slangasek: downloads the build chroot from LP, turns that into a LXD image and creates a container from it. If the image already exists, it just re-uses it (unless -u is passed to force an update). It repacks the build chroot to rename the directory that's in it and add the needed bit of metadata for LXD. Seems to work okay here.
[16:29] cyphermox, so you take care of the multipath upload? thanks
[16:34] juliank: Do you know of anything left to do in Launchpad for bug 1417717? https://translations.launchpad.net/ubuntu/artful/+source/apt/+pots/apt-utils looks fine to me now (especially by comparison to vivid).
[16:34] bug 1417717 in Ubuntu Translations "apt-utils marked for translate, but data unavailable" [High,Confirmed] https://launchpad.net/bugs/1417717
[16:37] stgraber: excellent, thanks
[16:57] cjwatson: I don't know, I don't see when the files were last imported
[16:59] All I see in the queues is "Needs Review"
[16:59] The last time I saw something different (last year?) they failed, IIRC
[17:00] juliank: OK, that's not a Launchpad problem; that's up to the translations admins to garden the import queue if appropriate
[17:01] I just wonder where the failed ones went
[17:01] juliank: And I think the import queue is mostly obj-*/**/*.pot, which is probably boring ...
[17:02] juliank: Do you know if there have in fact been failures in artful?
[17:02] No
[17:02] It was back in xenial days
[17:02] In any case apt does not use the translations yet, AFAIUI, but if they are at least working on the translate side, that's good.
[17:03] Gotta add some langpack hack into apt at some point, so it prefers langpack ones over its own files
[17:03] juliank: The way the imports are apparently per-architecture due to these obj-* directories is pretty suboptimal. Is there a way to avoid that?
[17:04] Well, there are always ways to make that different, but that's the build directory debhelper picks, and I build the pot files in the build directory
[17:04] It would be enough to just try the import on amd64 and ignore the others or something
[17:05] I think that's the same with most packages really, or does anything build arch-specific po(t) files?
[17:05] Most packages ship them in source tarballs
[17:06] Though they do often get updated
[17:06] (usually in place though, not in obj-* directories)
[17:06] Anyway, we can just import them all, I imagine
[17:07] Let me see what I can do
[17:07] Well, we combine and split the files, as we have one domain to translate really (apt-all), but several target domains (apt-utils, apt-doc, etc.)
[17:08] We only ship the apt-all .pot and .po files in the source package, as we want them to be translated as one, as they can share strings
[17:08] There was the idea to just move to one "apt" translation domain and ship them in apt-common or something
[17:09] That's about 3 MB of translations
[17:10] juliank: I'm a bit confused by the separate .sh.pot and .c.pot templates - are those intentional?
[17:10] cjwatson: They are the results of running xgettext against the sh and the C++ files, and are merged into the normal .pot
[17:11] juliank: I think it might make things less confusing if you removed those temporary files ...
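The "langpack hack" floated at 17:03 (prefer catalogs under the langpack directory, fall back to apt's own) can be illustrated with Python's gettext module. apt itself is C++, so this is only a sketch of the lookup order, not the actual implementation; the directories and the "apt" domain name are assumptions for the example.

```python
# Sketch of langpack-first translation lookup, in Python's gettext
# rather than apt's C++.  The paths and domain name are illustrative.
import gettext

LANGPACK_DIR = "/usr/share/locale-langpack"  # hypothetical preferred dir
DEFAULT_DIR = "/usr/share/locale"            # package's own catalogs

def load_translation(domain="apt"):
    # fallback=True returns an identity NullTranslations object when no
    # .mo file is found, so this is safe on systems without catalogs.
    langpack = gettext.translation(domain, LANGPACK_DIR, fallback=True)
    default = gettext.translation(domain, DEFAULT_DIR, fallback=True)
    # Chain them: a msgid missing from the langpack catalog is looked up
    # in the default catalog -- roughly the "per-msgid fallback" idea
    # discussed later in the log.
    langpack.add_fallback(default)
    return langpack

t = load_translation()
# With no matching catalogs installed, gettext() is the identity function.
print(t.gettext("Hello"))
```

GNU gettext's C API has no direct equivalent of `add_fallback`, which matches the observation in the log that you cannot easily "try both" with a single domain; the two-domain (`apt-langpack` then `apt`) scheme is the C-level analogue of this chain.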
[17:12] There's a lot of weird caching and stuff going on in the build system
[17:13] juliank: At least before dh_builddeb runs
[17:13] cjwatson: I guess it's easier to just rename them .c-pot and .sh-pot or something :)
[17:13] That would work too, yes
[17:16] I'll block them for now
[17:20] cjwatson: I'm really thinking about just dropping the split translation domains and stuffing it all into apt-common or something; that reduces .mo space requirements from 5.7 to 2.7 MiB on a system with both apt and apt-utils installed, and fixes all issues with Launchpad imports
[17:21] Then it just gets po/apt.pot and po/*.po :)
[17:25] Forget the 5.7 quote, it's more like half of it (2.8)
[17:28] xnox: If net-update is really disabled everywhere now, I'd prefer dropping the code from apt-key. I was just triaging bugs :)
[17:29] If we don't really want it anymore, let's just kill it :)
[17:29] If anyone here is interested in merging all apt translations into apt-common, the sizes are: just apt: 2374622; with apt-utils: 2930575; combined: 2806311
[17:30] That is +0.5 MB for a tiny chroot without apt-utils, and -0.1 MB for a normal system with both
[17:30] juliank: I've done the requisite clickyclicky now
[17:31] cjwatson: Thanks
[17:34] I guess the code for making langpacks work is to just call bindtextdomain("/usr/share/locale-langpack") under some condition (as in: an apt translation exists in there). Preferably I'd like to try both, but unfortunately it does not seem that gettext allows that
[17:35] And well, do the binding fun in library init functions
[17:35] or in InitConfig or something
[17:39] The alternative would be to have an "apt-langpack" domain that the langpack uses, and then just check apt-langpack first and fall back to the normal domain if that fails
[17:39] (per-msgid fallback)
[17:40] The hacky solution is probably to just decide based on which .mo file is larger
[18:23] pdeee: I'm working on the SRU exception documentation.
Thank you for the docs you had prepared in relation to this. I'm going to document a full standing exception based on that documentation, so we don't need to detail every change on every update. My draft (work in progress) is at https://wiki.ubuntu.com/StableReleaseUpdates/Certbot. Can you suggest anything specific that we could use as a
[18:23] process to make sure that a proposed update (once built and available to users) is good, which we can ensure is done before pushing the update out to users?
[18:23] bmw: ^
[18:45] cjwatson: Re bug 1123272 - I think we could potentially improve apt download speed by 50% by switching from MD5+SHA1+SHA256 to MD5+SHA512 (as SHA512 is 50% faster than SHA256, and we drop one hash, though MD5 would be nicer to drop)
[18:45] bug 1123272 in apt (Ubuntu) "high cpu usage on download (not optimised SHA256, SHA512, SHA1)" [Wishlist,Confirmed] https://launchpad.net/bugs/1123272
[18:46] Because apt validates them all, that seems to be a bottleneck for high-bandwidth, slow-CPU users
[18:47] things might look different on 32-bit systems, no idea
[19:47] grr, people in 1123272 are annoying
[19:47] in bug 1123272
[19:47] "APT downloads have CPU bottleneck, everything is fine with CURL"
[19:47] bug 1123272 in apt (Ubuntu) "high cpu usage on download (not optimised SHA256, SHA512, SHA1)" [Wishlist,Confirmed] https://launchpad.net/bugs/1123272
[19:47] lol
[19:47] Seriously, we run 3 hashes on the input, and curl does basically nothing.
[19:47] Will things improve if we run verification and download in parallel?
[19:52] juliank: Pretty sure the slowdown I hit might be re-compressing.
[19:52] *might be*, I don't know.
[19:52] Unit193: Well, that only happens with pdiffs or Acquire::GzipIndexes
[19:53] and achieves about 200 MB/s or so
[19:53] or was it 1 GB/s?
[19:53] On your hardware*
[19:53] ehm, 2 GB/s
[19:54] Yes, but it's a 5-year-old midrange laptop CPU
[20:37] juliank: Or we could only check the strongest.
There's not much point in checking multiple hashes
[20:54] cjwatson: Well, yeah, mostly; they are all MD constructions anyway
[20:55] But if we add something like SHA3 or BLAKE2b, we probably do want to run all available ones (one per family)
[20:55] you'll never please security nuts
[20:56] We can at least make checking of untrusted hashes optional
[20:56] yeah, no point in burning CPU time on MD5
[20:57] sarnold: Teeechnically, since MD5 and SHA1 are broken, if you check both then it's "safe enough" :>
[20:57] Well, SHA1 is also marked untrusted
[20:57] But then it also checks file size via the same mechanism
[20:57] Unit193: yeah, I haven't seen any polyglot SHA1-and-MD5s. yet.
[20:57] So you need to factor in cost, or special-case file size
[20:58] Optimally, right now, we'd use SHA-512 on 64-bit, and SHA-256 on 32-bit platforms
[20:59] As SHA-512 is about 50% faster than SHA-256 on 64-bit, or even more
[21:00] sarnold: BTW, seen anything in the security team about irssi or gnome-exe-thumbnailer?
[21:00] Unit193: I've seen the gnome-exe-thumbnailer bug, dunno if it's had any progress yet
[21:01] juliank: yeah; or maybe blake2*mumble* everywhere?
[21:01] sarnold: It was just uploaded to Debian, and has CVE-2017-11421 now. I poked md a few days ago about irssi, as I'd already done the merge.
[21:01] gnome-exe-thumbnailer before 0.9.5 is prone to a VBScript injection when generating thumbnails for MSI files, aka the "Bad Taste" issue. There is a local attack if the victim uses the GNOME Files file manager, and navigates to a directory containing a .msi file with VBScript code in its filename. (http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-11421)
[21:02] sarnold: Well, I'd like to couple BLAKE2 with a SHA-2, to have something very well researched in there
[21:02] And BLAKE2b is fast on 64-bit only; on 32-bit you'd want BLAKE2s
[21:02] hi, could someone please accept my nominations for bug #1698758 ?
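The relative hash speeds argued over here (SHA-512 vs SHA-256 on 64-bit, BLAKE2b vs BLAKE2s) are easy to sanity-check with a small benchmark. This is a stdlib-only sketch, not apt's code; buffer size and repeat count are arbitrary, and absolute numbers depend entirely on the CPU, so none are claimed here. On most 64-bit machines SHA-512 and BLAKE2b should indeed come out ahead of SHA-256, matching the discussion.

```python
# Quick hash-throughput comparison using hashlib (which calls into
# OpenSSL for the SHA family).  Results vary by CPU; nothing is asserted
# about relative ordering.
import hashlib
import time

def throughput(algo, data, repeats=4):
    """Return MB/s achieved hashing `data` `repeats` times with `algo`."""
    start = time.perf_counter()
    for _ in range(repeats):
        hashlib.new(algo, data).digest()
    elapsed = time.perf_counter() - start
    return len(data) * repeats / elapsed / 1e6

data = b"\x00" * (8 * 1024 * 1024)  # 8 MiB buffer
for algo in ("md5", "sha1", "sha256", "sha512", "blake2b", "blake2s"):
    print("%-8s %8.1f MB/s" % (algo, throughput(algo, data)))
```

The same harness answers the "four hashes, check the fastest two per platform" question empirically: sum the per-algorithm times for (SHA512, BLAKE2b) vs (SHA256, BLAKE2s) on the target machine.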
[21:02] bug 1698758 in libapache2-mod-auth-pgsql (Debian) "Encrypted password causes segmentation fault" [Unknown,New] https://launchpad.net/bugs/1698758
[21:03] So ship SHA256+SHA512+B2s+B2b?
[21:03] juliank: oh, there's an idea that never occurred to me
[21:03] four hashes, but check the fastest two on each platform? heh
[21:03] and then pick (SHA512, B2b) or (SHA256, B2s) depending on whether 64- or 32-bit
[21:04] Unfortunate that b2sum only supports one method.
[21:05] Yep
[21:05] The coreutils variant was stripped down
[21:05] The initial one supports all variants
[21:07] B2b + SHA512 should run in like 5.3 seconds or so here
[21:07] for 1 GB
[21:08] Like 20% slower than SHA256 on its own
[21:22] juliank: What, other than b2sum and OpenSSL 1.1, does BLAKE/Keccak?
[21:22] sarnold: a polyglot SHA1/MD5 was demonstrated something like ten years ago
[21:23] more, I think
[21:23] sarnold: it's about six bits harder than SHA1 alone
[21:23] cjwatson: sorry, I meant _collided_ SHA-1/MD5 polyglots..
[21:26] sarnold: that's the Joux multicollisions result, surely?
[21:27] cjwatson: let me go read! :D
[21:27] sarnold: I admit I don't know if there've been *specific* collisions demonstrated, only the theory
[21:28] Unit193: librsync, the argon2 algorithm, the Noise protocol, amongst others, use BLAKE2
[21:28] See https://blake2.net/#us
[21:28] juliank: Sorry, I was referring to frontend commands.
[21:28] Oh
[21:28] Figured you might know offhand.
[21:29] I don't think there are any other tools
[21:30] SHA3 is done by nettle-hash in nettle-bin
[21:31] and also by rhash, but that's incredibly slow IIRC
[21:31] I guess we'll add a helper to apt shortly
[21:31] Like /usr/lib/apt/apt-helper hash-file or something
[21:32] It's really not a lot of code, and quite useful :)
[22:02] cjwatson, infinity: I'm looking at the devscripts delta, and I wonder whether we really need it.
If 'needs-recommends' doesn't pull in the recommends, that's an autopkgtest bug, not something that should be done in devscripts.
[22:02] also, it looks rather odd.
[22:02] I'd personally say to just sync it now, and see how it goes with the upcoming Perl transition
[22:12] mapreri: I disagree that it's an autopkgtest bug that it doesn't pull in recommends.
[22:13] mapreri: But you can sync it if you promise to be responsible for cleaning up any resulting breakage in the next Perl transition, I guess (and if the debhelper dependency is no longer needed either)
[22:13] well, if I write 'Restrictions: needs-recommends' it means I really want the recommended packages installed, doesn't it?
[22:14] cjwatson: I was about to add those packages in devscripts.git in Debian, but then realized that to me they look mostly like noise/cruft...
[22:15] mapreri: I'm not sure it necessarily means that they should be treated as hard dependencies for the purposes of the dependency resolver; I don't remember the details, I just remember it being difficult to reason about
[22:15] mapreri: Mostly I don't want us to have to disentangle the whole mess again, so perhaps do me a favour and don't go through the whole Perl stack removing all of the workarounds of that form in one go?
[22:16] I have no intention (nor the time!) to go through all of Perl :)
[22:16] was just interested in devscripts
[22:17] cjwatson: I'll just sync devscripts, and keep an eye on it after the Perl transition starts :)
[22:17] cjwatson: do you know if Ubuntu plans to do it at the same time as Debian, or when?
[22:39] i'm not sure needs-recommends was ever implemented properly
[22:40] mapreri: don't know
[22:41] I assume it depends on timing with respect to freezes, as usual
[23:00] rbasak: what kind of testing did you have in mind, and what limitations are there on the environment the tests can be run in?
[23:05] mwhudson: is cloud-init now the only thing blocking 3.6 as the default?
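For reference, the `needs-recommends` restriction debated above (22:02–22:39) is declared in a package's `debian/tests/control`; a minimal stanza looks roughly like this (the test name is made up, and `@` expands to the package's own binary packages):

```
Tests: upstream-tests
Depends: @
Restrictions: needs-recommends
```

Whether the test bed then installs Recommends as hard dependencies or best-effort is exactly the ambiguity the discussion turns on.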
[23:05] tumbleweed: debconf ping
[23:05] doko: yeah, i think so
[23:06] doko: i mean there's a bunch of failures in universe still, but most of those are in the archive too
[23:06] whut https://launchpadlibrarian.net/329584191/buildlog_ubuntu-artful-amd64.pylint_1.7.2-0ubuntu1_BUILDING.txt.gz
[23:07] mwhudson: as a workaround, would it be possible to use 3.5 explicitly?
[23:07] pretty sure that built locally
[23:07] oh well
[23:07] doko: the patch to cloud-init is known, and i think smoser thinks the last upload included it
[23:07] I mean, that would mean we have to keep 3.5 as supported, but it would be a way forward
[23:07] so as a cloud-init upload is needed, i'd rather see if we can use the right fix...
[23:08] afk for 5-10, sorry
[23:20] bmw: integration/functional test level, for the entire set of proposed certbot updates for a particular Ubuntu release. I'm interested in catching things that are introduced after your tests for release, so things like packaging issues and differences in dependency versions and so on.
[23:21] bmw: I'd like whoever is driving the SRU (hopefully you or someone under your direction) to be able to do this, so any environment available to you really. Both manual and automatic tests are fine with me.
[23:21] bmw: the idea is that we will have built binaries for the proposed SRU. These will be opt-in for all users interested in testing.
[23:23] bmw: once verified by whatever process we can try to define now, the Ubuntu SRU team will release the packages (exactly the same build) to "automatic updates", rather than opt-in. Anyone with certbot packages installed who grabs "all" updates (apt upgrade, etc.) will get them.
[23:26] oh, that build passed locally because it installed the package from PyPI :(
[23:31] rbasak: got it
[23:32] probably the best thing to run for that is our integration tests against a version of boulder (Let's Encrypt) running locally
[23:33] we have instructions for how to do this here: https://certbot.eff.org/docs/contributing.html#integration-testing-with-the-boulder-ca
[23:34] but they'll differ a bit when using it with the built packages
[23:34] it's also worth noting that our integration tests aren't included in the built packages, so the instructions for how to get and run them will be a bit different
[23:34] I'm happy to write up something describing how to do this
[23:52] bmw: sounds good! If we can define these tests, would you be able to arrange to get them performed per proposed SRU? So for the 0.14.2 SRU, that'll be three times: for 16.04, 16.10 and 17.04.
[23:52] (16.10 will be EOL soon)
[23:53] bmw: if you could do that please, then we can put the test definition in the formal policy document for Ubuntu SRUs, and then use it as part of the release process.
[23:53] bmw: this is all that's left to define really. Apart from that, we're ready :)
[23:54] that's exciting :)
[23:54] I'll write up how to do it and run them for the three versions of Ubuntu tomorrow
[23:55] Thanks!
[23:56] bmw: no need to run them yet though. Let's define the process first. For the SRU we'll need them run against the proposed built packages, and we don't have those yet.
[23:56] oh, that's my misunderstanding
[23:57] I thought those were done and reviewed
[23:57] The code changes are ready and reviewed.
[23:57] And I've built locally and in a PPA etc.
[23:57] But the build for publication to the Ubuntu archive is done in a separate environment. I can only push source to that, not binaries.
[23:58] ah, okay
[23:58] I'll probably need help in my instructions for how to install packages from there, but I'll write up the rest
[23:59] bmw: https://wiki.ubuntu.com/Testing/EnableProposed - this might be a little out of date. The CLI method should still work.
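The CLI method referenced above amounts to enabling the -proposed pocket with a low pin priority so that packages are pulled from it only on request. A sketch for 16.04 (xenial used purely as an example release; check the wiki page for the current recommendation):

```
# /etc/apt/sources.list.d/xenial-proposed.list
deb http://archive.ubuntu.com/ubuntu/ xenial-proposed restricted main multiverse universe

# /etc/apt/preferences.d/proposed-updates
# Priority 400 keeps -proposed opt-in: nothing upgrades from it
# automatically, but it can be selected explicitly per package.
Package: *
Pin: release a=xenial-proposed
Pin-Priority: 400
```

After `apt-get update`, the package under test can then be installed with `apt-get install certbot/xenial-proposed` (or `apt-get install -t xenial-proposed certbot`) without dragging the rest of -proposed onto the machine.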