[00:01] xnox: BRB, moving to France. [00:02] vorlon, ruby2.5 respun; will deal with perl after sleeping. [05:22] -queuebot:#ubuntu-release- New binary: pmdk-convert [amd64] (eoan-proposed/none) [1.5.1-1] (no packageset) [05:29] -queuebot:#ubuntu-release- New binary: pmdk-convert [arm64] (eoan-proposed/none) [1.5.1-1] (no packageset) [07:01] hello, question: freshplayerplugin can be built everywhere, but it Recommends adobe-flashplugin that is an amd64/i386 only package. Can it be built everywhere or should be restricted too? now it is restricted, but the debian package isn't... [07:02] (mostly all the delta of the package can be dropped because of "new upstream release" and "debian merged most changes") [07:20] autopkgtest [05:46:36]: ERROR: testbed failure: sent `copyup /tmp/autopkgtest.M6jL27/build.qOq/src/ /tmp/autopkgtest-work.umz0lxj2/out/tests-tree/', got `timeout', expected `ok...' [07:20] that doesn't look so great in the ppc64el logs [07:20] but why do we need to copy these trees back to the dispatcher anyway? [07:26] .. in order to parse the debian/tests/control from the built tree, sigh [09:11] -queuebot:#ubuntu-release- New source: openjdk-8 (eoan-proposed/primary) [8u212-b01-1ubuntu1] [09:12] -queuebot:#ubuntu-release- New: accepted openjdk-8 [source] (eoan-proposed) [8u212-b01-1ubuntu1] [09:21] removing openjdk-8 wasn't a bright idea, when the Debian removal only requested removal of some binaries [09:24] the Debian removal *requested* that, but process-removals works off what was *actually* removed [09:25] (plus human intervention, but it can be easy to miss) [09:25] can the binaries be restored in eoan? [09:26] or else it's rebootstrapping [09:26] maybe if you'd asked before reuploading [09:26] I can remove that one [09:26] how hard is rebootstrapping? [09:27] if it's not super-difficult it might be easier [09:27] some ppa copies from older releases, then copying a clean binary build to eoan [09:27] ok [09:28] mind you I suppose the worst case of remove and copy is that we might burn the -1ubuntu1 version number [09:28] perhaps we're better off doing that [09:28] no problem to use a new version [09:29] ok, so I suggest removing the version just uploaded to eoan, then copy-package --from ubuntu --from-suite disco --to ubuntu --to-suite eoan-proposed -e 8u212-b01-1 --include-binaries openjdk-8 [09:30] so no ppa bootstrap? [09:30] nope [09:30] ok [09:30] assuming there are no other sources that go with this [09:45] -queuebot:#ubuntu-release- New binary: lvm2 [amd64] (eoan-proposed/main) [2.03.02-2ubuntu1] (core) [09:45] -queuebot:#ubuntu-release- New binary: lvm2 [s390x] (eoan-proposed/main) [2.03.02-2ubuntu1] (core) [09:46] -queuebot:#ubuntu-release- New binary: lvm2 [ppc64el] (eoan-proposed/main) [2.03.02-2ubuntu1] (core) [09:47] -queuebot:#ubuntu-release- New binary: lvm2 [arm64] (eoan-proposed/main) [2.03.02-2ubuntu1] (core) [09:47] -queuebot:#ubuntu-release- New binary: lvm2 [i386] (eoan-proposed/main) [2.03.02-2ubuntu1] (core) [09:47] -queuebot:#ubuntu-release- New binary: lvm2 [armhf] (eoan-proposed/main) [2.03.02-2ubuntu1] (core) [09:51] rbasak, could you please approve console-setup srus? it fixes LP: #520546 that got quite hot recently [09:51] Launchpad bug 520546 in console-setup (Ubuntu Disco) "Alt+KEY incorrectly behaves like Ctrl+Alt+KEY, and/or unwanted VT switch from Alt+Left/Right" [High,In progress] https://launchpad.net/bugs/520546 [09:58] rbalint: just console-setup in Disco? Anywhere else? 
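(Context for the openjdk-8 rescue discussed above: the copy-package invocation suggested at 09:29 comes from ubuntu-archive-tools and is, as far as I recall, a thin wrapper around Launchpad's Archive.copyPackage API. The sketch below shows roughly what that invocation amounts to; it is illustrative only — the consumer name is made up, parameter names are from memory, and the real script also handles lookups, confirmation prompts and error reporting.)

```python
# Rough launchpadlib sketch of the 09:29 copy-package invocation; names and
# parameters are best-effort recollections of the Archive.copyPackage API,
# not a verbatim reproduction of ubuntu-archive-tools.
from launchpadlib.launchpad import Launchpad

lp = Launchpad.login_with('openjdk-8-rescue', 'production', version='devel')  # consumer name is hypothetical
ubuntu = lp.distributions['ubuntu']
archive = ubuntu.main_archive

# Copy openjdk-8 8u212-b01-1 from disco into eoan-proposed with its binaries,
# restoring the existing builds instead of rebootstrapping.
archive.copyPackage(
    source_name='openjdk-8',
    version='8u212-b01-1',
    from_archive=archive,
    from_series='disco',
    to_series='eoan',
    to_pocket='Proposed',
    include_binaries=True,
)
```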
[10:01] -queuebot:#ubuntu-release- New binary: linux-signed-hwe-edge [ppc64el] (bionic-proposed/main) [5.0.0-13.14~18.04.2] (kernel) [10:02] -queuebot:#ubuntu-release- New sync: openjdk-8 (eoan-proposed/primary) [8u212-b01-1] [10:03] -queuebot:#ubuntu-release- New binary: linux-signed-hwe-edge [amd64] (bionic-proposed/main) [5.0.0-13.14~18.04.2] (kernel) [10:06] -queuebot:#ubuntu-release- New: accepted openjdk-8 [sync] (eoan-proposed) [8u212-b01-1] [10:26] rbasak during your sru vanguard shift today, can you plz approve the qemu upload to xenial [10:27] -queuebot:#ubuntu-release- Unapproved: blender (bionic-proposed/universe) [2.79.b+dfsg0-1 => 2.79.b+dfsg0-1ubuntu1.18.04.1] (ubuntustudio) [10:27] -queuebot:#ubuntu-release- Unapproved: blender (disco-proposed/universe) [2.79.b+dfsg0-6build1 => 2.79.b+dfsg0-6ubuntu1.19.04.1] (ubuntustudio) [10:27] -queuebot:#ubuntu-release- Unapproved: blender (cosmic-proposed/universe) [2.79.b+dfsg0-4 => 2.79.b+dfsg0-4ubuntu1.18.10.1] (ubuntustudio) [10:30] sil2100, rbasak: There's still an update-notifier upload in trusty's unapproved queue from last monday with the final fixes for the (somewhat trusty-specific) ESM messaging. I'd like to at least get this thing completed before the esm transition happens. [10:30] The changes compared to the one we have in proposed are minimal and covered by tests [10:45] rbasak, no, down to bionic and i'm testing xenial right now [10:53] rbalint: am I right in thinking that the workaround/fix is to reboot (or restart X), and that the SRU will only stop it breaking but not unbreak it? [10:53] bdmurray when you get in, your patches for lp #1794292 appear to have caused the reports to errors.ubuntu.com and stopped plymouth phasing, can you look at that and restart the phasing if you think it's ok [10:53] Launchpad bug 1794292 in plymouth (Ubuntu Cosmic) "plymouthd crashed with SIGSEGV in /sbin/plymouthd:11 in ply_renderer_set_handler_for_input_source -> ply_keyboard_stop_watching_for_renderer_input -> ply_keyboard_stop_watching_for_input -> ply_device_manager_deactivate_keyboards -> on_deactivate" [High,Fix released] https://launchpad.net/bugs/1794292 [10:53] rbasak, it won't unbreak it, but prevents further pkg updates breaking it again [10:53] rbalint: if so I might adjust the bug to stop a load of affected people "failing" SRU verification because the update didn't fix it. [10:54] OK, thanks. [10:54] rbasak, i did not think about this potential confusion, thanks! [10:59] -queuebot:#ubuntu-release- Unapproved: accepted console-setup [source] (disco-proposed) [1.178ubuntu12.1] [10:59] -queuebot:#ubuntu-release- Unapproved: accepted console-setup [source] (cosmic-proposed) [1.178ubuntu9.2] [11:00] -queuebot:#ubuntu-release- Unapproved: accepted console-setup [source] (bionic-proposed) [1.178ubuntu2.9] [11:01] rbalint: accepted d/c/b; ping me please when you have tested and uploaded x? [11:06] rbalint: are all the bug tasks for the other source packages Invalid now? [11:10] rbasak, thanks! i believe others are invalid, except for kbd [11:12] rbasak, i'm marking them [11:23] Thanks! [12:45] -queuebot:#ubuntu-release- Unapproved: openvpn (cosmic-proposed/main) [2.4.6-1ubuntu2 => 2.4.6-1ubuntu2.1] (ubuntu-desktop, ubuntu-server) [12:56] Laney: hey! britney's running on snakefruit, right? You have access there? 
[12:56] sil2100: tak [12:57] Laney: smb noticed that there seem to be no excuses updates for bionic and xenial since last week [12:57] http://people.canonical.com/~ubuntu-archive/proposed-migration/bionic/ <- generated on the 16th [12:57] (same for xenial) [12:58] how intriguing [12:58] * sil2100 looks for logs [12:58] https://people.canonical.com/~ubuntu-archive/proposed-migration/log/bionic/2019-04-24/ [12:58] http://people.canonical.com/~ubuntu-archive/proposed-migration/log/bionic/2019-04-24/12:57:12.log [12:58] Ouch [12:59] -queuebot:#ubuntu-release- Unapproved: openvpn (bionic-proposed/main) [2.4.4-2ubuntu1.1 => 2.4.4-2ubuntu1.2] (ubuntu-server) [12:59] First time I see britney tracebacking ;) [12:59] vot does it mean [13:00] she's not amused [13:00] Laney: ok, I can look into this in detail after our sprint-load meeting, so if you're busy - don't worry about it [13:00] * sil2100 should have checked the logs first, thought that maybe it was just not running at all on snakefruit [13:00] I'm blaming https://bazaar.launchpad.net/~ubuntu-release/britney/britney1-ubuntu/revision/314 [13:01] * sil2100 shrugz [13:01] britney1 [13:01] * sil2100 knows nothing of britney1 so he slowly backs off [13:02] ;) [13:07] infinity: ^- what was that commit about? [13:07] Laney: Made disco happy. I'm puzzling over why it made the others grumpy. :/ [13:08] Guessing you can't have a FauxPackage when there's a nonFauxPackage [13:08] (also, why isn't FauxPackages per suite?) [13:08] Might want per series fauxies [13:14] It's a bit scary, since this means we had no bionic and xenial autopkgtests run for any of the SRUs in the last week [13:15] That's only scary if the people processing those SRUs ignored that tests were in progress. [13:15] The good news is that today's the '7th day' from the last run, so yeah [13:15] Or, rather, ignored that the packages weren't in excuses at all. [13:16] infinity: the pending-sru report doesn't mention running tests I think, and I doubt anyone from the SRU team looks at the excuses page at all - we rely on the pending report listing all the needed info [13:16] Which is why no one noticed it up until today, ugh! [13:17] Ahh well, it's not the end of the world. [13:17] Oh well, probably we need to improve the process to at least check excuses, and maybe add some feedback to the report [13:17] But as said, anyway, luckily only now the aging period from last run has passed so yeah ;) [13:17] I'll just comment that out for now, and I think I can fix fauxpackages to only create an entry if the package doesn't exist. [13:17] So we don't get two [13:17] infinity: ok, thanks o/ [13:20] Cowboyed for now. Will think harder about it tomorrow. [13:20] I think my proposed fix should be vaguely trivialish. [13:21] rbasak sil2100 if either of you could approve the qemu upload for xenial, that would be super duper awesome [13:26] sil2100: do you have an opinion on bug 1804513 please? I'm not sure if the major version bump as documented in the bug is acceptable as-is, because it doesn't seem to look at any behaviour breaks for existing users. Do you read it differently? [13:26] bug 1804513 in mixxx (Ubuntu Cosmic) "Cosmic: Mixxx 2.1.3 is not stable with Qt5" [Critical,In progress] https://launchpad.net/bugs/1804513 [13:27] ddstreet: any reason this needs to be prioritised over old things in the queue please?
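(The fix infinity sketches at 13:17 — only creating a FauxPackages entry when no real package of that name exists in the suite — could look roughly like the following. This is a hypothetical illustration, not britney1's actual code; the function and data-structure names are invented.)

```python
# Hypothetical sketch of the 13:17 idea, not britney1's real code: only
# synthesize a faux package entry when the suite doesn't already carry a
# real binary of that name, otherwise the real and faux entries collide
# and britney falls over (the traceback seen in the logs above).
def merge_faux_packages(faux_entries, binaries):
    """binaries: dict of binary name -> stanza parsed from the suite's
    Packages files; faux_entries: name -> synthetic stanza from the
    FauxPackages configuration (ideally per-series, as suggested above)."""
    for name, stanza in faux_entries.items():
        if name in binaries:
            # A real package exists in this suite (e.g. in bionic/xenial but
            # not in disco), so skip the faux entry instead of duplicating it.
            continue
        binaries[name] = stanza
    return binaries
```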
[13:27] rbasak it works around a qemu crash during normal operation, in use by a customer via the trusty-mitaka UCA repo [13:28] so with trusty going ESM tomorrow, we'd like to get it into X asap so it can make it into trusty-mitaka asap as well; there is a lot of uncertainty around the details of trusty-mitaka in ESM [13:28] waiting in the sru upload queue doesn't help the uncertainty [13:34] Binary files /tmp/tmpzQ26Bn/QfP_vgKhtx/update-notifier-0.154.1ubuntu4/data/__pycache__/package_data_downloader.cpython-37.pyc and /tmp/tmpzQ26Bn/cHQOFmTiDK/update-notifier-0.154.1ubuntu5/data/__pycache__/package_data_downloader.cpython-37.pyc differ [13:34] juliank: ^ should that be in the source package at all? [13:35] tests/__pycache__/apt_check.cpython-37.pyc also [13:35] rbasak: not really, which is why it went away, and hence differs [13:35] Oh. OK :) [13:36] diff stupid [13:36] * rbasak doesn't usually use the Launchpad diffs, but git-ubuntu is broken :-/ [13:36] It should show "file deleted" imo [13:42] -queuebot:#ubuntu-release- Unapproved: accepted update-notifier [source] (trusty-proposed) [0.154.1ubuntu5] [13:56] ddstreet: thank you for the well written up bug - that saved me from asking a bunch of questions :) [13:56] ddstreet: have you had a code review from anyone on your patch? [13:57] infinity, hm, I am not sure that cowboy is too successful [13:57] cpaelzer has https://code.launchpad.net/~ddstreet/ubuntu/+source/qemu/+git/qemu/+merge/366392 [13:57] Perfect! [13:58] Sorry I missed the link from the bug [13:59] smb: https://people.canonical.com/~ubuntu-archive/proposed-migration/log/bionic/2019-04-24/13:56:21.log <- looks like it's working [14:00] Laney, ok, when I looked the last log was the one before [14:00] -queuebot:#ubuntu-release- Unapproved: accepted qemu [source] (xenial-proposed) [1:2.5+dfsg-5ubuntu10.37] [14:11] rbasak: let me take a look [14:12] ddstreet: I see Robie is already reviewing it, but in case he's done with his shift, I can pick it up then ;) [14:12] I accepted qemu/xenial ^ [14:13] sil2100: consider me done with my shift now please, if you want to take any. I'd like to get git-ubuntu fixed. It makes my SRU shifts easier when it's working :) [14:27] \o/ [14:27] rbasak: thanks for all the reviews today! [14:38] -queuebot:#ubuntu-release- Unapproved: freelan (disco-proposed/universe) [2.0-8ubuntu2 => 2.0-8ubuntu3] (no packageset) [14:49] sil2100, please reject freelan above..... wrong suite, it's meant for eoan [14:50] ACK [14:51] Done [14:52] -queuebot:#ubuntu-release- Unapproved: rejected freelan [source] (disco-proposed) [2.0-8ubuntu3] [15:14] doko: should we get openjdk-8 back into disco? I am preparing the security update, so this would be the right time to do it [15:15] -queuebot:#ubuntu-release- Unapproved: python-glance-store (cosmic-proposed/main) [0.26.1-0ubuntu2 => 0.26.1-0ubuntu2.1] (openstack, ubuntu-server) [15:16] -queuebot:#ubuntu-release- Unapproved: python-glance-store (disco-proposed/main) [0.28.0-0ubuntu1 => 0.28.0-0ubuntu1.1] (openstack, ubuntu-server) [15:19] sil2100: do you have a moment? [15:29] -queuebot:#ubuntu-release- Unapproved: knockd (disco-proposed/universe) [0.7-1ubuntu2 => 0.7-1ubuntu2.1] (no packageset) [15:32] -queuebot:#ubuntu-release- Unapproved: knockd (cosmic-proposed/universe) [0.7-1ubuntu1.18.10.1 => 0.7-1ubuntu1.18.10.2] (no packageset) [15:32] bdmurray: what's up? 
[15:32] Laney: don't know if you saw my late-night blathering, but in digging into the ppc64el queue I noticed two things; 1) data transfer between the ppc64el runners and the autopkgtest runners looks like it might be slower than for other archs (why?), and 2) this is noticeable because we are transferring multiple GB of built source trees from runner to host just to inspect the final debian/tests/ [15:32] directory. Do you know any reason we shouldn't make runner/autopkgtest:process_actions() pull down *just* the debian/tests directory? [15:34] sil2100: the phased update of plymouth has stopped because of some crashes and I believe they are legitimate but it crashes a lot less now than it used to... [15:34] vorlon: no idea why it would be slower than anything else in the same region [15:34] as for (2), not offhand, but please discuss with elbrus [15:35] -queuebot:#ubuntu-release- Unapproved: knockd (bionic-proposed/universe) [0.7-1ubuntu1.18.04.1 => 0.7-1ubuntu1.18.04.2] (no packageset) [15:36] sil2100: the crash we are fixing happens 1k times per day while the new crashes happen a couple of times per day (but to be fair the new plymouth is only at 10% phasing). But it's also fixed in disco with similarly low counts. [15:37] sil2100: so I'm inclined to let it phase [15:46] s/let/make/ [15:48] bdmurray: if it's the same crash, I'd say let's make it moving [15:54] sil2100: they are new crashes https://errors.ubuntu.com/problem/bec9336475f855ea20928e187ea65b281531514a, https://errors.ubuntu.com/problem/197cb74e56a0583e15b57286f3dae38236225acf, https://errors.ubuntu.com/problem/3106debb8d0e370fe4f44e3c76ed5ca230ea0eb7 [15:58] vorlon: still working on the python things or can I borrow you for a NEW review again? [15:58] s/python/other SRU/ [16:00] or maybe even sil2100? if either of you have a few minutes that is. [16:05] teward: eek, would love to, but I still have a few things to finish before my EOD sadly :< [16:06] teward: can you poke me tomorrow in case Steve or others have no time? [16:07] sil2100: yep, sure. I poked vorlon yesterday but vorlon was inundated with the OpenSSL SRU and related SRUs :p [16:08] I'm downloading now for inspection [16:09] bdmurray: hm, those look similar in taste to the ones fixed by the SRU, at least that's the feeling I get [16:12] sil2100: so where does that leave us? [16:13] ah cool thanks vorlon [16:21] Laney: ok I found the answer to my question, we need to copy the entire tree back down because we launch a separate testbed for the tests from the one used for the package build in the Restrictions: build-needed case [16:22] bdmurray: I'd say let's override it - it feels to me like an improvement from the previous state, and I don't see new manual bug reports being filed for issues like this [16:23] bdmurray: let's let it phase and monitor that the numbers don't grow to worrying levels [16:23] -queuebot:#ubuntu-release- New: accepted nginx [amd64] (eoan-proposed) [1.15.12-0ubuntu1] [16:23] -queuebot:#ubuntu-release- New: accepted nginx [armhf] (eoan-proposed) [1.15.12-0ubuntu1] [16:23] -queuebot:#ubuntu-release- New: accepted nginx [ppc64el] (eoan-proposed) [1.15.12-0ubuntu1] [16:23] -queuebot:#ubuntu-release- New: accepted nginx [arm64] (eoan-proposed) [1.15.12-0ubuntu1] [16:23] -queuebot:#ubuntu-release- New: accepted nginx [s390x] (eoan-proposed) [1.15.12-0ubuntu1] [16:23] -queuebot:#ubuntu-release- New: accepted nginx [i386] (eoan-proposed) [1.15.12-0ubuntu1] [16:24] vorlon: ah right, but this happens in other cases where it might not be needed too?
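(A sketch of the optimization vorlon floats at 15:32 — copying back only debian/tests/ instead of the whole built tree — might look like the following. Helper names are hypothetical and this is not autopkgtest's actual code; as vorlon notes at 16:21, it would not help the Restrictions: build-needed case, where the whole tree has to come back anyway because the tests run on a separate, freshly launched testbed.)

```python
import os

# Hypothetical sketch of the 15:32 idea: pull back only debian/tests/ so that
# debian/tests/control can be parsed, instead of streaming the multi-GB built
# tree off the testbed.  testbed.copyup() stands in for the copyup command of
# the testbed protocol seen in the 07:20 error; the real code paths in
# runner/autopkgtest differ.
def fetch_tests_control(testbed, built_tree, workdir):
    local_tests = os.path.join(workdir, 'tests-tree', 'debian', 'tests')
    os.makedirs(local_tests, exist_ok=True)
    # A few kilobytes instead of gigabytes - but only safe when the same
    # testbed will also run the tests (i.e. not the build-needed case).
    testbed.copyup(os.path.join(built_tree, 'debian', 'tests') + '/',
                   local_tests + '/')
    return os.path.join(local_tests, 'control')
```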
[16:24] OTOH the most problematic tests (KDE) are all build-needed [16:24] Laney: no, the cases where I'm currently seeing the timeouts are exactly the KDE packages [16:24] if build is not needed, it doesn't need to stream the tree around an extra time [16:24] vorlon: i'm assuming you did the acceptance on the new NGINX plugin. Thank you kindly! [16:25] and if it wasn't you, thanks to whoever did. [16:25] anyway, going to do some rough network benchmarking now [16:25] teward: yeah that was me, y/w [16:25] if there's a network problem it'd be good to bring that up with IS folks [16:25] yep [16:26] there are multiple copies of openvswitch/neutron/... - it's possible those getting swamped could cause problems like this [16:26] and probably different physical hardware too [16:27] but also, general :( at KDE being in the middle of a problem again [16:50] sil2100: will do, thanks for the input [17:36] -queuebot:#ubuntu-release- Unapproved: accepted kpatch [source] (bionic-proposed) [0.5.0-0ubuntu1.1] [17:39] -queuebot:#ubuntu-release- Unapproved: accepted dm-writeboost [source] (bionic-proposed) [2.2.8-1ubuntu3~18.04.2] [17:42] -queuebot:#ubuntu-release- Unapproved: accepted xtables-addons [source] (bionic-proposed) [3.0-0.1ubuntu2] [17:56] Odd_Bloke: hey, curious about the timeframe of ubuntu-daily:eoan/amd64? I'm told that CPC handles those and currently the docker.io autopkgtest is failing because 'lxc launch ubuntu-daily:eoan/amd64 docker' fails [17:56] Odd_Bloke: actually, not just amd64, whatever docker.io needs [17:57] jdstrand: I'm no longer on CPC, so that's rcj and fginther's problem. :) [17:58] Odd_Bloke: ah, thanks for the point and I hope you are enjoying your new role :) [17:59] :) [17:59] pointer* [18:01] Laney: so while I work through the question of whether there are network bottlenecks, what should we do about the timeouts themselves? Do you know if we ever use that timeout to catch dead runners, or is it just getting in our way by being too low? [18:02] and I still have trouble diagnosing these things with any accuracy, but it looks to me like hitting the timeout may be leaving the runners in some kind of limbo state besides [18:03] jdstrand, getting eoan images built and published is our current priority. I do expect these before end of week [18:03] fginther: ack, thanks! [18:20] -queuebot:#ubuntu-release- New binary: linux-signed-oem-osp1 [amd64] (bionic-proposed/universe) [5.0.0-1004.5] (no packageset) [18:48] -queuebot:#ubuntu-release- Unapproved: openvpn (xenial-proposed/main) [2.3.10-1ubuntu2.1 => 2.3.10-1ubuntu2.2] (ubuntu-server) [20:15] -queuebot:#ubuntu-release- Unapproved: metaphlan2 (bionic-proposed/universe) [2.7.5-1 => 2.7.5-1ubuntu1] (no packageset) [20:22] was there a temporary network issue with the autopkgtests? [20:36] teward: what specifically are you seeing? [20:36] vorlon: "Failed to resolve ftpmaster.internal" errors in some autopkgtests triggered by the nginx upload [20:36] the autopkgtest infra is currently definitely network constrained because of all the KDE packages' built trees that are being streamed back and forth and timing out [20:36] teward: for which archs? [20:37] armhf [20:37] and only 3 of the tests [20:37] yeah [20:37] that's an ongoing issue that we thought was fixed once then recurred [20:37] ack [20:39] vorlon: so i assume then the reruns I just queued for the same tests that regressed may or may not succeed on next-run [20:39] do we just hand-wave those or do we wait for the infra to settle?
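(For reference, build-needed is the DEP-8 restriction discussed at 16:21/16:24: a test that declares it needs the source tree built before it runs, which is what forces the built tree to be streamed between testbeds. A made-up debian/tests/control stanza illustrating the shape, not taken from any real KDE package:)

```
# Illustrative debian/tests/control stanza (made up, not from a real package)
Tests: upstream-testsuite
Depends: @, @builddeps@
Restrictions: build-needed, allow-stderr
```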
[22:09] vorlon: just as an FYI, looks like my rerunning those tests reran them with the original network issue fixed, and the tests passed as they usually did [22:09] so guess I don't have to worry then :) [22:16] (it's still running the other autopkgtests, but the regressing/failing ones there I wanted to at least get handled heh) [23:39] teward: it's ongoing and intermittent, yeah