=== s8321414_ is now known as s8321414
[05:12] -queuebot:#ubuntu-release- New binary: cura [amd64] (bionic-proposed/universe) [3.0.3-2] (no packageset)
[05:12] -queuebot:#ubuntu-release- New binary: mess-desktop-entries [amd64] (bionic-proposed/none) [0.2-3] (no packageset)
[05:12] -queuebot:#ubuntu-release- New binary: getmail [amd64] (bionic-proposed/none) [5.4-1] (no packageset)
[05:13] -queuebot:#ubuntu-release- New binary: writerperfect [amd64] (bionic-proposed/universe) [0.9.6-1] (no packageset)
[05:14] -queuebot:#ubuntu-release- New binary: writerperfect [ppc64el] (bionic-proposed/universe) [0.9.6-1] (no packageset)
[05:15] -queuebot:#ubuntu-release- New binary: writerperfect [s390x] (bionic-proposed/universe) [0.9.6-1] (no packageset)
[05:16] -queuebot:#ubuntu-release- New binary: writerperfect [i386] (bionic-proposed/universe) [0.9.6-1] (no packageset)
[05:19] -queuebot:#ubuntu-release- New binary: writerperfect [arm64] (bionic-proposed/universe) [0.9.6-1] (no packageset)
[05:20] -queuebot:#ubuntu-release- New binary: writerperfect [armhf] (bionic-proposed/universe) [0.9.6-1] (no packageset)
=== NCommander is now known as KD2JRT
[07:10] -queuebot:#ubuntu-release- Unapproved: cockpit (artful-backports/universe) [157-1~ubuntu17.10.1 => 158-1~ubuntu17.10.1] (no packageset)
[07:12] -queuebot:#ubuntu-release- Unapproved: cockpit (zesty-backports/universe) [157-1~ubuntu17.04.1 => 158-1~ubuntu17.04.1] (no packageset)
[07:16] -queuebot:#ubuntu-release- Unapproved: accepted cockpit [source] (artful-backports) [158-1~ubuntu17.10.1]
[07:17] -queuebot:#ubuntu-release- Unapproved: accepted cockpit [source] (zesty-backports) [158-1~ubuntu17.04.1]
[07:17] -queuebot:#ubuntu-release- Unapproved: cockpit (xenial-backports/universe) [157-1~ubuntu16.04.1 => 158-1~ubuntu16.04.1] (no packageset)
[07:37] -queuebot:#ubuntu-release- Unapproved: accepted cockpit [source] (xenial-backports) [158-1~ubuntu16.04.1]
=== KD2JRT is now known as NCommander
[09:13] -queuebot:#ubuntu-release- Unapproved: livecd-rootfs (xenial-proposed/main) [2.408.25 => 2.408.26] (desktop-core)
[10:25] -queuebot:#ubuntu-release- New: accepted writerperfect [amd64] (bionic-proposed) [0.9.6-1]
[10:25] -queuebot:#ubuntu-release- New: accepted writerperfect [armhf] (bionic-proposed) [0.9.6-1]
[10:25] -queuebot:#ubuntu-release- New: accepted writerperfect [ppc64el] (bionic-proposed) [0.9.6-1]
[10:25] -queuebot:#ubuntu-release- New: accepted writerperfect [arm64] (bionic-proposed) [0.9.6-1]
[10:25] -queuebot:#ubuntu-release- New: accepted writerperfect [s390x] (bionic-proposed) [0.9.6-1]
[10:25] -queuebot:#ubuntu-release- New: accepted writerperfect [i386] (bionic-proposed) [0.9.6-1]
[10:25] -queuebot:#ubuntu-release- New: accepted qupzilla [amd64] (bionic-proposed) [2.2.0~dfsg1-3]
[10:25] -queuebot:#ubuntu-release- New: accepted qupzilla [armhf] (bionic-proposed) [2.2.0~dfsg1-3]
[10:25] -queuebot:#ubuntu-release- New: accepted qupzilla [amd64] (bionic-proposed) [2.2.2~dfsg1-1]
[10:25] -queuebot:#ubuntu-release- New: accepted qupzilla [armhf] (bionic-proposed) [2.2.2~dfsg1-1]
[10:25] -queuebot:#ubuntu-release- New: accepted qupzilla [arm64] (bionic-proposed) [2.2.0~dfsg1-3]
[10:25] -queuebot:#ubuntu-release- New: accepted qupzilla [arm64] (bionic-proposed) [2.2.2~dfsg1-1]
[10:25] -queuebot:#ubuntu-release- New: accepted qupzilla [i386] (bionic-proposed) [2.2.0~dfsg1-3]
[10:26] -queuebot:#ubuntu-release- New: accepted qupzilla [i386] (bionic-proposed) [2.2.2~dfsg1-1]
[10:26] -queuebot:#ubuntu-release- New: accepted getmail [amd64] (bionic-proposed) [5.4-1]
[10:26] -queuebot:#ubuntu-release- New: accepted mess-desktop-entries [amd64] (bionic-proposed) [0.2-3]
[10:26] -queuebot:#ubuntu-release- New: accepted cura [amd64] (bionic-proposed) [3.0.3-2]
[10:27] -queuebot:#ubuntu-release- New: accepted haskell-store [amd64] (bionic-proposed) [0.4.3.2-1]
[10:27] -queuebot:#ubuntu-release- New: accepted haskell-store [ppc64el] (bionic-proposed) [0.4.3.2-1]
[10:55] looking to get sphinx migrated. There are two failing autopkg tests
[11:03] - svgpp/1.2.3+dfsg1-3/s390x unrelated to sphinx
[11:07] - monkeysign/2.2.3/armhf - unrelated to sphinx, error connecting to the keyserver?
[11:07] both tests only succeeded once, so please consider overriding or resetting these tests
[11:13] -queuebot:#ubuntu-release- New binary: breeze-gtk [s390x] (bionic-proposed/universe) [5.11.4-0ubuntu2] (kubuntu)
[11:13] -queuebot:#ubuntu-release- New binary: breeze-gtk [ppc64el] (bionic-proposed/universe) [5.11.4-0ubuntu2] (kubuntu)
[11:14] -queuebot:#ubuntu-release- New binary: breeze-gtk [i386] (bionic-proposed/universe) [5.11.4-0ubuntu2] (kubuntu)
[11:15] -queuebot:#ubuntu-release- New binary: breeze-gtk [armhf] (bionic-proposed/universe) [5.11.4-0ubuntu2] (kubuntu)
[11:16] doko, hi ..
[11:17] -queuebot:#ubuntu-release- New binary: breeze-gtk [amd64] (bionic-proposed/universe) [5.11.4-0ubuntu2] (kubuntu)
[11:18] doko, as those are the only two regressions on sphinx it might be appropriate to skiptest sphinx
[11:18] doko, so one appears to be a keyserver test which is failing, and the other is the runner running out of memory
[11:18] doko, so that seems fine to my eye
[11:20] doko, hinted
[11:23] -queuebot:#ubuntu-release- New binary: breeze-gtk [arm64] (bionic-proposed/universe) [5.11.4-0ubuntu2] (kubuntu)
[11:25] ta
[11:26] so the next ones would be python3-defaults and python-setuptools ... that looks uglier
[12:27] -queuebot:#ubuntu-release- New binary: python-octaviaclient [amd64] (bionic-proposed/universe) [1.1.0-1] (no packageset)
[12:37] -queuebot:#ubuntu-release- New: accepted breeze-gtk [amd64] (bionic-proposed) [5.11.4-0ubuntu2]
[12:37] -queuebot:#ubuntu-release- New: accepted breeze-gtk [armhf] (bionic-proposed) [5.11.4-0ubuntu2]
[12:37] -queuebot:#ubuntu-release- New: accepted breeze-gtk [ppc64el] (bionic-proposed) [5.11.4-0ubuntu2]
[12:37] -queuebot:#ubuntu-release- New: accepted python-octaviaclient [amd64] (bionic-proposed) [1.1.0-1]
[12:37] -queuebot:#ubuntu-release- New: accepted breeze-gtk [arm64] (bionic-proposed) [5.11.4-0ubuntu2]
[12:37] -queuebot:#ubuntu-release- New: accepted breeze-gtk [s390x] (bionic-proposed) [5.11.4-0ubuntu2]
[12:37] -queuebot:#ubuntu-release- New: accepted breeze-gtk [i386] (bionic-proposed) [5.11.4-0ubuntu2]
[14:09] Qt transition incoming, shouldn't be as big as usual
[14:09] (many fewer ABI bumps)
[14:09] Ping me with concerns, LocutusOfBorg just pressed the button ;P
[14:13] o/ sil2100, could you please release 'sysstat' in -updates for A/Z/X when you have a moment? Thanks in advance
[14:13] dgadomski, ^
[14:26] Qt transition just before Xmas!
[14:26] * acheronuk hides until holidays are over
[14:30] slashd: ACK, will be doing my SRU shift in a moment
[14:35] sil2100, sure, no rush, thanks a lot
[14:36] sil2100, it should be my last request for this year ;)
[14:59] somehow a test machine seems stuck...
[14:59] "autopkgtest [14:08:02]: rebooting testbed after setup commands that affected boot"
[14:59] I was re-kicking sumo, and it has been stuck for an hour; the other retry went fine in a couple of minutes
[15:13] LocutusOfBorg, it will timeout and be shot eventually
[15:19] I hope so, thanks! However I'm wondering why it happened :)
[15:27] LocutusOfBorg, it is a cloud, there are a lot of things that can go wrong when attempting a robot-driven reboot
[16:10] can someone on the SRU team please publish the gnome-software SRU? The last of the bugs have been marked as verified.
[16:16] I'm a bit behind with my SRU shift but I'll get to it in some moments
[17:04] sil2100, thanks!
[17:11] -queuebot:#ubuntu-release- New binary: prelude-lml-rules [amd64] (bionic-proposed/none) [3.1.0-2] (no packageset)
[17:11] -queuebot:#ubuntu-release- New binary: prelude-lml-rules [i386] (bionic-proposed/none) [3.1.0-2] (no packageset)
[17:11] -queuebot:#ubuntu-release- New binary: prelude-lml-rules [s390x] (bionic-proposed/none) [3.1.0-2] (no packageset)
[17:11] -queuebot:#ubuntu-release- New binary: prelude-lml-rules [arm64] (bionic-proposed/none) [3.1.0-2] (no packageset)
[17:11] -queuebot:#ubuntu-release- New binary: prelude-lml-rules [ppc64el] (bionic-proposed/none) [3.1.0-2] (no packageset)
[17:12] -queuebot:#ubuntu-release- New binary: prelude-lml-rules [armhf] (bionic-proposed/none) [3.1.0-2] (no packageset)
[17:37] -queuebot:#ubuntu-release- New: accepted prelude-lml-rules [amd64] (bionic-proposed) [3.1.0-2]
[17:37] -queuebot:#ubuntu-release- New: accepted prelude-lml-rules [armhf] (bionic-proposed) [3.1.0-2]
[17:37] -queuebot:#ubuntu-release- New: accepted prelude-lml-rules [ppc64el] (bionic-proposed) [3.1.0-2]
[17:37] -queuebot:#ubuntu-release- New: accepted prelude-lml-rules [arm64] (bionic-proposed) [3.1.0-2]
[17:37] -queuebot:#ubuntu-release- New: accepted prelude-lml-rules [s390x] (bionic-proposed) [3.1.0-2]
[17:37] -queuebot:#ubuntu-release- New: accepted prelude-lml-rules [i386] (bionic-proposed) [3.1.0-2]
[18:06] doko: regarding the python* SRUs for zesty - could you take a look at the autopkgtest failures associated with the uploads and check if any are related to the changes being released? Whenever an upload has this many ADT failures, SRU members will not do the uploader's work of checking if they're related or not
[18:08] sil2100: pointer?
[18:11] -queuebot:#ubuntu-release- Unapproved: accepted fwupdate-signed [source] (zesty-proposed) [1.13.1]
[18:12] doko: https://people.canonical.com/~ubuntu-archive/pending-sru.html#zesty and find the two python uploads
[18:12] There's a wall of autopkgtest regressions, the uploader needs to make sure those are not related and/or fix them
[18:14] sil2100: the -defaults upload can't cause this. please could you point me to the autopkg test failures for the -defaults upload for the release?
[18:14] we should compare results with this one
[18:15] sil2100: https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac/autopkgtest-zesty/zesty/armhf/a/automake-1.15/20171128_000327_2fe30@/log.gz
[18:15] this is a timeout. did you really check these?
[18:15] doko: as I said, I did *not* check *any* of these, it's the responsibility of the *uploader*
[18:16] in this case you should reject. I'm not responsible for checking for regressions caused by changes in the autopkg test infrastructure.
[18:17] doko: if there are too many failures we check none; we expect the uploader to follow the upload through to the end - if there are ADT regressions, the uploader needs to let us know (in the bug or anywhere) that these are not regressions or not issues caused by the upload
[18:17] sil2100: or tell me where this is documented
[18:17] We can't be expected to look through every failure made by every upload in every series
[18:17] sil2100: sure, but you can't expect the same from me, or can you?
[18:17] doko: then who should be looking into those?
[18:18] sil2100: are autopkg tests retried for stable releases?
[18:18] doko: when you upload to the development series, who checks if the ADT failures are regressions or not?
[18:18] sil2100: autopkg tests are given back automatically from time to time
[18:19] they are not
[18:20] doko: if they fail you can retry them, same as for devel, this doesn't happen automatically
[18:21] sil2100: is there a list of autopkg tests which were waived for xenial and zesty?
[18:22] sil2100: what should I do about https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac/autopkgtest-zesty/zesty/s390x/g/gfs2-utils/20171128_132030_67a2d@/log.gz ?
[18:22] Removing autopkgtest-satdep (0) ...
[18:22] Exit request sent.
[18:22] Creating nova instance adt-zesty-s390x-gfs2-utils-20171128-131747 from image auto-sync/ubuntu-zesty-17.04-s390x-server-20171031-disk1.img (UUID a9bfdca9-679f-43e3-8bd8-e6d36ba30522)...
[18:22] autopkgtest [13:20:29]: ERROR: erroneous package: Test dependencies are unsatisfiable. A common reason is that your testbed is out of date with respect to the archive, and you need to use a current testbed or run apt-get update or use -U.
[18:22] blame: gfs2-utils
[18:22] badpkg: Test dependencies are unsatisfiable. A common reason is that your testbed is out of date with respect to the archive, and you need to use a current testbed or run apt-get update or use -U.
[18:22] that's autopkg test infrastructure
[18:23] should I repeat that for every package and tell you that this is unrelated?
[18:23] or, dlm-controld doesn't exist for s390x
[18:23] well, then it's not a regression, is it?
[18:23] s390x tests were somewhat recently moved to VMs (from containers), so some tests that were not run before are run now
[18:24] I had a similar problem with my iproute2 upload
[18:24] sure, and the work is blamed on the uploader
[18:24] well, the uploader has to check
[18:24] once it has been determined where the problem is,
[18:24] and I can't check which ones were waived before
[18:25] it's not a case where the uploader introduced the problem
[18:25] but he is in a position to verify that
[18:25] this doesn't scale
[18:25] without giving any reference as to which uploads / tests were already waived
[18:27] http://bugs.debian.org/583767 do we still need the udeb for libxml2?
[18:27] Debian bug 583767 in libxml2 "Add udeb" [Wishlist,Fixed]
[18:28] doko: usually what I do with my uploads is check whether the given test was failing/passing before in the autopkgtest history, for instance: http://autopkgtest.ubuntu.com/packages/p/python-ruffus/zesty/s390x etc.
[18:29] doko: if I see it failed for the first time, I would investigate to see what's up - either try a re-run or try running the test without the new package as a trigger, to see if it's broken by current release packages or by the new upload
[18:29] sil2100: this is a timeout
[18:30] sil2100: please could you restore the infrastructure to the state when the test was successfully run, and then retry?
[18:30] no, you probably cannot
[18:30] doko: we can re-run it without a new package trigger, which will test it against what's in the -updates pocket
[18:30] but please show me one failing test which is not a direct autopkg test issue or a timeout error
[18:31] doko: but anyway, if a test is obviously a timeout then we need to be informed about that, as it's the uploader's responsibility to check the autopkgtest results
[18:31] sil2100: what package should get bug reports for the autopkg test infrastructure?
[18:35] doko: you can file a bug against autopkgtests, but what the SRU team needs is: if some tests are unrelated infra issues, write down in the SRU bug which failures have been assessed as infra-related or not
[18:35] It's of course best if the tests are simply re-run and pass, to make sure there are no regressions
[18:35] sil2100: unless you tell me one or two packages triggered by python-defaults which are not timeouts or dependency issues, I will not do that
[18:36] doko: ok, then I will not release your package, as it's your responsibility to look after the autopkgtests triggered by your upload
[18:36] Maybe rbasak or slangasek have a different take on this
[18:37] slangasek, Laney, infinity, rbasak: I don't think that the uploader should be responsible for figuring out these issues, these are infrastructure issues. please feel free to raise these with the appropriate teams
[18:38] sil2100: I'll look at the python2.7-caused autopkg test failures, but please could you give me a list of waived autopkg test failures, so I don't have to investigate duplicates?
[18:38] doko: but seeing that the python packages have been verified and in -proposed for 20 days and not released by anyone yet, I think others might have skipped over the package as well because of this
[18:39] Ok, I guess I can help out in scanning those
[18:40] sil2100: again, why should I be responsible for infrastructure issues? so the first problem is that a) nobody is aware of these, b) nobody reports bugs on these, c) the SRU process is blocked on these
[18:40] doko: in devel, are you also not responsible for such things? I mean, if in bionic a test fails due to infra, someone has to re-run it, right?
[18:41] doko: and I think then it's the uploader's responsibility to either hint it in or re-run it
[18:41] doko: yes, other core-devs do that for people, but we send e-mails for packages stuck in -proposed to the uploaders for a reason
[18:42] sil2100: does hinting really help for infrastructure issues? no
[18:42] doko: but tell me
[18:42] doko: what do you do when your package is stuck in bionic?
[18:43] doko: or differently, if you upload to bionic and it gets stuck on infra, what happens?
[18:43] doko: who re-runs it? Who hints it in? Who talks to the infrastructure people?
[18:43] doko: it's usually the uploader
[18:43] It should be the uploader
[18:43] which is the wrong person
[18:43] Then who does it?
[18:44] *it* *does* *not* *scale*
[18:44] If not the uploader, who is supposed to scan through your upload's failed autopkgtests, assess if they're infra-related, and re-run them?
[18:44] But tell me who does it for bionic?
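(For concreteness, the "re-run with and without the new package as a trigger" check described above can be approximated locally with the autopkgtest tool. This is a rough sketch under stated assumptions: the image name, the amd64 architecture and the gfs2-utils/python-defaults pairing are purely illustrative, and the runs discussed in this log actually happened on the Canonical cloud workers.)

    # Baseline: run the reverse dependency's tests against what is already
    # released in zesty, without the candidate package from -proposed.
    autopkgtest -U gfs2-utils -- qemu adt-zesty-amd64-cloud.img

    # Candidate: the same tests with zesty-proposed enabled, pinned so that
    # only the trigger source is pulled in, approximating "triggered by
    # python-defaults from zesty-proposed".
    autopkgtest -U --apt-pocket=proposed=src:python-defaults gfs2-utils \
        -- qemu adt-zesty-amd64-cloud.img

(If the baseline run fails the same way, the failure is not a regression introduced by the SRU and can be noted as such in the SRU bug or hinted; if only the candidate run fails, it needs investigation.)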
[18:44] having infrastructure people go through all failure logs and work out which ones are their responsibility scales even less well, in general
[18:44] sil2100: please start triggering autopkg tests on debhelper, and fix these issues, then I will fix mine
[18:45] it probably also doesn't help when problems are misattributed to infrastructure, as above ...
[18:46] maybe it doesn't help, but I'm tired of cleaning up things that have accumulated over a long time and then all get triggered at once
[18:50] cjwatson: am I wrong in calling the gfs2-utils one an infrastructure issue?
[18:59] doko: I haven't looked particularly extensively, but unsatisfiable dependencies aren't normally an infrastructure bug; that claim would need special evidence to support it, I think
[19:00] (the fact that it's tagged as a regression when the individual test in question was AIUI not previously run might be an infrastructure bug; it sounds like reverting the infrastructure would be unequivocally the wrong response, though ...)
[19:03] cjwatson: sure, that doesn't make sense. but confronting me with a list of failures which simply can't be triggered by that change doesn't make sense either
[19:03] I've now looked at the ones triggered by python2.7; half of them are triggered by "not-python2.7" issues, looking at the other ones
[19:04] I'm not sure it's possible for infrastructure to reasonably tell the difference between "regression induced by infrastructure change that caused new tests to become runnable" and "new test is genuinely busted and may indicate a package bug"
[19:04] other than by punting it for human investigation
[19:05] ok, so who is supposed to do that? again, I don't think it should be the uploader of a package triggering some hundred autopkg tests
[19:06] sil2100: how do you treat "acceptable" autopkg test failures, and how do you mark them?
[19:06] of the obvious options, the uploader is surely the option that scales best, unless we can work out how to farm that out to the maintainers of the packages whose tests are failing (bearing in mind that those will often be packages unchanged in Ubuntu) ...
[19:06] my only point is really that having infra maintainers do it scales even worse
[19:07] sure, but then I think we should just do a no-change debhelper upload to the release pockets, and see what autopkg test failures are triggered
[19:07] and use that as a reference for what is expected to fail
[19:08] who takes care of infrastructure bugs when a package is auto-synced?
[19:08] it usually winds up being whoever gets blocked by the failures, or people chipping off bits and pieces of the problem as they have time ...
[19:09] doko: known badtests are marked via britney hints, either lp:~ubuntu-release/britney/hints-ubuntu/ for devel or lp:~ubuntu-sru/britney/hints-ubuntu-$release for SRUs.
[19:09] see r-cran-future and r-cran-openssl - tests pass locally and on Debian infra
[19:09] and obviously if they're actually infrastructure bugs then hopefully somebody consolidates the pile of logs into a form that can be reported to infra maintainers
[19:10] bear in mind that "passes on Debian and fails on Ubuntu" is not necessarily quite enough to prove an infra bug, though
[19:10] it's doubtless a bug somewhere, but could e.g. depend on the virtualisation layer, or be a race, or be unwarranted sensitivity to details of the CPU, or ...
[19:10] cjwatson: fair enough, but where to go from here?
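(For reference, the britney hints mentioned at 19:09 are plain-text directives kept in those branches. A hypothetical sketch of what waiving the two sphinx-blocking failures from earlier in the day could look like follows; these are illustrative lines, not the hints actually committed, and the sphinx version is a placeholder.)

    # in lp:~ubuntu-release/britney/hints-ubuntu (devel) or
    # lp:~ubuntu-sru/britney/hints-ubuntu-zesty (SRUs) -- illustrative only
    force-badtest svgpp/1.2.3+dfsg1-3/s390x
    force-badtest monkeysign/2.2.3/armhf
    # or skip autopkgtest gating for the migrating source entirely:
    force-skiptest sphinx/<version>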
[19:11] ginggs: r-cran-openssl, clearly trying to
[19:11] ginggs: make an outbound internet connection
[19:11] I don't have specific advice; when I've had this sort of problem I've usually either worked harder to reproduce the exact way the Ubuntu test infrastructure is set up locally, or added extra debugging to the package
[19:11] sure, but the recent uninstallation issues in bionic were confirmed to exist, and no working solution was found until the affected packages migrated to the release pocket (using manual give-backs for all affected tests)
[19:11] slangasek: I thought that was allowed in autopkgtests (but not during build)
[19:12] ginggs: via proxy only, which these tests are clearly not picking up correctly from the environment
[19:13] slangasek: this is a typical issue I see: https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac/autopkgtest-zesty/zesty/s390x/a/alembic/20171204_202448_988c2@/log.gz
[19:13] how to address it?
[19:14] another one: https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac/autopkgtest-zesty/zesty/s390x/p/python-cinderclient/20171204_204039_d09ea@/log.gz
[19:15] doko: you say that's typical? that one very much looks like an infrastructure bug that should just be re-tried (I'm clicking the retry button now)
[19:16] s/re-tried/chased down/? doesn't look like a natural race symptom
[19:16] the first one was a testbed failure of some description
[19:16] well, the second one also
[19:16] right, but it's pretty scary non-determinism?
[19:17] perhaps that's related to having failed to generate autopkgtest images that have these packages pre-removed
[19:20] when we were having problems last week due to all the images being gone from bos02, Laney mentioned that autopkgtests pick the most recent image by date regardless of whether it's a base cloud image or an adt image, which seems confusing and wrong to me
[19:25] slangasek: I was looking at https://people.canonical.com/~ubuntu-archive/pending-sru.html#zesty for the ones triggered by python-defaults
[19:26] sil2100: these 20 days are not accurate. you need to count the time starting with the last autopkg test run
[19:28] doko: I think http://people.canonical.com/~ubuntu-archive/proposed-migration/zesty/update_excuses.html#python-defaults provides a better view on this
[19:30] slangasek: can we automatically retry autopkg tests with an "unknown" version number?
[19:33] doko: seems like something we might want to report + batch; I wouldn't want to do it entirely without oversight, because sometimes those failures represent some problem that is knocking down the test runners, so auto-retrying is going to starve the rest of the queue
[19:34] at the moment, the problem I'm facing with report+batch is that the cron emails are successfully delivered to Laney but don't make it to me :P
[19:34] ouch
[19:38] the bzr failures are quite odd, and need confirming whether this has mysteriously regressed in -updates or whether it's actually triggered somehow by the python-defaults SRU
[19:39] (triggered)
=== kmously is now known as kmously-afk
=== ogasawara is now known as Guest76312
[21:03] It is wrong, that's why I filed a bug for it - and outlined how I thought it could be solved in the bug.
[21:03] * Laney isn't here lalal
[21:03] :)
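(A footnote to the r-cran-openssl exchange at 19:11-19:12: the test runners only allow outbound network access via a proxy, so a DEP-8 test that needs the network has to honour whatever proxy the environment provides instead of connecting directly. The sketch below is a minimal, hypothetical test script; the script name, target URL and explicit variable handling are illustrative assumptions, and the real proxy settings come from the infrastructure and are not shown in this log. curl would also pick these variables up on its own; the explicit handling just makes the dependency on the environment visible.)

    #!/bin/sh
    # debian/tests/network-smoke -- illustrative only
    set -e

    # Honour a runner-provided proxy if one is exported; fall back to a
    # direct connection otherwise (e.g. when run locally or on Debian CI).
    proxy="${https_proxy:-${http_proxy:-}}"

    if [ -n "$proxy" ]; then
        curl --silent --show-error --proxy "$proxy" https://example.org/ >/dev/null
    else
        curl --silent --show-error https://example.org/ >/dev/null
    fi
    echo "outbound connectivity check passed"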