[05:12] -queuebot:#ubuntu-release- New binary: cura [amd64] (bionic-proposed/universe) [3.0.3-2] (no packageset)
[05:12] -queuebot:#ubuntu-release- New binary: mess-desktop-entries [amd64] (bionic-proposed/none) [0.2-3] (no packageset)
[05:12] -queuebot:#ubuntu-release- New binary: getmail [amd64] (bionic-proposed/none) [5.4-1] (no packageset)
[05:13] -queuebot:#ubuntu-release- New binary: writerperfect [amd64] (bionic-proposed/universe) [0.9.6-1] (no packageset)
[05:14] -queuebot:#ubuntu-release- New binary: writerperfect [ppc64el] (bionic-proposed/universe) [0.9.6-1] (no packageset)
[05:15] -queuebot:#ubuntu-release- New binary: writerperfect [s390x] (bionic-proposed/universe) [0.9.6-1] (no packageset)
[05:16] -queuebot:#ubuntu-release- New binary: writerperfect [i386] (bionic-proposed/universe) [0.9.6-1] (no packageset)
[05:19] -queuebot:#ubuntu-release- New binary: writerperfect [arm64] (bionic-proposed/universe) [0.9.6-1] (no packageset)
[05:20] -queuebot:#ubuntu-release- New binary: writerperfect [armhf] (bionic-proposed/universe) [0.9.6-1] (no packageset)
[07:10] -queuebot:#ubuntu-release- Unapproved: cockpit (artful-backports/universe) [157-1~ubuntu17.10.1 => 158-1~ubuntu17.10.1] (no packageset)
[07:12] -queuebot:#ubuntu-release- Unapproved: cockpit (zesty-backports/universe) [157-1~ubuntu17.04.1 => 158-1~ubuntu17.04.1] (no packageset)
[07:16] -queuebot:#ubuntu-release- Unapproved: accepted cockpit [source] (artful-backports) [158-1~ubuntu17.10.1]
[07:17] -queuebot:#ubuntu-release- Unapproved: accepted cockpit [source] (zesty-backports) [158-1~ubuntu17.04.1]
[07:17] -queuebot:#ubuntu-release- Unapproved: cockpit (xenial-backports/universe) [157-1~ubuntu16.04.1 => 158-1~ubuntu16.04.1] (no packageset)
[07:37] -queuebot:#ubuntu-release- Unapproved: accepted cockpit [source] (xenial-backports) [158-1~ubuntu16.04.1]
[09:13] -queuebot:#ubuntu-release- Unapproved: livecd-rootfs (xenial-proposed/main) [2.408.25 => 2.408.26] (desktop-core)
[10:25] -queuebot:#ubuntu-release- New: accepted writerperfect [amd64] (bionic-proposed) [0.9.6-1]
[10:25] -queuebot:#ubuntu-release- New: accepted writerperfect [armhf] (bionic-proposed) [0.9.6-1]
[10:25] -queuebot:#ubuntu-release- New: accepted writerperfect [ppc64el] (bionic-proposed) [0.9.6-1]
[10:25] -queuebot:#ubuntu-release- New: accepted writerperfect [arm64] (bionic-proposed) [0.9.6-1]
[10:25] -queuebot:#ubuntu-release- New: accepted writerperfect [s390x] (bionic-proposed) [0.9.6-1]
[10:25] -queuebot:#ubuntu-release- New: accepted writerperfect [i386] (bionic-proposed) [0.9.6-1]
[10:25] -queuebot:#ubuntu-release- New: accepted qupzilla [amd64] (bionic-proposed) [2.2.0~dfsg1-3]
[10:25] -queuebot:#ubuntu-release- New: accepted qupzilla [armhf] (bionic-proposed) [2.2.0~dfsg1-3]
[10:25] -queuebot:#ubuntu-release- New: accepted qupzilla [amd64] (bionic-proposed) [2.2.2~dfsg1-1]
[10:25] -queuebot:#ubuntu-release- New: accepted qupzilla [armhf] (bionic-proposed) [2.2.2~dfsg1-1]
[10:25] -queuebot:#ubuntu-release- New: accepted qupzilla [arm64] (bionic-proposed) [2.2.0~dfsg1-3]
[10:25] -queuebot:#ubuntu-release- New: accepted qupzilla [arm64] (bionic-proposed) [2.2.2~dfsg1-1]
[10:25] -queuebot:#ubuntu-release- New: accepted qupzilla [i386] (bionic-proposed) [2.2.0~dfsg1-3]
[10:26] -queuebot:#ubuntu-release- New: accepted qupzilla [i386] (bionic-proposed) [2.2.2~dfsg1-1]
[10:26] -queuebot:#ubuntu-release- New: accepted getmail [amd64] (bionic-proposed) [5.4-1]
[10:26] -queuebot:#ubuntu-release- New: accepted mess-desktop-entries [amd64] (bionic-proposed) [0.2-3]
[10:26] -queuebot:#ubuntu-release- New: accepted cura [amd64] (bionic-proposed) [3.0.3-2]
[10:27] -queuebot:#ubuntu-release- New: accepted haskell-store [amd64] (bionic-proposed) [0.4.3.2-1]
[10:27] -queuebot:#ubuntu-release- New: accepted haskell-store [ppc64el] (bionic-proposed) [0.4.3.2-1]
[10:55] <doko> looking to get sphinx migrated. There are two failing autopkg tests
[11:03] <doko>  - svgpp/1.2.3+dfsg1-3/s390x unrelated to sphinx
[11:07] <doko>  -  monkeysign/2.2.3/armhf  - unrelated to sphinx, error connecting to the key server?
[11:07] <doko> both tests only succeeded once, so please consider overriding or resetting these tests
[11:13] -queuebot:#ubuntu-release- New binary: breeze-gtk [s390x] (bionic-proposed/universe) [5.11.4-0ubuntu2] (kubuntu)
[11:13] -queuebot:#ubuntu-release- New binary: breeze-gtk [ppc64el] (bionic-proposed/universe) [5.11.4-0ubuntu2] (kubuntu)
[11:14] -queuebot:#ubuntu-release- New binary: breeze-gtk [i386] (bionic-proposed/universe) [5.11.4-0ubuntu2] (kubuntu)
[11:15] -queuebot:#ubuntu-release- New binary: breeze-gtk [armhf] (bionic-proposed/universe) [5.11.4-0ubuntu2] (kubuntu)
[11:16] <apw> doko, hi ..
[11:17] -queuebot:#ubuntu-release- New binary: breeze-gtk [amd64] (bionic-proposed/universe) [5.11.4-0ubuntu2] (kubuntu)
[11:18] <apw> doko, as those are the only two regressions on sphinx it might be appropriate to skiptest sphinx
[11:18] <apw> doko, so one appears to be a keyserver test which is failing, and the other the runner running out of memory
[11:18] <apw> doko, so that seems fine to my eye
[11:20] <apw> doko, hinted
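For reference, the "hinted" step above is a one-line entry in the release team's britney hints branch. A hypothetical skiptest entry (the version is invented for illustration) would look roughly like:

```
# lp:~ubuntu-release/britney/hints-ubuntu (entry in a team member's hints file)
force-skiptest sphinx/1.6.5-1
```

force-skiptest lets the package migrate without waiting for its triggered autopkgtests to pass.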
[11:23] -queuebot:#ubuntu-release- New binary: breeze-gtk [arm64] (bionic-proposed/universe) [5.11.4-0ubuntu2] (kubuntu)
[11:25] <doko> ta
[11:26] <doko> so the next ones would be python3-defaults and python-setuptools ... that looks more ugly
[12:27] -queuebot:#ubuntu-release- New binary: python-octaviaclient [amd64] (bionic-proposed/universe) [1.1.0-1] (no packageset)
[12:37] -queuebot:#ubuntu-release- New: accepted breeze-gtk [amd64] (bionic-proposed) [5.11.4-0ubuntu2]
[12:37] -queuebot:#ubuntu-release- New: accepted breeze-gtk [armhf] (bionic-proposed) [5.11.4-0ubuntu2]
[12:37] -queuebot:#ubuntu-release- New: accepted breeze-gtk [ppc64el] (bionic-proposed) [5.11.4-0ubuntu2]
[12:37] -queuebot:#ubuntu-release- New: accepted python-octaviaclient [amd64] (bionic-proposed) [1.1.0-1]
[12:37] -queuebot:#ubuntu-release- New: accepted breeze-gtk [arm64] (bionic-proposed) [5.11.4-0ubuntu2]
[12:37] -queuebot:#ubuntu-release- New: accepted breeze-gtk [s390x] (bionic-proposed) [5.11.4-0ubuntu2]
[12:37] -queuebot:#ubuntu-release- New: accepted breeze-gtk [i386] (bionic-proposed) [5.11.4-0ubuntu2]
[14:09] <tsimonq2> Qt transition incoming, shouldn't be as big as usual
[14:09] <tsimonq2> (far fewer ABI bumps)
[14:09] <tsimonq2> Ping me with concerns, LocutusOfBorg just pressed the button ;P
[14:13] <slashd> o/ sil2100, could you please release 'sysstat' in -updates for A/Z/X when you have a moment ? Thanks in advance
[14:13] <slashd> dgadomski, ^
[14:26] <acheronuk> Qt transition just before Xmas!
[14:26]  * acheronuk hides until holidays are over
[14:30] <sil2100> slashd: ACK, will be doing my SRU shift in a moment
[14:35] <slashd> sil2100, sure no rush thanks a lot
[14:36] <slashd> sil2100, it should be my last request for this year ;)
[14:59] <LocutusOfBorg> somehow a test machine seems to be stuck...
[14:59] <LocutusOfBorg> "autopkgtest [14:08:02]: rebooting testbed after setup commands that affected boot"
[14:59] <LocutusOfBorg> I was rekicking sumo, and it has been stuck for an hour; the other retry went fine in a couple of minutes
[15:13] <apw> LocutusOfBorg, it will timeout and be shot eventually
[15:19] <LocutusOfBorg> I hope, thanks! however I'm wondering why it happened :)
[15:27] <apw> LocutusOfBorg, it is a cloud, there are a lot of things that can go wrong when attempting a robot-driven reboot
[16:10] <kenvandine> can someone on the sru team please publish the gnome-software SRU?  The last of the bugs have been marked as verified.
[16:16] <sil2100> I'm a bit behind with my SRU shift but I'll get to it in some moments
[17:04] <kenvandine> sil2100, thanks!
[17:11] -queuebot:#ubuntu-release- New binary: prelude-lml-rules [amd64] (bionic-proposed/none) [3.1.0-2] (no packageset)
[17:11] -queuebot:#ubuntu-release- New binary: prelude-lml-rules [i386] (bionic-proposed/none) [3.1.0-2] (no packageset)
[17:11] -queuebot:#ubuntu-release- New binary: prelude-lml-rules [s390x] (bionic-proposed/none) [3.1.0-2] (no packageset)
[17:11] -queuebot:#ubuntu-release- New binary: prelude-lml-rules [arm64] (bionic-proposed/none) [3.1.0-2] (no packageset)
[17:11] -queuebot:#ubuntu-release- New binary: prelude-lml-rules [ppc64el] (bionic-proposed/none) [3.1.0-2] (no packageset)
[17:12] -queuebot:#ubuntu-release- New binary: prelude-lml-rules [armhf] (bionic-proposed/none) [3.1.0-2] (no packageset)
[17:37] -queuebot:#ubuntu-release- New: accepted prelude-lml-rules [amd64] (bionic-proposed) [3.1.0-2]
[17:37] -queuebot:#ubuntu-release- New: accepted prelude-lml-rules [armhf] (bionic-proposed) [3.1.0-2]
[17:37] -queuebot:#ubuntu-release- New: accepted prelude-lml-rules [ppc64el] (bionic-proposed) [3.1.0-2]
[17:37] -queuebot:#ubuntu-release- New: accepted prelude-lml-rules [arm64] (bionic-proposed) [3.1.0-2]
[17:37] -queuebot:#ubuntu-release- New: accepted prelude-lml-rules [s390x] (bionic-proposed) [3.1.0-2]
[17:37] -queuebot:#ubuntu-release- New: accepted prelude-lml-rules [i386] (bionic-proposed) [3.1.0-2]
[18:06] <sil2100> doko: regarding the python* SRUs for zesty - could you take a look at the autopkgtest failures associated with the uploads and check if any are related to the changes being released? Whenever an upload has this many ADT failures, SRU members will not do the uploader's work of checking whether they're related or not
[18:08] <doko> sil2100: pointer?
[18:11] -queuebot:#ubuntu-release- Unapproved: accepted fwupdate-signed [source] (zesty-proposed) [1.13.1]
[18:12] <sil2100> doko: https://people.canonical.com/~ubuntu-archive/pending-sru.html#zesty and find the two python uploads
[18:12] <sil2100> There's a wall of autopkgtest regressions, the uploader needs to make sure those are not related and/or fix them
[18:14] <doko> sil2100: the -defaults upload can't cause this. please could you point me to the autopkg test failures for the -defaults upload for the release?
[18:14] <doko> we should compare results with this one
[18:15] <doko> sil2100: https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac/autopkgtest-zesty/zesty/armhf/a/automake-1.15/20171128_000327_2fe30@/log.gz
[18:15] <doko> this is a timeout. did you really check these?
[18:15] <sil2100> doko: as I said, I did *not* check *any* of these, it's the responsibility of the *uploader*
[18:16] <doko> in this case you should reject. I'm not responsible for checking for regressions caused by changes in the autopkg test infrastructure.
[18:17] <sil2100> doko: if there are too many failures we check none; we expect the uploader to follow the upload through to the end - if there are ADT regressions, the uploader needs to let us know (in the bug or anywhere) that these are not regressions or are not issues caused by the upload
[18:17] <doko> sil2100: or tell me where this is documented
[18:17] <sil2100> We can't be expected to look through every failure made by every upload in every series
[18:17] <doko> sil2100: sure, but you can't expect the same from me, or can you?
[18:17] <sil2100> doko: who should be looking into those then?
[18:18] <doko> sil2100: are autopkg tests retried for stable releases?
[18:18] <sil2100> doko: when you upload to the development series, who checks if the ADT failures are regressions or not?
[18:18] <doko> sil2100: autopkg tests are given back automatically from time to time
[18:19] <Laney> they are not
[18:20] <sil2100> doko: if they fail you can retry them, same as for devel, this doesn't happen automatically
[18:21] <doko> sil2100: is there a list of autopkg tests which were waived for xenial and zesty?
[18:22] <doko> sil2100: what should I do about https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac/autopkgtest-zesty/zesty/s390x/g/gfs2-utils/20171128_132030_67a2d@/log.gz ?
[18:22] <doko> Removing autopkgtest-satdep (0) ...
[18:22] <doko> Exit request sent.
[18:22] <doko> Creating nova instance adt-zesty-s390x-gfs2-utils-20171128-131747 from image auto-sync/ubuntu-zesty-17.04-s390x-server-20171031-disk1.img (UUID a9bfdca9-679f-43e3-8bd8-e6d36ba30522)...
[18:22] <doko> autopkgtest [13:20:29]: ERROR: erroneous package: Test dependencies are unsatisfiable. A common reason is that your testbed is out of date with respect to the archive, and you need to use a current testbed or run apt-get update or use -U.
[18:22] <doko> blame: gfs2-utils
[18:22] <doko> badpkg: Test dependencies are unsatisfiable. A common reason is that your testbed is out of date with respect to the archive, and you need to use a current testbed or run apt-get update or use -U.
[18:22] <doko> that's autopkg test infrastructure
[18:23] <doko> should I repeat that for every package and tell you that this is unrelated?
[18:23] <ahasenack> or, dlm-controld doesn't exist for s390x
[18:23] <doko> well, then it's not a regression, is it?
[18:23] <ahasenack> s390x tests were somewhat recently moved to VMs (from containers), so some tests that were not run before are run now
[18:24] <ahasenack> I had a similar problem with my iproute2 upload
[18:24] <doko> sure, and the work is blamed on the uploader
[18:24] <ahasenack> well, the uploader has to check
[18:24] <ahasenack> once determined where the problem is,
[18:24] <doko> and I can't check which ones were waived before
[18:25] <ahasenack> it's not a case where the uploader introduced the problem
[18:25] <ahasenack> but he is in a position to verify that
[18:25] <doko> this doesn't scale
[18:25] <doko> without giving any reference and which uploads / tests were already waived
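The triage doko is objecting to can at least be semi-automated: autopkgtest logs carry fairly stable markers for testbed and setup problems, as opposed to genuine test failures. A minimal, hypothetical classifier sketch (the marker list is an assumption, not an exhaustive or authoritative set):

```python
# Rough triage of autopkgtest logs: flag lines that usually indicate
# testbed/infrastructure trouble rather than a real package regression.
# The marker strings below are assumptions drawn from logs quoted in
# this discussion, not an official list.
INFRA_MARKERS = (
    "Test dependencies are unsatisfiable",   # "badpkg", as in the gfs2-utils log
    "ERROR: testbed failure",
    "Exit request sent.",
)

def looks_like_infra_failure(log_text: str) -> bool:
    """Return True if any known infrastructure marker appears in the log."""
    return any(marker in log_text for marker in INFRA_MARKERS)
```

Anything flagged this way would still need a human look; unsatisfiable test dependencies, for instance, are not always an infrastructure problem.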
[18:27] <LocutusOfBorg> http://bugs.debian.org/583767 do we still need the udeb for libxml2?
[18:27] <ubot5`> Debian bug 583767 in libxml2 "Add udeb" [Wishlist,Fixed]
[18:28] <sil2100> doko: usually what I do with my uploads I check if the given test was failing/passing before in the autopkgtest history, for instance: http://autopkgtest.ubuntu.com/packages/p/python-ruffus/zesty/s390x etc.
[18:29] <sil2100> doko: if I see it failed for the first time, I would investigate to see what's up - either try a re-run or try running the test without the new package as a trigger, to see if it's broken by current release packages or by the new upload
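Re-runs of this kind go through the autopkgtest.ubuntu.com request endpoint. A hypothetical helper to build such URLs (the endpoint path and parameter names are assumptions modelled on the public retry links; the trigger version below is invented):

```python
from urllib.parse import urlencode

# Assumed request endpoint, based on the retry links on autopkgtest.ubuntu.com.
BASE = "https://autopkgtest.ubuntu.com/request.cgi"

def retry_url(release, arch, package, trigger=None):
    """Build a test-retry URL; omitting the trigger re-runs the test
    against the release + updates pockets only (sil2100's suggestion)."""
    params = [("release", release), ("arch", arch), ("package", package)]
    if trigger:
        params.append(("trigger", trigger))
    return BASE + "?" + urlencode(params)
```

For example, `retry_url("zesty", "s390x", "python-ruffus", "python-defaults/2.7.14-4")` re-runs the test with the proposed python-defaults as trigger, while dropping the last argument tests only what is already released.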
[18:29] <doko> sil2100: this is a timeout
[18:30] <doko> sil2100: please could you restore the infrastructure to the state when the test was successfully run, and then retry?
[18:30] <doko> no, you probably cannot
[18:30] <sil2100> doko: we can re-run it without a new package trigger which will test it against what's in the -updates pocket
[18:30] <doko> but please show me one failing test which is not a direct autopkg test issue or a timeout error
[18:31] <sil2100> doko: but anyway, if a test is obviously a timeout then we need to be informed about that, as it's the uploader's responsibility to check the autopkgtest results
[18:31] <doko> sil2100: what package should get bug reports for the autopkg test infrastructure?
[18:35] <sil2100> doko: you can file a bug against autopkgtests but what the SRU team needs is: if some tests are unrelated infra issues, write down in the SRU bug which failures have been assessed to being infra-related or not
[18:35] <sil2100> It's of course best if the tests are simply re-run and passing to make sure there are no regressions
[18:35] <doko> sil2100: unless you tell me one or two packages triggered by python-defaults which are not timeouts or dependency issues I will not do that
[18:36] <sil2100> doko: ok, then I will not release your package as it's your responsibility to look after the autopkgtests triggered by your upload
[18:36] <sil2100> Maybe rbasak or slangasek have a different take on this
[18:37] <doko> slangasek, Laney, infinity, rbasak: I don't think that the uploader should be responsible to figure out these issues, these are infrastructure issues. please feel free to raise these with the appropriate teams
[18:38] <doko> sil2100: I'll look at the python2.7 caused autopkg test failures, but please could you give me a list of waived autopkg test failures, so I don't have to investigate duplicates?
[18:38] <sil2100> doko: but seeing that the python packages are verified and have been in -proposed for 20 days without being released by anyone, I think others might have skipped over the package as well because of this
[18:39] <sil2100> Ok, I guess I can help out in scanning those
[18:40] <doko> sil2100: again, why should I be responsible for infrastructure issues? so the first problem is that a) nobody is aware of these, b) nobody reports bugs on these c) the SRU process is blocked on these
[18:40] <sil2100> doko: in devel, are you also not responsible for such things? I mean, if in bionic a test fails due to infra, someone has to re-run it right?
[18:41] <sil2100> doko: and I think then it's the uploader's responsibility to either hint it in or re-run it
[18:41] <sil2100> doko: yes, other core-devs do that for people, but we send e-mails for packages stuck in -proposed to the uploaders for a reason
[18:42] <doko> sil2100: does hinting really help for infrastructure issues? no
[18:42] <sil2100> doko: but tell me
[18:42] <sil2100> doko: what are you doing when your package is stuck in bionic
[18:43] <sil2100> doko: or differently, if you upload to bionic and it gets stuck on infra, what happens?
[18:43] <sil2100> doko: who re-runs it? Who hints it in? Who talks to the infrastructure people?
[18:43] <sil2100> doko: it's usually the uploader
[18:43] <sil2100> It should be the uploader
[18:43] <doko> which is the wrong person
[18:43] <sil2100> Then who does it?
[18:44] <doko> *it* *does* *not* *scale*
[18:44] <sil2100> If not the uploader, who is supposed to scan through your upload's failed autopkgtests, assess if they're infra-related and re-run?
[18:44] <sil2100> But tell me who does it for bionic?
[18:44] <cjwatson> having infrastructure people go through all failure logs and work out which ones are their responsibility scales even less well, in general
[18:44] <doko> sil2100: please start triggering autopkg tests on debhelper, and fix these issues, then I will fix mine
[18:45] <cjwatson> it probably also doesn't help when problems are misattributed to infrastructure, as above ...
[18:46] <doko> maybe it doesn't help, but I'm tired of cleaning up things that accumulated over a long time and then surfaced all at once
[18:50] <doko> cjwatson: am I wrong calling the gfs2-utils one an infrastructure issue?
[18:59] <cjwatson> doko: I haven't looked particularly extensively, but unsatisfiable dependencies aren't normally an infrastructure bug; that claim would need special evidence to support it, I think
[19:00] <cjwatson> (the fact that it's tagged as a regression when the individual test in question was AIUI not previously run might be an infrastructure bug; it sounds like reverting the infrastructure would be unequivocally the wrong response, though ...)
[19:03] <doko> cjwatson: sure, that doesn't make sense. but confronting me with a list of failures which simply can't be triggered by that change doesn't make sense either
[19:03] <doko> I looked now at the ones triggered by python2.7. half of them are triggered by "not-python2.7" issues; looking at the other ones
[19:04] <cjwatson> I'm not sure it's possible for infrastructure to reasonably tell the difference between "regression induced by infrastructure change that caused new tests to become runnable" and "new test is genuinely busted and may indicate a package bug"
[19:04] <cjwatson> other than by punting it for human investigation
[19:05] <doko> ok, so who is supposed to do that? again, I don't think it should be the uploader of a package triggering some hundred autopkg tests
[19:06] <doko> sil2100: how do you treat "acceptable" autopkg test failures, and how do you mark them?
[19:06] <cjwatson> of the obvious options, the uploader is surely the option that scales best, unless we can work out how to farm that out to the maintainers of the packages whose tests are failing (bearing in mind that those will often be packages unchanged in Ubuntu) ...
[19:06] <cjwatson> my only point is really that having infra maintainers do it scales even worse
[19:07] <doko> sure, but then I think we should just do a no-change debhelper upload to the release pockets, and see what autopkg test failures are triggered
[19:07] <doko> and use that as a reference what is expected to fail
[19:08] <ginggs> who takes care of infrastructure bugs when a package is auto-synced?
[19:08] <cjwatson> it usually winds up being whoever gets blocked by the failures, or people chipping off bits and pieces of the problem as they have time ...
[19:09] <slangasek> doko: known badtests are marked via britney hints, either lp:~ubuntu-release/britney/hints-ubuntu/ for devel or lp:~ubuntu-sru/britney/hints-ubuntu-$release for srus.
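In those branches a hint is one line per package. A hypothetical force-badtest entry for the kind of failure discussed above (version and architecture invented for illustration; the `package/version/arch` form is how per-architecture waivers are usually written, while `package/version` waives the test everywhere):

```
# lp:~ubuntu-sru/britney/hints-ubuntu-zesty
force-badtest gfs2-utils/3.1.9-2ubuntu1/s390x
```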
[19:09] <ginggs> see r-cran-future and r-cran-openssl - tests pass locally and on debian infra
[19:09] <cjwatson> and obviously if it's actually infrastructure bugs then hopefully somebody consolidates the pile of logs into a form that can be reported to infra maintainers
[19:10] <cjwatson> bear in mind that "passes on Debian and fails on Ubuntu" is not necessarily quite enough to prove an infra bug though
[19:10] <cjwatson> it's doubtless a bug somewhere, but could e.g. depend on the virtualisation layer, or be a race, or be unwarranted sensitivity to details of the CPU, or ...
[19:10] <ginggs> cjwatson: fair enough, but where to go from here?
[19:11] <slangasek> ginggs: r-cran-openssl, clearly trying to make an outbound internet connection
[19:11] <cjwatson> I don't have specific advice; when I've had this sort of problem I've usually either worked harder to reproduce the exact way the Ubuntu test infrastructure is set up locally, or added extra debugging to the package
[19:11] <doko> sure, but the recent uninstallation issues in bionic were confirmed to exist, and no working solution was found until the affected packages migrated to the release pocket (using manual give backs for all affected tests)
[19:11] <ginggs> slangasek: i thought that was allowed in autopkgtests (but not during build)
[19:12] <slangasek> ginggs: via proxy only, which these tests are clearly not picking up correctly from the environment
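The mechanism slangasek refers to is the usual `http_proxy`/`https_proxy` environment variables exported inside the testbed; a test that opens outbound connections has to honour them. A minimal sketch of doing that explicitly with the Python standard library (the variable names are the conventional ones, not anything autopkgtest-specific):

```python
import os
import urllib.request

def proxied_opener():
    """Build a URL opener that respects http_proxy/https_proxy from the
    environment, as tests running behind the autopkgtest proxy must."""
    proxies = {scheme: os.environ[var]
               for scheme, var in (("http", "http_proxy"),
                                   ("https", "https_proxy"))
               if var in os.environ}
    return urllib.request.build_opener(urllib.request.ProxyHandler(proxies))
```

In practice `urllib.request.urlopen` already consults these variables by default; tests typically break when they bypass that, e.g. by opening sockets directly or using a client that ignores the environment.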
[19:13] <doko> slangasek: this is a typical issue I see: https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac/autopkgtest-zesty/zesty/s390x/a/alembic/20171204_202448_988c2@/log.gz
[19:13] <doko> how to address it?
[19:14] <doko> another one https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac/autopkgtest-zesty/zesty/s390x/p/python-cinderclient/20171204_204039_d09ea@/log.gz
[19:15] <slangasek> doko: you say that's typical?  that one very much looks like an infrastructure bug that should just be re-tried (I'm clicking the retry button now)
[19:16] <cjwatson> s/re-tried/chased down/?  doesn't look like a natural race symptom
[19:16] <slangasek> the first one was a testbed failure of some description
[19:16] <slangasek> well, the second also
[19:16] <cjwatson> right, but it's pretty scary non-determinism?
[19:17] <slangasek> perhaps that's related to having failed to generate autopkgtest images that have these packages pre-removed
[19:20] <slangasek> when we were having problems last week due to all the images being gone from bos02, Laney mentioned that autopkgtests pick the most recent image by date regardless of whether it's a base cloud image or an adt image; which seems confusing and wrong to me
[19:25] <doko> slangasek: I was looking at https://people.canonical.com/~ubuntu-archive/pending-sru.html#zesty for the ones triggered by python-defaults
[19:26] <doko> sil2100: these 20 days are not accurate. you need to count the time starting from the last autopkg test run
[19:28] <slangasek> doko: I think http://people.canonical.com/~ubuntu-archive/proposed-migration/zesty/update_excuses.html#python-defaults provides a better view on this
[19:30] <doko> slangasek: can we automatically retry autopkg tests with an "unknown" version number?
[19:33] <slangasek> doko: seems like something we might want to report + batch; I wouldn't want to do it entirely without oversight because sometimes those failures represent some problem that is knocking down the test runners, so auto-retrying is going to starve the rest of the queue
[19:34] <slangasek> at the moment, the problem I'm facing with report+batch is that the cron emails are successfully delivered to Laney but don't make it to me :P
[19:34] <doko> ouch
[19:38] <slangasek> the bzr failures are quite odd, and need confirming whether this has mysteriously regressed in -updates or if it's actually triggered somehow by the python-defaults SRU
[19:39] <slangasek> (triggered)
[21:03] <Laney> It is wrong, that's why I had filed a bug for it - and outlined how I thought it could be solved in the bug.
[21:03]  * Laney isn't here lalal
[21:03] <slangasek> :)