/srv/irclogs.ubuntu.com/2017/12/18/#ubuntu-release.txt

=== s8321414_ is now known as s8321414
[05:12] -queuebot:#ubuntu-release- New binary: cura [amd64] (bionic-proposed/universe) [3.0.3-2] (no packageset)
[05:12] -queuebot:#ubuntu-release- New binary: mess-desktop-entries [amd64] (bionic-proposed/none) [0.2-3] (no packageset)
[05:12] -queuebot:#ubuntu-release- New binary: getmail [amd64] (bionic-proposed/none) [5.4-1] (no packageset)
[05:13] -queuebot:#ubuntu-release- New binary: writerperfect [amd64] (bionic-proposed/universe) [0.9.6-1] (no packageset)
[05:14] -queuebot:#ubuntu-release- New binary: writerperfect [ppc64el] (bionic-proposed/universe) [0.9.6-1] (no packageset)
[05:15] -queuebot:#ubuntu-release- New binary: writerperfect [s390x] (bionic-proposed/universe) [0.9.6-1] (no packageset)
[05:16] -queuebot:#ubuntu-release- New binary: writerperfect [i386] (bionic-proposed/universe) [0.9.6-1] (no packageset)
[05:19] -queuebot:#ubuntu-release- New binary: writerperfect [arm64] (bionic-proposed/universe) [0.9.6-1] (no packageset)
[05:20] -queuebot:#ubuntu-release- New binary: writerperfect [armhf] (bionic-proposed/universe) [0.9.6-1] (no packageset)
=== NCommander is now known as KD2JRT
[07:10] -queuebot:#ubuntu-release- Unapproved: cockpit (artful-backports/universe) [157-1~ubuntu17.10.1 => 158-1~ubuntu17.10.1] (no packageset)
[07:12] -queuebot:#ubuntu-release- Unapproved: cockpit (zesty-backports/universe) [157-1~ubuntu17.04.1 => 158-1~ubuntu17.04.1] (no packageset)
[07:16] -queuebot:#ubuntu-release- Unapproved: accepted cockpit [source] (artful-backports) [158-1~ubuntu17.10.1]
[07:17] -queuebot:#ubuntu-release- Unapproved: accepted cockpit [source] (zesty-backports) [158-1~ubuntu17.04.1]
[07:17] -queuebot:#ubuntu-release- Unapproved: cockpit (xenial-backports/universe) [157-1~ubuntu16.04.1 => 158-1~ubuntu16.04.1] (no packageset)
[07:37] -queuebot:#ubuntu-release- Unapproved: accepted cockpit [source] (xenial-backports) [158-1~ubuntu16.04.1]
=== KD2JRT is now known as NCommander
[09:13] -queuebot:#ubuntu-release- Unapproved: livecd-rootfs (xenial-proposed/main) [2.408.25 => 2.408.26] (desktop-core)
[10:25] -queuebot:#ubuntu-release- New: accepted writerperfect [amd64] (bionic-proposed) [0.9.6-1]
[10:25] -queuebot:#ubuntu-release- New: accepted writerperfect [armhf] (bionic-proposed) [0.9.6-1]
[10:25] -queuebot:#ubuntu-release- New: accepted writerperfect [ppc64el] (bionic-proposed) [0.9.6-1]
[10:25] -queuebot:#ubuntu-release- New: accepted writerperfect [arm64] (bionic-proposed) [0.9.6-1]
[10:25] -queuebot:#ubuntu-release- New: accepted writerperfect [s390x] (bionic-proposed) [0.9.6-1]
[10:25] -queuebot:#ubuntu-release- New: accepted writerperfect [i386] (bionic-proposed) [0.9.6-1]
[10:25] -queuebot:#ubuntu-release- New: accepted qupzilla [amd64] (bionic-proposed) [2.2.0~dfsg1-3]
[10:25] -queuebot:#ubuntu-release- New: accepted qupzilla [armhf] (bionic-proposed) [2.2.0~dfsg1-3]
[10:25] -queuebot:#ubuntu-release- New: accepted qupzilla [amd64] (bionic-proposed) [2.2.2~dfsg1-1]
[10:25] -queuebot:#ubuntu-release- New: accepted qupzilla [armhf] (bionic-proposed) [2.2.2~dfsg1-1]
[10:25] -queuebot:#ubuntu-release- New: accepted qupzilla [arm64] (bionic-proposed) [2.2.0~dfsg1-3]
[10:25] -queuebot:#ubuntu-release- New: accepted qupzilla [arm64] (bionic-proposed) [2.2.2~dfsg1-1]
[10:25] -queuebot:#ubuntu-release- New: accepted qupzilla [i386] (bionic-proposed) [2.2.0~dfsg1-3]
[10:26] -queuebot:#ubuntu-release- New: accepted qupzilla [i386] (bionic-proposed) [2.2.2~dfsg1-1]
[10:26] -queuebot:#ubuntu-release- New: accepted getmail [amd64] (bionic-proposed) [5.4-1]
[10:26] -queuebot:#ubuntu-release- New: accepted mess-desktop-entries [amd64] (bionic-proposed) [0.2-3]
[10:26] -queuebot:#ubuntu-release- New: accepted cura [amd64] (bionic-proposed) [3.0.3-2]
[10:27] -queuebot:#ubuntu-release- New: accepted haskell-store [amd64] (bionic-proposed) [0.4.3.2-1]
[10:27] -queuebot:#ubuntu-release- New: accepted haskell-store [ppc64el] (bionic-proposed) [0.4.3.2-1]
[10:55] <doko> looking to get sphinx migrated. There are two failing autopkg tests
[11:03] <doko> - svgpp/1.2.3+dfsg1-3/s390x unrelated to sphinx
[11:07] <doko> - monkeysign/2.2.3/armhf - unrelated to sphinx, error connecting to the key server?
[11:07] <doko> both tests only succeeded once, so please consider overriding or resetting these tests
[11:13] -queuebot:#ubuntu-release- New binary: breeze-gtk [s390x] (bionic-proposed/universe) [5.11.4-0ubuntu2] (kubuntu)
[11:13] -queuebot:#ubuntu-release- New binary: breeze-gtk [ppc64el] (bionic-proposed/universe) [5.11.4-0ubuntu2] (kubuntu)
[11:14] -queuebot:#ubuntu-release- New binary: breeze-gtk [i386] (bionic-proposed/universe) [5.11.4-0ubuntu2] (kubuntu)
[11:15] -queuebot:#ubuntu-release- New binary: breeze-gtk [armhf] (bionic-proposed/universe) [5.11.4-0ubuntu2] (kubuntu)
[11:16] <apw> doko, hi ..
[11:17] -queuebot:#ubuntu-release- New binary: breeze-gtk [amd64] (bionic-proposed/universe) [5.11.4-0ubuntu2] (kubuntu)
[11:18] <apw> doko, as those are the only two regressions on sphinx it might be appropriate to skiptest sphinx
[11:18] <apw> doko, so one appears to be a keyserver test which is failing, and the other the runner running out of memory
[11:18] <apw> doko, so that seems fine to my eye
[11:20] <apw> doko, hinted
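A minimal sketch of the kind of hint being discussed, assuming the force-skiptest/force-badtest syntax used in the britney hints files (the sphinx version is a placeholder; the other package/version/arch strings are the ones quoted above); either line on its own would be one way to express the waiver:

    force-skiptest sphinx/<version>
    force-badtest svgpp/1.2.3+dfsg1-3/s390x monkeysign/2.2.3/armhf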
[11:23] -queuebot:#ubuntu-release- New binary: breeze-gtk [arm64] (bionic-proposed/universe) [5.11.4-0ubuntu2] (kubuntu)
[11:25] <doko> ta
[11:26] <doko> so the next ones would be python3-defaults and python-setuptools ... that looks uglier
[12:27] -queuebot:#ubuntu-release- New binary: python-octaviaclient [amd64] (bionic-proposed/universe) [1.1.0-1] (no packageset)
[12:37] -queuebot:#ubuntu-release- New: accepted breeze-gtk [amd64] (bionic-proposed) [5.11.4-0ubuntu2]
[12:37] -queuebot:#ubuntu-release- New: accepted breeze-gtk [armhf] (bionic-proposed) [5.11.4-0ubuntu2]
[12:37] -queuebot:#ubuntu-release- New: accepted breeze-gtk [ppc64el] (bionic-proposed) [5.11.4-0ubuntu2]
[12:37] -queuebot:#ubuntu-release- New: accepted python-octaviaclient [amd64] (bionic-proposed) [1.1.0-1]
[12:37] -queuebot:#ubuntu-release- New: accepted breeze-gtk [arm64] (bionic-proposed) [5.11.4-0ubuntu2]
[12:37] -queuebot:#ubuntu-release- New: accepted breeze-gtk [s390x] (bionic-proposed) [5.11.4-0ubuntu2]
[12:37] -queuebot:#ubuntu-release- New: accepted breeze-gtk [i386] (bionic-proposed) [5.11.4-0ubuntu2]
[14:09] <tsimonq2> Qt transition incoming, shouldn't be as big as usual
[14:09] <tsimonq2> (many fewer ABI bumps)
[14:09] <tsimonq2> Ping me with concerns, LocutusOfBorg just pressed the button ;P
[14:13] <slashd> o/ sil2100, could you please release 'sysstat' in -updates for A/Z/X when you have a moment? Thanks in advance
[14:13] <slashd> dgadomski, ^
[14:26] <acheronuk> Qt transition just before Xmas!
[14:26]  * acheronuk hides until holidays are over
[14:30] <sil2100> slashd: ACK, will be doing my SRU shift in a moment
[14:35] <slashd> sil2100, sure, no rush, thanks a lot
[14:36] <slashd> sil2100, it should be my last request for this year ;)
[14:59] <LocutusOfBorg> somehow a test machine seems stuck...
[14:59] <LocutusOfBorg> "autopkgtest [14:08:02]: rebooting testbed after setup commands that affected boot"
[14:59] <LocutusOfBorg> I was rekicking sumo, and it has been stuck for an hour; the other retry went fine in a couple of minutes
[15:13] <apw> LocutusOfBorg, it will timeout and be shot eventually
[15:19] <LocutusOfBorg> I hope so, thanks! However I'm wondering why it happened :)
[15:27] <apw> LocutusOfBorg, it is a cloud, there are a lot of things that can go wrong when attempting a robot-driven reboot
[16:10] <kenvandine> can someone on the SRU team please publish the gnome-software SRU? The last of the bugs have been marked as verified.
[16:16] <sil2100> I'm a bit behind with my SRU shift but I'll get to it in a moment
[17:04] <kenvandine> sil2100, thanks!
[17:11] -queuebot:#ubuntu-release- New binary: prelude-lml-rules [amd64] (bionic-proposed/none) [3.1.0-2] (no packageset)
[17:11] -queuebot:#ubuntu-release- New binary: prelude-lml-rules [i386] (bionic-proposed/none) [3.1.0-2] (no packageset)
[17:11] -queuebot:#ubuntu-release- New binary: prelude-lml-rules [s390x] (bionic-proposed/none) [3.1.0-2] (no packageset)
[17:11] -queuebot:#ubuntu-release- New binary: prelude-lml-rules [arm64] (bionic-proposed/none) [3.1.0-2] (no packageset)
[17:11] -queuebot:#ubuntu-release- New binary: prelude-lml-rules [ppc64el] (bionic-proposed/none) [3.1.0-2] (no packageset)
[17:12] -queuebot:#ubuntu-release- New binary: prelude-lml-rules [armhf] (bionic-proposed/none) [3.1.0-2] (no packageset)
[17:37] -queuebot:#ubuntu-release- New: accepted prelude-lml-rules [amd64] (bionic-proposed) [3.1.0-2]
[17:37] -queuebot:#ubuntu-release- New: accepted prelude-lml-rules [armhf] (bionic-proposed) [3.1.0-2]
[17:37] -queuebot:#ubuntu-release- New: accepted prelude-lml-rules [ppc64el] (bionic-proposed) [3.1.0-2]
[17:37] -queuebot:#ubuntu-release- New: accepted prelude-lml-rules [arm64] (bionic-proposed) [3.1.0-2]
[17:37] -queuebot:#ubuntu-release- New: accepted prelude-lml-rules [s390x] (bionic-proposed) [3.1.0-2]
[17:37] -queuebot:#ubuntu-release- New: accepted prelude-lml-rules [i386] (bionic-proposed) [3.1.0-2]
[18:06] <sil2100> doko: regarding the python* SRUs for zesty - could you take a look at the autopkgtest failures associated with the uploads and check whether any are related to the changes being released? Whenever an upload has this many ADT failures, SRU members will not do the uploader's work of checking whether they're related or not
[18:08] <doko> sil2100: pointer?
[18:11] -queuebot:#ubuntu-release- Unapproved: accepted fwupdate-signed [source] (zesty-proposed) [1.13.1]
[18:12] <sil2100> doko: https://people.canonical.com/~ubuntu-archive/pending-sru.html#zesty and find the two python uploads
[18:12] <sil2100> There's a wall of autopkgtest regressions; the uploader needs to make sure those are not related and/or fix them
[18:14] <doko> sil2100: the -defaults upload can't cause this. please could you point me to the autopkg test failures for the -defaults upload for the release?
[18:14] <doko> we should compare results with this one
[18:15] <doko> sil2100: https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac/autopkgtest-zesty/zesty/armhf/a/automake-1.15/20171128_000327_2fe30@/log.gz
[18:15] <doko> this is a timeout. did you really check these?
[18:15] <sil2100> doko: as I said, I did *not* check *any* of these, it's the responsibility of the *uploader*
[18:16] <doko> in this case you should reject. I'm not responsible for checking for regressions caused by changes in the autopkg test infrastructure.
[18:17] <sil2100> doko: if there are too many failures we check none of them; we expect the uploader to follow the upload through to the end - if there are ADT regressions, the uploader needs to let us know (in the bug or anywhere) that these are not regressions or not issues caused by the upload
[18:17] <doko> sil2100: or tell me where this is documented
[18:17] <sil2100> We can't be expected to look through every failure made by every upload in every series
[18:17] <doko> sil2100: sure, but you can't expect the same from me, or do you?
[18:17] <sil2100> doko: then who should be looking into those?
[18:18] <doko> sil2100: are autopkg tests retried for stable releases?
[18:18] <sil2100> doko: when you upload to the development series, who checks whether the ADT failures are regressions or not?
[18:18] <doko> sil2100: autopkg tests are given back automatically from time to time
[18:19] <Laney> they are not
[18:20] <sil2100> doko: if they fail you can retry them, same as for devel; this doesn't happen automatically
[18:21] <doko> sil2100: is there a list of autopkg tests which were waived for xenial and zesty?
[18:22] <doko> sil2100: what should I do about https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac/autopkgtest-zesty/zesty/s390x/g/gfs2-utils/20171128_132030_67a2d@/log.gz ?
[18:22] <doko> Removing autopkgtest-satdep (0) ...
[18:22] <doko> Exit request sent.
[18:22] <doko> Creating nova instance adt-zesty-s390x-gfs2-utils-20171128-131747 from image auto-sync/ubuntu-zesty-17.04-s390x-server-20171031-disk1.img (UUID a9bfdca9-679f-43e3-8bd8-e6d36ba30522)...
[18:22] <doko> autopkgtest [13:20:29]: ERROR: erroneous package: Test dependencies are unsatisfiable. A common reason is that your testbed is out of date with respect to the archive, and you need to use a current testbed or run apt-get update or use -U.
[18:22] <doko> blame: gfs2-utils
[18:22] <doko> badpkg: Test dependencies are unsatisfiable. A common reason is that your testbed is out of date with respect to the archive, and you need to use a current testbed or run apt-get update or use -U.
[18:22] <doko> that's autopkg test infrastructure
[18:23] <doko> should I repeat that for every package and tell you that this is unrelated?
[18:23] <ahasenack> or, dlm-controld doesn't exist for s390x
[18:23] <doko> well, then it's not a regression, is it?
[18:23] <ahasenack> s390x tests were somewhat recently moved to VMs (from containers), so some tests that were not run before are run now
[18:24] <ahasenack> I had a similar problem with my iproute2 upload
[18:24] <doko> sure, and the work is blamed on the uploader
[18:24] <ahasenack> well, the uploader has to check
[18:24] <ahasenack> once it's determined where the problem is,
[18:24] <doko> and I can't check which ones were waived before
[18:25] <ahasenack> it's not a case where the uploader introduced the problem
[18:25] <ahasenack> but he is in a position to verify that
[18:25] <doko> this doesn't scale
[18:25] <doko> without giving any reference as to which uploads / tests were already waived
[18:27] <LocutusOfBorg> http://bugs.debian.org/583767 do we still need the udeb for libxml2?
[18:27] <ubot5`> Debian bug 583767 in libxml2 "Add udeb" [Wishlist,Fixed]
[18:28] <sil2100> doko: usually what I do with my uploads is check whether the given test was failing/passing before in the autopkgtest history, for instance: http://autopkgtest.ubuntu.com/packages/p/python-ruffus/zesty/s390x etc.
[18:29] <sil2100> doko: if I see it failed for the first time, I would investigate to see what's up - either try a re-run, or try running the test without the new package as a trigger, to see if it's broken by current release packages or by the new upload
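A rough sketch of the re-run step sil2100 describes, assuming the request.cgi retry endpoint on autopkgtest.ubuntu.com that the excuses/pending-sru retry links point at (release, arch and package taken from the example above; the trigger version is a placeholder):

    https://autopkgtest.ubuntu.com/request.cgi?release=zesty&arch=s390x&package=python-ruffus&trigger=python-defaults/<version>

The trigger parameter names the package/version pulled in from -proposed for that run; comparing such re-runs against the package's autopkgtest history page is how one separates a real regression from an infrastructure hiccup.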
[18:29] <doko> sil2100: this is a timeout
[18:30] <doko> sil2100: please could you restore the infrastructure to the state when the test was successfully run, and then retry?
[18:30] <doko> no, you probably cannot
[18:30] <sil2100> doko: we can re-run it without a new package trigger, which will test it against what's in the -updates pocket
[18:30] <doko> but please show me one failing test which is not a direct autopkg test issue or a timeout error
[18:31] <sil2100> doko: but anyway, if a test is obviously a timeout then we need to be informed about that, as it's the uploader's responsibility to check the autopkgtest results
[18:31] <doko> sil2100: what package should get bug reports for the autopkg test infrastructure?
[18:35] <sil2100> doko: you can file a bug against autopkgtests, but what the SRU team needs is: if some tests are unrelated infra issues, write down in the SRU bug which failures have been assessed as infra-related or not
[18:35] <sil2100> It's of course best if the tests are simply re-run and passing, to make sure there are no regressions
[18:35] <doko> sil2100: unless you tell me one or two packages triggered by python-defaults which are not timeouts or dependency issues, I will not do that
[18:36] <sil2100> doko: ok, then I will not release your package, as it's your responsibility to look after the autopkgtests triggered by your upload
[18:36] <sil2100> Maybe rbasak or slangasek have a different take on this
[18:37] <doko> slangasek, Laney, infinity, rbasak: I don't think that the uploader should be responsible for figuring out these issues, these are infrastructure issues. please feel free to raise these with the appropriate teams
[18:38] <doko> sil2100: I'll look at the python2.7-caused autopkg test failures, but please could you give me a list of waived autopkg test failures, so I don't have to investigate duplicates?
[18:38] <sil2100> doko: but seeing that the python packages are verified and have been in -proposed for 20 days without being released by anyone yet, I think others might have skipped over the package as well because of this
[18:39] <sil2100> Ok, I guess I can help out in scanning those
[18:40] <doko> sil2100: again, why should I be responsible for infrastructure issues? so the first problem is that a) nobody is aware of these, b) nobody reports bugs on these, c) the SRU process is blocked on these
[18:40] <sil2100> doko: in devel, are you also not responsible for such things? I mean, if in bionic a test fails due to infra, someone has to re-run it, right?
[18:41] <sil2100> doko: and I think then it's the uploader's responsibility to either hint it in or re-run it
[18:41] <sil2100> doko: yes, other core-devs do that for people, but we send e-mails for packages stuck in -proposed to the uploaders for a reason
[18:42] <doko> sil2100: does hinting really help for infrastructure issues? no
[18:42] <sil2100> doko: but tell me
[18:42] <sil2100> doko: what are you doing when your package is stuck in bionic?
[18:43] <sil2100> doko: or put differently, if you upload to bionic and it gets stuck on infra, what happens?
[18:43] <sil2100> doko: who re-runs it? Who hints it in? Who talks to the infrastructure people?
[18:43] <sil2100> doko: it's usually the uploader
[18:43] <sil2100> It should be the uploader
[18:43] <doko> which is the wrong person
[18:43] <sil2100> Then who does it?
[18:44] <doko> *it* *does* *not* *scale*
[18:44] <sil2100> If not the uploader, who is supposed to scan through your upload's failed autopkgtests, assess if they're infra-related and re-run them?
[18:44] <sil2100> But tell me who does it for bionic?
[18:44] <cjwatson> having infrastructure people go through all failure logs and work out which ones are their responsibility scales even less well, in general
[18:44] <doko> sil2100: please start triggering autopkg tests on debhelper, and fix these issues, then I will fix mine
[18:45] <cjwatson> it probably also doesn't help when problems are misattributed to infrastructure, as above ...
[18:46] <doko> maybe it doesn't help, but I'm tired of cleaning up things that accumulated over a long time and then all get triggered at once
[18:50] <doko> cjwatson: am I wrong calling the gfs2-utils one an infrastructure issue?
[18:59] <cjwatson> doko: I haven't looked particularly extensively, but unsatisfiable dependencies aren't normally an infrastructure bug; that claim would need special evidence to support it, I think
[19:00] <cjwatson> (the fact that it's tagged as a regression when the individual test in question was AIUI not previously run might be an infrastructure bug; it sounds like reverting the infrastructure would be unequivocally the wrong response, though ...)
[19:03] <doko> cjwatson: sure, that doesn't make sense. but confronting me with a list of failures which simply can't be triggered by that change doesn't make sense either
[19:03] <doko> looked now at the ones triggered by python2.7. half of them are triggered by "not-python2.7" issues, looking at the other ones
[19:04] <cjwatson> I'm not sure it's possible for infrastructure to reasonably tell the difference between "regression induced by infrastructure change that caused new tests to become runnable" and "new test is genuinely busted and may indicate a package bug"
[19:04] <cjwatson> other than by punting it for human investigation
[19:05] <doko> ok, so who is supposed to do that? again, I don't think it should be the uploader of a package triggering some hundred autopkg tests
[19:06] <doko> sil2100: how do you treat "acceptable" autopkg test failures, and how do you mark them?
[19:06] <cjwatson> of the obvious options, the uploader is surely the option that scales best, unless we can work out how to farm that out to the maintainers of the packages whose tests are failing (bearing in mind that those will often be packages unchanged in Ubuntu) ...
[19:06] <cjwatson> my only point is really that having infra maintainers do it scales even worse
[19:07] <doko> sure, but then I think we should just do a no-change debhelper upload to the release pockets, and see what autopkg test failures are triggered
[19:07] <doko> and use that as a reference for what is expected to fail
[19:08] <ginggs> who takes care of infrastructure bugs when a package is auto-sync'd?
[19:08] <cjwatson> it usually winds up being whoever gets blocked by the failures, or people chipping off bits and pieces of the problem as they have time ...
[19:09] <slangasek> doko: known badtests are marked via britney hints, either lp:~ubuntu-release/britney/hints-ubuntu/ for devel or lp:~ubuntu-sru/britney/hints-ubuntu-$release for SRUs.
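A minimal sketch of recording an SRU-side waiver in the branch slangasek names, assuming the usual bzr workflow and the force-badtest syntax (the zesty branch follows the $release pattern above; package, version and arch are placeholders):

    bzr branch lp:~ubuntu-sru/britney/hints-ubuntu-zesty
    # add a line like the following to the appropriate hints file, then commit and push:
    force-badtest <package>/<version>/<arch>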
[19:09] <ginggs> see r-cran-future and r-cran-openssl - tests pass locally and on debian infra
[19:09] <cjwatson> and obviously if it's actually infrastructure bugs then hopefully somebody consolidates the pile of logs into a form that can be reported to infra maintainers
[19:10] <cjwatson> bear in mind that "passes on Debian and fails on Ubuntu" is not necessarily quite enough to prove an infra bug though
[19:10] <cjwatson> it's doubtless a bug somewhere, but could e.g. depend on the virtualisation layer, or be a race, or be unwarranted sensitivity to details of the CPU, or ...
[19:10] <ginggs> cjwatson: fair enough, but where to go from here?
[19:11] <slangasek> ginggs: r-cran-openssl, clearly trying to
[19:11] <slangasek> ginggs: make an outbound internet connection
[19:11] <cjwatson> I don't have specific advice; when I've had this sort of problem I've usually either worked harder to reproduce the exact way the Ubuntu test infrastructure is set up locally, or added extra debugging to the package
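A minimal sketch of that local-reproduction approach, assuming the autopkgtest QEMU runner and image builder shipped in the autopkgtest package (this uses a local VM rather than the cloud workers the Ubuntu infrastructure runs, so it only approximates the real environment; release, package and options are illustrative, using ginggs' r-cran-openssl example):

    autopkgtest-buildvm-ubuntu-cloud -r bionic -a amd64
    autopkgtest --apt-upgrade --apt-pocket=proposed r-cran-openssl -- qemu ./autopkgtest-bionic-amd64.img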
[19:11] <doko> sure, but the recent uninstallation issues in bionic were confirmed to exist, and no working solution was found until the affected packages migrated to the release pocket (using manual give-backs for all affected tests)
[19:11] <ginggs> slangasek: I thought that was allowed in autopkgtests (but not during build)
[19:12] <slangasek> ginggs: via proxy only, which these tests are clearly not picking up correctly from the environment
[19:13] <doko> slangasek: this is a typical issue I see: https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac/autopkgtest-zesty/zesty/s390x/a/alembic/20171204_202448_988c2@/log.gz
[19:13] <doko> how to address it?
[19:14] <doko> another one: https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac/autopkgtest-zesty/zesty/s390x/p/python-cinderclient/20171204_204039_d09ea@/log.gz
[19:15] <slangasek> doko: you say that's typical? that one very much looks like an infrastructure bug that should just be re-tried (I'm clicking the retry button now)
[19:16] <cjwatson> s/re-tried/chased down/? doesn't look like a natural race symptom
[19:16] <slangasek> the first one was a testbed failure of some description
[19:16] <slangasek> well, the second also
[19:16] <cjwatson> right, but it's pretty scary non-determinism?
[19:17] <slangasek> perhaps that's related to having failed to generate autopkgtest images that have these packages pre-removed
[19:20] <slangasek> when we were having problems last week due to all the images being gone from bos02, Laney mentioned that autopkgtests pick the most recent image by date regardless of whether it's a base cloud image or an adt image, which seems confusing and wrong to me
[19:25] <doko> slangasek: I was looking at https://people.canonical.com/~ubuntu-archive/pending-sru.html#zesty for the ones triggered by python-defaults
[19:26] <doko> sil2100: these 20 days are not accurate. you need to count the time starting with the last autopkg test run
[19:28] <slangasek> doko: I think http://people.canonical.com/~ubuntu-archive/proposed-migration/zesty/update_excuses.html#python-defaults provides a better view on this
[19:30] <doko> slangasek: can we automatically retry autopkg tests with an "unknown" version number?
[19:33] <slangasek> doko: seems like something we might want to report + batch; I wouldn't want to do it entirely without oversight, because sometimes those failures represent some problem that is knocking down the test runners, so auto-retrying is going to starve the rest of the queue
[19:34] <slangasek> at the moment, the problem I'm facing with report+batch is that the cron emails are successfully delivered to Laney but don't make it to me :P
[19:34] <doko> ouch
[19:38] <slangasek> the bzr failures are quite odd, and need confirming whether this has mysteriously regressed in -updates or if it's actually triggered somehow by the python-defaults SRU
[19:39] <slangasek> (triggered)
=== kmously is now known as kmously-afk
=== ogasawara is now known as Guest76312
[21:03] <Laney> It is wrong, that's why I had filed a bug for it - and outlined how I thought it could be solved in the bug.
[21:03]  * Laney isn't here lalal
[21:03] <slangasek> :)
