[11:33] <seb128> xnox, hey, pointing bug #1829829 in case it's useful, saw it while doing some triaging
[11:33] <seb128> but I guess that's probably an issue you are already aware of?
[11:36] <seb128> rbalint, btw did you see https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=929229 (was raised on bug #1803993)
[11:41] <xnox> seb128:  i am aware, do not have solution/fix yet. didn't investigate yet.
[11:41] <seb128> k
[11:41] <xnox> seb128:  rbalint: re X regression. rbalint should we revert the upload in eoan?
[11:59] <rbalint> xnox, seb128 yes, i saw it and i started looking into it
[12:01] <rbalint> xnox, seb128: the regression still seems to be better than leaking passwords and breaking keyboard on gdm after a few logouts, but i'm also open to reverting it until everything is sorted out
[12:03] <rbalint> xnox, with or without the revert i think it would be beneficial to have ddstreet's autopkgtest fixes in eoan and in older releases, even going in ahead of the VT fix
[12:04] <seb128> rbalint, is the autopkgtest issue understood/has a fix? see backlog, x_nox said it was not investigated yet?
[12:08] <xnox> seb128:  there are some pending autopkgtest fixes; and i think there are some new regressions too.
[12:09] <rbalint> seb128, there is an approved mp https://code.launchpad.net/~ddstreet/ubuntu/+source/systemd/+git/systemd/+merge/366857 and i think xnox referred to the vt regression as not being fully understood
[12:10] <rbalint> seb128, xnox locally systemd autopkgtest passed for me for cosmic and up, but storage test always failed in bionic
[12:10] <xnox> rbalint:  storage test is worked on; so that's ok.
[12:12] <ddstreet> seb128 re: lp #1829829 the problem is most of the time after the testcase issues a reboot, autopkgtest-virt-ssh can't reconnect to the testbed, but it's only for amd64 and i386 archs
[12:13] <ddstreet> i suspected there's some problem with the prodstack intel-based instances being used, since i think the other archs use different testbed deployment methods, but i have no access to any of it so i can't tell
[13:27] <seb128> ddstreet, did you try to ask vorlon / Laney / juliank about the prodstack thing?
[13:28] <Laney> you mean scalingstack, fwiw
[13:29] <Laney> xnox got pinged about that and it is on his list to look at
[13:29] <seb128> thx
[15:42] <jamespage> wgrant, xnox: do you happen to know how big a filesystem a PPA builder has? I think I've just seen one pop with builds for the latest ceph release
[15:43] <xnox> jamespage:  i can't remember, maybe something like 40GB or 60GB
[15:46] <cjwatson> jamespage: 60GB I believe
[15:46] <jamespage> ok
[15:47] <jamespage> monitoring how big it gets to see
[15:47] <cjwatson> ceph wouldn't have been the first source package I'd pick to be likely to run it out of disk
[15:48] <cjwatson> though OK, last build in disco apparently took nearly 50GB
[15:48] <jamespage> I got a load of "unable to copy file" errors during the install phase
[15:48] <cjwatson> some time spent working out what's quite so fat might be well-spent
[15:49] <cjwatson> expanding requires having cloud space to expand all the instances so may not be straightforward
[15:49] <jamespage> ack - will dig on this - I have one suspect that might make a difference
[15:51] <xnox> i hit the disk limit before when mongo accidentally went to build every single little test case binary, with 2GB of debug symbols statically linked....
[15:55] <jamespage> hmm
[15:55] <jamespage> eoan-amd64      915G   63G  806G   8% /run/schroot/mount/eoan-amd64-f0f14aa3-e236-4c78-a327-37af1bff86ad
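For cjwatson's suggestion of "working out what's quite so fat", a per-directory size summary is the usual first step. A rough Python equivalent of `du -s */ | sort -rn` over a build tree (a sketch only; on a real builder you would just use du or ncdu):

```python
import os


def dir_sizes(root):
    """Total bytes of regular files under each top-level subdir of `root`."""
    sizes = {}
    for entry in os.scandir(root):
        if entry.is_dir(follow_symlinks=False):
            total = 0
            for dirpath, _dirs, files in os.walk(entry.path):
                for name in files:
                    path = os.path.join(dirpath, name)
                    if not os.path.islink(path):
                        total += os.path.getsize(path)
            sizes[entry.name] = total
    # Biggest offenders first, like `du -s */ | sort -rn`
    return sorted(sizes.items(), key=lambda kv: kv[1], reverse=True)
```

Running this over the ceph build directory would point at whether it is debug symbols, test binaries, or something else eating the ~50GB cjwatson mentions.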
[19:40] <ahasenack> seb128: thanks for triaging https://bugs.launchpad.net/ubuntu/+source/samba/+bug/1829772
[19:40] <seb128> ahasenack, np!
[19:41] <seb128> ahasenack, there have been a few samba segfault reports in recent bugs, I wonder if one of the recent security updates has a regression
[19:41] <seb128> ahasenack, but those reports don't really have enough data so it's difficult to say...
[19:41] <ahasenack> yeah
[19:42] <ahasenack> and no core attached
[19:45] <ahasenack> "It appears to be caused by receiving SMB requests from the Internet." wrote one
[19:45] <seb128> juliank, hey, have you seen reports like https://errors.ubuntu.com/problem/bcd4cb93a6d3f4bb93cc6ea7534e293f4a744849 before?
[19:45] <seb128> it was flagged as a software-properties regression in the recent bionic SRU but that code is in aptdaemon and pre-existing
[19:46] <seb128>     from aptdaemon import client
[19:46] <seb128>   File "/usr/lib/python3/dist-packages/aptdaemon/client.py", line 1570
[19:46] <seb128>     async = reply_handler and error_handler
[19:46] <seb128> which triggers a
[19:46] <seb128> SyntaxError: invalid syntax
[19:48] <juliank> seb128: ack, another python3.7 ? issue
[19:49] <juliank> Haven't seen them, but async became a keyword or something
[19:49] <juliank> So gotta rename async to async_handler or similar
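The SyntaxError pasted above is easy to reproduce: from Python 3.7 on, `async` is a hard keyword (it was only deprecated as an identifier in 3.6), so aptdaemon's line fails at parse time, before any code runs. A minimal sketch, run under Python >= 3.7 (the `run_async` rename is illustrative only, not aptdaemon's actual fix):

```python
# `async` became a reserved keyword in Python 3.7, so using it as an
# identifier is a parse-time SyntaxError -- the module fails to import.
snippet = "async = reply_handler and error_handler"  # aptdaemon's old line

try:
    compile(snippet, "client.py", "exec")
    parses = True
except SyntaxError:
    parses = False

# The fix is a plain rename; any non-keyword identifier works:
compile("run_async = reply_handler and error_handler", "client.py", "exec")

print("old line parses on this interpreter:", parses)
```

Because the failure happens at compile time, it hits every import of aptdaemon.client under 3.7, which is why only users with a non-default python3 see it on bionic.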
[19:51] <seb128> why is that only an issue now?
[19:52] <seb128> hum, my bionic machine is python3.6
[19:53] <seb128> did those users change their default python?
[19:53] <seb128> ah, that was fixed in https://launchpad.net/ubuntu/+source/aptdaemon/1.1.1+bzr982-0ubuntu20
[19:53] <seb128> that should be SRUed to bionic I guess then, if we have users updating their python to 3.7 in that series (or are we doing that for them?)
[19:57] <juliank> seb128: people do sometimes mess up their systems, yes
[20:02] <seb128> juliank, mwhudson, can you handle SRUing that to bionic?
[20:02] <seb128> it looks like we are getting users opting in for using python 3.7 by default on bionic
[20:02] <seb128> we should probably deal with that...
[20:03] <juliank> seb128: we should tell them don't do that
[20:03] <seb128> doko, vorlon, ^opinion about that?
[20:04] <juliank> seb128: We do not build most of the modules for 3.7, so there's no way switching to 3.7 works
[20:05] <juliank> Which  makes me wonder if those are botched 18.04->18.10 upgrades
[20:06] <juliank> or actually, botched discos
[20:06] <seb128> unsure
[20:06] <seb128> that bucket got flagged as a software-properties SRU regression and blocking that SRU
[20:07] <juliank> That should certainly be ignored
[20:07] <seb128> k, thx
[20:33] <vorlon> seb128: they get to keep both pieces if they do that, yes.  I don't think we have a good way to tell users not to clobber python3 on the path with something not from the python3 package.
[20:37] <sarnold> while we're on this topic, this bug sure looks like a user changing /usr/bin/python -- but the Dependencies.txt attachment doesn't show anything wrong for the python-minimal package https://bugs.launchpad.net/ubuntu/+source/python-django/+bug/1829857
[20:38] <sarnold> do dependencies.txt [/broken/path] annotations not catch symlinks going to the wrong place?
[20:43] <vorlon> sarnold: yeah we're probably unfortunately reliant on the dpkg md5sums, and symlinks have no md5sum so
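vorlon's point can be shown concretely: content checksums like dpkg's `.md5sums` hash what a path *contains*, and opening a symlink follows it to the target, so there is no recorded checksum for the link itself to compare against after a user repoints it. A small sketch with hypothetical paths (not the real dpkg logic):

```python
import hashlib
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    shipped = os.path.join(d, "python3.6")
    rogue = os.path.join(d, "python3.7")
    with open(shipped, "w") as f:
        f.write("interpreter shipped by the python3 package")
    with open(rogue, "w") as f:
        f.write("interpreter the user installed themselves")

    link = os.path.join(d, "python3")
    os.symlink(shipped, link)
    with open(link, "rb") as f:           # hashing follows the link
        md5_before = hashlib.md5(f.read()).hexdigest()

    # User clobbers the symlink; the content hash now reflects the rogue
    # target, but no checksum was ever recorded for the link itself, so a
    # md5sums-based scan has nothing to flag as [modified].
    os.remove(link)
    os.symlink(rogue, link)
    with open(link, "rb") as f:
        md5_after = hashlib.md5(f.read()).hexdigest()
    repointed_to = os.path.basename(os.readlink(link))
```

Only something like `os.readlink()` against an expected target reveals the change, which is presumably what the apport hook bdmurray links below has to do instead.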
[20:44] <vorlon> bdmurray: ^^ is that a limitation you're aware of?
[20:44] <sarnold> vorlon: dang. thanks for giving it a look
[20:59] <bdmurray> sarnold: what does "[/broken/path] annotations" mean?
[21:00] <sarnold> bdmurray: in https://launchpadlibrarian.net/424709223/Dependencies.txt the bit "initramfs-tools 0.131ubuntu19 [modified: usr/sbin/update-initramfs]
[21:01] <bdmurray> sarnold: I don't think so and that's why I added this https://bazaar.launchpad.net/~ubuntu-core-dev/ubuntu/eoan/apport/ubuntu/view/head:/data/general-hooks/ubuntu.py#L526
[21:02] <sarnold> bdmurray: YES! :D Thanks!
[21:05] <bdmurray> sarnold: bug 1681528 - I guess it wasn't SRU'ed