[05:35] <cpaelzer> ahasenack: only saw this now, already answered on #server
[08:02] <cpaelzer> slangasek: the bump of bridge-utils from 1.5.9ubuntu2 to 1.5.15ubuntu1 https://launchpad.net/ubuntu/+source/bridge-utils
[08:02] <cpaelzer> slangasek: gave it Conflicts: ifupdown (<< 0.8.17)
[08:02] <cpaelzer> but that version of ifupdown (https://launchpad.net/ubuntu/+source/ifupdown) is not available in Ubuntu
[08:03] <cpaelzer> just realized that on a dist-upgrade this gave me
[08:03] <cpaelzer> The following packages will be REMOVED:
[08:03] <cpaelzer>    ifenslave ifupdown
[08:03] <cpaelzer> so it effectively became mutually exclusive until ifupdown is updated as well
[08:03] <cpaelzer> slangasek: Intentional (I don't think so) or is a new ifupdown already being made?
[08:08] <cpaelzer> xnox: ^^ as your TZ is much closer, if you know some team-plans on this let me know
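For context, the packaging relationship cpaelzer describes would look roughly like this in bridge-utils' debian/control (an illustrative sketch, not the actual stanza from the upload):

```
Package: bridge-utils
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Conflicts: ifupdown (<< 0.8.17)
```

Since no ifupdown >= 0.8.17 existed in Ubuntu at the time, apt could only satisfy the conflict by removing ifupdown, which is exactly the dist-upgrade behaviour shown above.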
[12:06] <rbasak> bdmurray: could you glance at bug 1623125 please? What are your SRU verification expectations there for Artful?
[12:21] <ahasenack> Laney: hi, do you have the secret sauce somewhere that preps ppc64 images for autopkgtest? I'm hitting https://bugs.launchpad.net/ubuntu/+source/autopkgtest/+bug/1630909
[12:22] <ahasenack> or does it "just work" with the ppc cloud images we have?
[12:28] <Laney> ahasenack: I don't know that anyone has ever tried it with qemu like that
[12:28] <Laney> We boot a cloud image and then do things to it over SSH
[12:28] <ahasenack> Laney: how does it get console access in the cloud case? ssh?
[12:28] <ahasenack> ah
[12:30] <xnox> ahasenack, to be honest, i believe locally we should stop using the qemu provider too, and instead use the ssh setup as well.
[12:30] <xnox> ahasenack, e.g. use $ multipass launch bionic; and multipass shell to get into it.
[12:30] <Laney> that'd be good, SSH setup scripts all over
[12:31] <xnox> or lxd, but again with ssh.
[12:31] <xnox> or uvt-launch thing, again with ssh.
[12:31] <ahasenack> Laney: xnox: I'll try ssh
[12:32] <ahasenack> that might be better also because it's how migrations use it, and the bug I'm trying to reproduce is happening only there so far
[12:40]  * rbasak looks forward to xnox writing an openssh dep8 test :-)
[12:41] <cjwatson> there is one already?
[12:41] <xnox> http://autopkgtest.ubuntu.com/packages/openssh ?
[12:41] <rbasak> I mean in terms of using ssh to get to autopkgtest hosts as above.
[12:41] <rbasak> Will that not interfere?
[12:41] <xnox> you know you can start a second server....
[12:42] <xnox> plus we have cloud image testing framework which tests that "ssh comes up" and "is available on default ports" with the "right keys" every day, on all releases =)
[12:42] <cjwatson> openssh's regression tests start a server on a high port
[12:42] <rbasak> My point is that you have to special-case that in the test because of the test environment, which is ugly, and probably the reason pitti did it via qemu where possible in the first place.
[12:42] <xnox> that was not the reason at all.
[12:43] <cjwatson> they won't care about sshd running on the host
[12:43] <rbasak> cjwatson: a openssh dep8 test might reasonably want to check that installing openssh-server results in a listening daemon on the default port.
[12:43] <xnox> at first tests were in schroot; then in lxd/lxc containers; then qemu - so tests could break the testbed - which became inflexible and unreliable; then we started to use the cloud - because that gives you a "machine" more reliably.
[12:44] <cjwatson> rbasak: I think you need a better example, because as the openssh maintainer I'm not going to do that precisely because it would be bloody awkward :)
[12:45] <rbasak> Sorry. I didn't mean to suggest that xnox's suggestion was necessarily bad. Just that it results in some reasonable-to-write tests being awkward to write!
[12:46] <xnox> rbasak, it doesn't, and at the end of the day the cloud test bed is a lot better than the qemu provider one.
[12:46] <rbasak> It really depends on what you're trying to test.
[12:47] <cjwatson> that sort of thing would be awkward for other reasons - it's often useful to be able to debug autopkgtests in ad-hoc environments, where in practice sshd is very often running
[12:47] <xnox> rbasak, and specifically, src:systemd tests used to pass in qemu runner; but not over ssh; due to logind killing and closing ssh connections prematurely due to to systemd regression =/
[12:47] <cjwatson> so this is already a thing that in practice one wants to avoid
[12:47] <xnox> that was not at all fun to chase down.
[12:47] <rbasak> That's the sort of thing I mean.
[12:47] <rbasak> In those cases, the qemu runner is clearly better.
[12:47] <xnox> that's not what we can use in production
[12:48] <cjwatson> if I were writing a "does installing openssh-server start the server usefully" test then I'd nobble the port.
[12:48] <cjwatson> it's just not pragmatic otherwise.
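A minimal sketch of the "nobble the port" idea: a test can ask the kernel for a free ephemeral port and start its test daemon there, so it never collides with an sshd already listening on port 22. The sshd invocation at the end is hypothetical, not what openssh's actual regression tests do.

```shell
# Ask the kernel for a free port by binding to port 0, then release it.
PORT=$(python3 -c 'import socket
s = socket.socket()
s.bind(("127.0.0.1", 0))
print(s.getsockname()[1])
s.close()')
echo "test daemon would listen on port $PORT"
# hypothetically: /usr/sbin/sshd -p "$PORT" -o PidFile="$PWD/test-sshd.pid"
```

There is a small race between releasing the port and rebinding it, which is usually acceptable in a test environment.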
[12:48] <rbasak> The more infrastructure there is in the guest for the sake of the test execution environment, the less we can effectively test the low level stuff.
[12:49] <rbasak> I'm just saying there's a downside to using cloud images for autopkgtest.
[12:49] <rbasak> Not that we shouldn't do it that way in general.
[12:49] <cjwatson> I think in practice this is already a downside that tests have to assume is a possibility.
[12:49] <rbasak> I'm not sure if it's worth the effort, but a dep8 Restriction to say "really I need qemu" might be reasonable.
[12:50] <rbasak> (well, not "qemu" specifically, but "as pure an environment as possible")
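There is no declared restriction for "as pure an environment as possible" today; the closest existing hint is the isolation-machine restriction, which tells autopkgtest the test needs a real or virtual machine and makes container-based runners skip it. A hypothetical debian/tests/control stanza:

```
Tests: daemon-smoke
Depends: openssh-server
Restrictions: isolation-machine, needs-root
```

The test name and dependency here are made up for illustration; isolation-machine and needs-root are real restrictions from the autopkgtest specification.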
[12:50] <xnox> we used to run tests on phones and tablets, bare metal, with provisioning over ADT
[12:51] <xnox> i believe there is a MAAS runner too (or maybe I am imagining things)
[12:51] <xnox> in practice, we do not have infrastructure to run tests with less guest-side assistance than we currently use.
[12:53] <xnox> it's not just the console access; we also need ability to transfer files in and out of an instance, to collect artefacts and push files onto it.
[12:54] <xnox> using serial tty + an assistance VM to mount the disk on, could be a cloud solution, but that would increase our quota usage per test.
[12:54] <xnox> rbasak, there have been suggestions to use an lxd container runner for !isolation-machine tests, because it's faster and most things do not need the full thing (e.g. most of the ruby/php/perl/python tests)
[12:55] <xnox> with the lxd runner, you can have a lot less running inside the guest, as one has direct filesystem access & an out-of-band shell
[13:01] <rbasak> Makes sense
[13:02] <rbasak> Assuming it works.
[13:02] <rbasak> The s390x environment change had quite a bit of fallout :-/
[13:03] <rbasak> Do we have any policy / extra thoughts on conffile changes in SRUs?
[13:04] <rbasak> bug 1661869 seems reasonable to me. The point of the SRU would be to change conffiles though.
[13:13] <xnox> rbasak, well, only in status. It used to be "SKIP" which britney interpreted as "PASS", and britney got all confused when "SKIP" -> "FAIL"
[13:13] <xnox> an oversight in britney to not account for "SKIP" state
[13:31] <mdeslaur> slangasek: mind if I merge python-django?
[13:34] <rbasak> sil2100: I'm looking at bug 1466926. I had some concerns and then read your comment that confirmed my thoughts. I'd appreciate your opinion given the responses to your comment.
[13:35] <rbasak> It seems to me that we should accept the SRU, but we need to take extra care. Do you agree, and any thoughts on what we could do to mitigate please?
[13:35] <rbasak> cpaelzer: FYI ^
[13:37] <rbasak> Any opinion on a wholesale update to 2.4.29 for example, if everything in http://www.apache.org/dist/httpd/CHANGES_2.4 sounds acceptable to SRU?
[13:38] <rbasak> Would that reduce or increase the risk do you think?
[13:38]  * rbasak goes to get lunch
[13:54] <cpaelzer> rbasak: the feedback I read on the bug so far seemed to answer the question sil2100 had => yes, it is important
[13:54] <cpaelzer> I haven't looked at all at doing a whole minor release update
[13:55] <cpaelzer> I thought SRU-wise unless under MRE we prefer backporting the actual fix
[13:55] <cpaelzer> 2.4.18->2.4.29 sounds a lot (too much IMHO)
[13:58] <rbasak> cpaelzer: we don't strictly have MREs any more, though we do use the same quality criteria as before. If the changes are all acceptable for an SRU and we are confident about the quality then we can do it.
[13:58] <rbasak> cpaelzer: I ask because for a more complex change, if there are interactions with the rest of the codebase we don't understand well, then we might be taking less risk taking all upstream changes, IYSWIM.
[13:58] <rbasak> cpaelzer: depends on how deeply we understand this diff.
[13:59] <rbasak> But yeah it's a trade-off against regressions caused by other changes.
[13:59] <rbasak> I wondered how you and sil2100 judge this.
[14:04] <rbasak> juliank: based on https://bugs.launchpad.net/ubuntu/+source/apt/+bug/1750465/comments/10, are you suggesting I should hold on processing slangasek's SRU upload?
[14:05] <juliank> rbasak: I'm not sure what he did precisely, I only saw the bug messages.
[14:06] <rbasak> juliank: http://launchpadlibrarian.net/361447927/plymouth_0.9.2-3ubuntu17_0.9.2-3ubuntu18.diff.gz is his artful SRU upload.
[14:06] <juliank> That should be correct anyway.
[14:06] <juliank> And it probably fixes the bug even if we don't know the exact reason yet...
[14:07] <rbasak> OK. If you and slangasek both agree on what we should land, I'm happy :)
[14:07] <rbasak> I'll accept assuming all the other SRU bits look OK.
[14:09] <rbasak> Thank you for looking
[14:17] <sil2100> rbasak: well, I didn't judge anything, I mean this SRU just felt a bit risky considering the merits
[14:18] <sil2100> rbasak: to me this SRU was still undecided
[14:18] <sil2100> rbasak: but when I looked at it back then I was leaning more into the reject territory - although I didn't see the later comments yet
[14:20] <rbasak> sil2100: sure, I got that part. By "judge" I'm asking for your opinion now :)
[14:22] <mdeslaur> slangasek: too late
[14:30] <ahasenack> Laney: hi, I pushed another gvfs build to https://launchpad.net/~ahasenack/+archive/ubuntu/gvfs-test-fixes-1713098/+packages with the FORCE flag set, as suggested in the upstream bug
[14:33] <Laney> ahasenack: ok, is it published?
[14:34] <ahasenack> Laney: yes
[14:34] <Laney> cool
[14:34] <ahasenack> Laney: gvfs - 1.36.0-1ubuntu2~ppa1
[14:34] <Laney> you want 10 test runs on ppc64el or something?
[14:34] <ahasenack> that would be fine
[14:35] <Laney> going
[14:57] <slangasek> cpaelzer: sorry, I didn't notice the added conflicts.  I had not planned to merge newer ifupdown, but if there's a conflict then we should probably look at doing so
[14:58] <slangasek> mdeslaur: no objections to merging python-django, did you find one that builds better than the one already in -proposed?
[14:58] <mdeslaur> slangasek: It did build, I think it was a python change that got reverted
[15:04] <rbasak> xnox: why is it worth making all dovecot users download new binaries just to fix the dep8 test?
[15:05] <Laney> ahasenack: https://paste.ubuntu.com/p/kNcRc4Dd3c/ looks like that worked, just one trash failure
[15:06] <ahasenack> Laney: we can't be sure, we never saw the failure from ppa tests, did we?
[15:07] <ahasenack> at least it didn't introduce a new error
[15:07] <Laney> there's no reason it would be any different
[15:07] <Laney> to the archive
[15:08] <ahasenack> it's the same build farm?
[15:09] <Laney> yes, it is an almost identical codepath except there's some extra stuff to set up the PPA
[15:09] <ahasenack> Laney: how about this control ppa then, it's gvfs unchanged: https://launchpad.net/~ahasenack/+archive/ubuntu/unchanged-gvfs
[15:09] <cpaelzer> ahasenack: that is probably the easiest one-line way to get what you asked for, slowing down the guest
[15:09] <cpaelzer> ahasenack: sudo cpulimit -l 50 -p $(ps aux | grep $(virsh dominfo b-test | awk '/^UUID:/ {print $2}') | grep -v grep | awk '{print $2}')
[15:09] <cpaelzer> b-test being the guest name
[15:09] <cpaelzer> ahasenack: I don't expect you want to slow I/O, do you?
[15:09] <Laney> that PPA has a superseded version
[15:10] <Laney> we have enough results for gvfs anyway no?
[15:10] <cpaelzer> there is also a libvirt way - but you run qemu directly right?
[15:10] <cpaelzer> you can take out the inner bulk with the pid of any qemu you run
[15:10] <ahasenack> Laney: yeah, it's the one from before your version bump. I can bump it to match the archive (minus ~ppaN)
[15:10] <ahasenack> cpaelzer: I'm using libvirt now (uvt-kvm)
[15:10] <cpaelzer> ah I see
[15:10] <cpaelzer> then just a sec
[15:10] <ahasenack> cpaelzer: didn't see anything about limits in virt-manager, my gui :)
[15:11] <ahasenack> Laney: as we are still experimenting, it would be super if we could see the failure from a ppa build, and then the lack of failures from a ppa build (which we do already)
[15:12] <sil2100> rbasak: ah! Then I misunderstood ;) I'll read up the rest and comment
[15:12] <Laney> ok, if you want
[15:12] <Laney> just trying to save some CO2 from the atmosphere
[15:12] <cpaelzer> ahasenack: read "blkiotune" for disk and "CPU Tuning" for cpu in https://libvirt.org/formatdomain.html
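The tuning sections cpaelzer points at look roughly like this in a libvirt domain XML (the values are illustrative; see the formatdomain page linked above for the full set of knobs):

```xml
<domain type='kvm'>
  <!-- cap the guest at ~50% of one CPU: 50ms of runtime per 100ms period -->
  <cputune>
    <period>100000</period>
    <quota>50000</quota>
  </cputune>
  <!-- relative block I/O weight (100-1000, lower = less I/O bandwidth) -->
  <blkiotune>
    <weight>200</weight>
  </blkiotune>
</domain>
```

The cputune quota/period pair is roughly the declarative equivalent of the cpulimit -l 50 one-liner above, applied persistently by libvirt rather than from outside the process.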
[15:13] <ahasenack> can you trigger it from that ppa, or the fact that it's older prevents that?
[15:13] <ahasenack> cpaelzer: ok, thx
[15:13] <ahasenack> the dep8 error was happening with that version as well
[15:14] <cpaelzer> ahasenack: or just use my cmdline above
[15:14] <cpaelzer> this is one of the cases (debugging, quick and dirty) where it beats the complex libvirt config
[15:14] <Laney> ahasenack: it needs to be newer
[15:14] <ahasenack> Laney: ok, I'll fix that
[15:15] <Laney> we could just run against the archive's gvfs
[15:15] <Laney> this is a bit of a waste of compute imo
[15:17] <slangasek> juliank: thanks for the follow-up comment on the apt+plymouth trigger mess... why do you say that base-files must be fully configured before bash is unpacked?  I don't see that in the dependencies (bash Depends: base-files, it doesn't Pre-Depends:), and I didn't see any logs that showed this
[15:17] <ahasenack> Laney: ok, would you be ok with uploading with that force patch then?
[15:17] <Laney> ahasenack: to the archive?
[15:18] <ahasenack> yeah
[15:18] <Laney> I think you should comment upstream
[15:18] <Laney> I thought it was only meant to be an experiment
[15:18] <Laney> but if he decides to commit that change then we can
[15:18] <juliank> slangasek: I don't say that, I say that apt configured base-files as can be seen in the log. But it probably put it into triggers-awaited state instead of installed, and plymouth-... into triggers-pending, since we configure with --no-triggers
[15:19] <slangasek> juliank: ok, I didn't see that in the logs I skimmed, sorry for overlooking
[15:19] <ahasenack> Laney: ok
[15:19] <slangasek> juliank: but yeah, triggers-awaited or something makes sense
[15:19] <ximion> doko: thank you for uploading LDC 1.8 to -proposed! I didn't dare to file a FFe for it, but having it is very beneficial
[15:19] <ximion> Please tell me if there is anything I can help with (the transition in Debian so far is flawless, only libundead needed a minor change)
[15:32] <jbicha> ximion: maybe you need to file a binnmu bug since ldc migrated to Testing without the rebuilds happening https://release.debian.org/transitions/html/auto-ldc.html
[15:33] <nacc> slangasek: for seeding ruby, if i seed ruby-full (which depends on ruby and ruby-dev), will it be sufficient? That seems to be the upstream ruby recommendation on Ubuntu, and then we'd "simply" need to promote ruby-defaults again?
[15:35] <ximion> jbicha: some packages got binNMUs... I am still not sure when I should ping the release team explicitly for a transition, sometimes they just do it automatically, and sometimes I need to file a bug
[15:35] <ximion> an explicit bug is nicer in this case, I guess (only 4 more packages need to be rebuilt though)
[15:36] <slangasek> nacc: you could also just seed ruby, which seems more correct semantically; and ruby-dev will be pulled in automatically by virtue of our rule that promotes associated -dev packages
[15:36] <slangasek> nacc: it depends on whether ruby-full is something you want to recommend users install
[15:36] <nacc> slangasek: i'm being asked specifically that ruby-full (which the upstream docs recommend) be in main (by rbasak, dpb)
[15:36] <nacc> slangasek: right
[15:36] <nacc> slangasek: i would prefer 'ruby' as well, as it makes more sense to me
[15:36] <jbicha> ximion: the only rebuilds that happened were because new uploads were done in Debian
[15:37] <jbicha> ximion: in this case a bug would help since it appears that the new "shared" naming allowed ldc to migrate to Testing when it normally wouldn't
[15:37] <ximion> jbicha: agreed, I'll file one later today
[15:38] <nacc> dpb1: do you want it under "Other" or should I make a new "Ruby" section? We have no other interpreters in supported-misc-servers or supported*server (which is a metaseed currently)
[15:38] <dpb1> nacc: +1 on ruby
[15:38] <ximion> although I guess what allowed LDC to migrate was britney's ignore-cruft setting - the LDC upload didn't make anything uninstallable with the old cruft still in the archive, so it migrated
[15:39] <dpb1> nacc: I don't think it matters on what section
[15:39] <nacc> dpb1: given they are all from the same source package, i don't think anyone is really going to notice which binary is actually in main, tbh
[15:39] <dpb1> ok
[15:40] <dpb1> the binary will "pull in" the source as supported, I take it
[15:40] <nacc> dpb1: the source will need to be in main for any of its binaries to be in main
[15:41] <dpb1> k
[15:41] <slangasek> the source is always pulled in via the binary.  so it's a question of which binary packages you want to represent as supported
[15:41] <nacc> slangasek: right
[15:41] <dpb1> I think ruby makes the most sense, no argument
[15:42] <nacc> dpb1: given that we might add more to this list, I'll add a section for Language Interpreters, ack?
[15:42] <dpb1> nacc: sounds good
[15:44] <nacc> dpb1: seed pushed
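The seed change presumably looked something like this in the supported-misc-servers seed (a sketch of germinate's wiki-style seed format; the section name comes from the conversation, the exact syntax of the real commit may differ):

```
== Language Interpreters ==

 * ruby
```

Per slangasek's earlier point, seeding the ruby binary pulls in the source package, and ruby-dev is then promoted automatically by the rule that promotes associated -dev packages.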
[16:12] <xnox> rbasak, because it is a regression versus the release pocket, that a security upload introduced; which trips up all other pending SRUs which are reverse-deps of dovecot; there is no better way than SRUing this, to ensure that the next upload (security or updates) will fix this issue
[16:12] <xnox> rbasak, it's not like there is a VCS for the next artful upload of dovecot, whichever that might be.
[16:19] <nacc> slangasek: urgh, LP: #1757344, segfault in 'frontend' -- that's debconf?
[16:21] <slangasek> nacc: it's debconf-ish.  the segfaulting frontend may not be debconf code itself
[16:21] <slangasek> (since debconf is pretty vanilla perl)
[16:22] <nacc> slangasek: yeah, that's what i thought
[16:22] <nacc> slangasek: i guess it could be the normal update-manager frontend?
[16:22] <nacc> slangasek: but, in this case, it's (presumably) not a bug in php, but in that frontend?
[16:22] <slangasek> nacc: update-manager's frontend is the debconf gnome frontend.  But yeah, a frontend crash is unlikely to be php's fault
[16:23] <nacc> slangasek: alright, i'll add a task once i figure out what they were using
[16:36] <rbasak> xnox: VCS for next-artful upload> as an aside, I think we should have that kind of thing
[16:36] <rbasak> xnox: but I'm not happy impacting users for what is fundamentally a developer issue that doesn't impact users.
[16:41] <xnox> rbasak, e.g. even if we commit this to "-proposed" next time there is a CVE -security will not take it, they take -updates only, no?
[16:42] <xnox> rbasak, i think the right thing to do, is to upload this sru.
[16:42] <xnox> on the grand scheme of things it is minuscule compared to the kernel every three weeks.
[16:43] <xnox> rbasak, and it's been less than two weeks
[16:43] <xnox> rbasak, and it does show the security team did not run the autopkgtest of dovecot itself, before uploading the security upload.
[16:45] <rbasak> I believe the security team generally look at -proposed and make a decision
[16:45] <rbasak> Sometimes I've seen them incorporate a verification-done upload to -proposed that hasn't finished aging, for example
[17:11] <bdmurray> andyrock: Are you still about?
[17:14] <andyrock> bdmurray: hey hey
[17:14] <andyrock> what's up?
[17:14] <bdmurray> andyrock: hey, I saw your software-properties change got uploaded. One idea I might not have explained clearly was using distro-info so we don't need this line.
[17:14] <bdmurray> channel='edge', # Remove this once bionic is officially supported.
[17:15] <bdmurray> Although what does "officially supported" mean? 18.04.1 or 18.04 or some other date?
[17:15] <andyrock> officially supported by "livepatch"
[17:15] <bdmurray> Okay so the last one!
[17:16] <andyrock> yeah, livepatch devs told me that before release bionic should be supported in the stable canonical-livepatch snap
[17:17] <andyrock> we can upload a small fix later
[17:17] <andyrock> should be a small one
[17:17] <andyrock> at least we can test it right now
[17:17] <bdmurray> Right, my hope was to avoid that as an SRU but it sounds like it may not be possible.
[17:18] <andyrock> yeah the problem is that it's not possible to query canonical-livepatch and ask "is this release supported?"
[17:18] <andyrock> we need to address this in the future
[17:19] <bdmurray> Okay, that's fine - just wanted to make sure it was considered.
[21:16] <mwhudson> xnox, cjwatson, cyphermox: are any of you awake and willing to talk to me about partition alignment requirements?
[21:16] <mwhudson> (for subiquity reasons)