[09:55] <Odd_Bloke> I'm seeing (Python) socket.getaddrinfo calls which are unsuccessful (i.e. "socket.gaierror: [Errno -2] Name or service not known") take 25 seconds during boot (in cloud-init), where successful ones are near-instantaneous.  I'm not really sure how to proceed in working out what's going on; does anyone have any ideas?
[09:55] <Odd_Bloke> (More details of my investigation in https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1629797)
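The failing call Odd_Bloke describes can be reproduced with a short sketch; `time_lookup` is a hypothetical helper name, not cloud-init code:

```python
# Minimal sketch (not from cloud-init) to time getaddrinfo calls like the
# ones described above; time_lookup is a hypothetical helper name.
import socket
import time

def time_lookup(host, port=80):
    """Return (elapsed_seconds, result_or_exception) for a getaddrinfo call."""
    start = time.monotonic()
    try:
        result = socket.getaddrinfo(host, port)
    except socket.gaierror as exc:  # e.g. [Errno -2] Name or service not known
        result = exc
    return time.monotonic() - start, result

# Per the bug report, a failing lookup during early boot took ~25s while a
# successful one was near-instantaneous.
```

Running this once with a name that resolves and once with one that doesn't makes the asymmetry easy to demonstrate outside of boot.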
[10:33] <rbasak> Odd_Bloke: any network interfaces configured at that point?
[10:34] <rbasak> Odd_Bloke: I wonder if the host is dropping DNS requests where on other hosts an "address unreachable" error is returned immediately.
[10:35] <rbasak> In which case perhaps the default or configured nameservers are wrong.
[10:40] <Odd_Bloke> rbasak: So my assumption is that networking setup is somewhat sensible, because a successful query immediately after a failed one behaves as expected.
[10:40] <Odd_Bloke> rbasak: How can I check what's configured?  Is /etc/resolv.conf canonical?
[10:42] <rbasak> Odd_Bloke: it's only roughly canonical. It might point to 127.0.0.1 or something, stuff may not use it (Python is likely to use it though) and processes have to explicitly reload after it changes (unless that has changed in the last few years).
[10:43] <rbasak> Odd_Bloke: could you perhaps arrange for a tcpdump to be running in the background from an early boot command? Not sure how that could hook in before an interface is up though.
[10:48] <Odd_Bloke> OK, so it looks like 127.0.0.53 is not in resolv.conf when this resolution is happening: http://paste.ubuntu.com/23269725/
[10:48] <Odd_Bloke> (127.0.0.53 being systemd-resolved)
[10:52] <Odd_Bloke> And a `systemd-analyze plot` does show systemd-resolved coming up after cloud-init completes; let's see if I can change that.
[11:02] <rbasak> Is that the real root cause though?
[11:03] <rbasak> If it is 169.254.169.254 timing out, then why is that there?
[11:03] <Laney> @pilot in
[11:03]  * Laney quivers at the queue
[11:06] <Odd_Bloke> rbasak: 169.254.169.254 is the metadata service in EC2 and GCE; that's returned by the DHCP server on GCE where I'm testing.
[11:07] <Laney> @pilot out
[11:07] <Laney> will do that in a minute...
[11:07] <tsimonq2> argh I missed a pilot /o\
[11:07] <Odd_Bloke> Checking what I see in resolv.conf on xenial (where this problem is not exhibited).
[11:08] <tsimonq2> Laney: you still around? can you still sponsor something?
[11:08] <rbasak> Odd_Bloke: OK, but either GCE should be responding promptly to *all* DNS queries to that address, or cloud-init and/or cloud images should not be configuring the system to send *all* DNS queries there, or cloud-init should arrange not to send generic DNS queries that will time out there, and also make sure the rest of the system does not do so.
[11:08] <Laney> tsimonq2: 03/10 12:07:18 <Laney> will do that in a minute...
[11:08] <tsimonq2> Laney: oh that's what you meant, ok sorry
[11:09] <rbasak> Odd_Bloke: relying on systemd-resolved sounds like a crutch to me.
[11:09] <Odd_Bloke> rbasak: So my hypothesis is that maybe on xenial DNS is configured system-wide to fail fast but on yakkety the assumption is that systemd-resolved will always be there, and systemd-resolved is configured to fail fast.
[11:09] <rbasak> Odd_Bloke: sure, I agree that sounds likely. I'm just saying that it doesn't sound like the root cause.
[11:11] <Odd_Bloke> So I see just 169.254.169.254 in resolv.conf on xenial, where we don't have this issue.
[11:17] <Odd_Bloke> I'm not able to easily check that changing the ordering fixes the issue, because ordering cloud-init After=systemd-resolved causes a dependency cycle.
[11:19] <rbasak> I would treat systemd-resolved as a red herring, and focus on finding out what DNS queries are timing out, why they are timing out, why the system is configured to make the queries even though they will time out, and how to get them to not time out despite or regardless of systemd-resolved.
[11:21] <Odd_Bloke> Yeah, 127.0.0.53 is after 169.254.169.254 in /etc/resolv.conf, so I think I'm actually seeing fast failures from the same DNS server that seems to be giving slow ones during boot.
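The ordering Odd_Bloke is checking can be sketched like this; `parse_nameservers` is a made-up helper, and the sample mirrors the addresses mentioned above rather than the actual paste:

```python
# Hypothetical helper (not cloud-init code) to inspect nameserver ordering
# in a resolv.conf, as discussed above.
def parse_nameservers(resolv_conf_text):
    """Return nameserver addresses in the order the resolver will try them."""
    servers = []
    for line in resolv_conf_text.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "nameserver":
            servers.append(parts[1])
    return servers

sample = """\
nameserver 169.254.169.254
nameserver 127.0.0.53
"""
# 169.254.169.254 (the GCE metadata service) is tried before 127.0.0.53
# (systemd-resolved's stub listener).
print(parse_nameservers(sample))  # -> ['169.254.169.254', '127.0.0.53']
```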
[11:32] <Laney> ok
[11:32] <Laney> tsimonq2: go go go
[11:32] <Laney> @pilot in
[11:39] <tsimonq2> Laney: https://bugs.launchpad.net/ubuntu/+source/hardinfo/+bug/1629601
[11:39] <tsimonq2> that
[11:39] <tsimonq2> the only FTBFS left in the lubuntu packageset
[11:39] <tsimonq2> I have to go, be back in a few hours o/
[11:44] <Odd_Bloke> rbasak: http://paste.ubuntu.com/23269859/ is the diff of nsswitch.conf between xenial and yakkety; it introduces "resolve" for hosts, which is provided by libnss-resolve: "nss module to resolve names via systemd-resolved".  Removing "resolve" fixes the boot time issue. \o/
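The nsswitch.conf change Odd_Bloke found can be illustrated with a small sketch; the exact `hosts:` lines below are assumptions for illustration, not the contents of the actual diff:

```python
# Sketch (assumed file contents, not the actual xenial/yakkety diff) showing
# the kind of nsswitch.conf change described above and how to detect it.
def hosts_sources(nsswitch_text):
    """Return the lookup sources listed on the 'hosts:' line, in order."""
    for line in nsswitch_text.splitlines():
        if line.startswith("hosts:"):
            return line.split()[1:]
    return []

xenial_style = "hosts: files dns"
yakkety_style = "hosts: files resolve [!UNAVAIL=return] dns"

# "resolve" (libnss-resolve, backed by systemd-resolved) is consulted before
# plain "dns" in the yakkety-style configuration.
print("resolve" in hosts_sources(yakkety_style))  # -> True
print("resolve" in hosts_sources(xenial_style))   # -> False
```

This matches the observed behavior: removing "resolve" takes systemd-resolved out of the lookup path entirely.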
[11:47] <Odd_Bloke> Hmm, there's a new systemd package with changes to do with nss-resolve; let's see if that fixes this.
[11:53] <Laney> tsimonq2: that looks weird, but maybe?
[11:53] <Laney> xnox: you had your fingerprints on that one ^, want to sponsor?
[11:54] <tsimonq2> Laney: Martin Pitt was gonna sponsor, but it seems he didn't get the time
[11:55] <tsimonq2> including him, there's a total of three acks
[11:55] <tsimonq2> it's just that nobody uploaded
[11:55] <Laney> :)
[11:57] <tsimonq2> if somebody could just upload already... ;)
[11:57] <Laney> just avoiding having to review it if someone else has :P
[11:57] <Laney> if I have to I'll look
[11:57] <tsimonq2> well it has been reviewed
[11:58] <tsimonq2> not my changelog entry though :P
[12:00] <tsimonq2> ok I really have to go
[12:00] <tsimonq2> o/
[12:09] <Odd_Bloke> pitti: https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1629797 is severely affecting boot time of all cloud instances; would you be able to take a look (at least to confirm that my root-causing is correct)?
[12:33] <rbasak> Odd_Bloke: I still have the same opinion. That's not the root cause. The root cause is the *reason* the DNS requests are timing out, not that switching to systemd-resolved exposes it.
[12:34] <rbasak> Ah
[12:35] <rbasak> You might be saying that DNS requests aren't actually going out, because systemd-resolved is unavailable?
[12:35] <Laney> jbicha: are you handling https://bugs.launchpad.net/ubuntu/+source/software-center/+bug/899878 ?
[12:35] <rbasak> In that case, yes, I guess that cloud-init needs to interact with nsswitch/systemd-resolved somehow to arrange for its queries to work.
[12:36] <jbicha> Laney: yes
[12:37] <Laney> jbicha: ok, thanks, will unsubscribe sponsors then
[12:37] <jbicha> ok
[12:37] <Laney> done
[12:47] <Odd_Bloke> rbasak: I think they're going out, but after the resolve service has spent a while waiting for resolved to appear (or something like that).
[12:54] <rbasak> Odd_Bloke: ironic given that one of the benefits of systemd-resolved is to consolidate timeouts!
[13:10] <Odd_Bloke> ^_^
[13:27] <sakrecoer> hi everyone, since the last member with upload rights left us rather abruptly, ubuntu studio has been struggling with this FFe/UIFe https://bugs.launchpad.net/ubuntustudio/+bug/1624690
[13:28] <sakrecoer> now we finally have all the pieces required, and i am wondering if anyone here could help us get this through before RC..
[13:31] <lamont> cjwatson: pitti: I am at a loss to understand how the open-iscsi test that is failing ever worked...
[13:32] <lamont> as best I can tell, it tries to install a deb that never gets built
[13:33] <cjwatson> Pass, I think that postdates my fiddling with it
[13:34] <lamont> wondering if you could throw the current xenial bits against an autopkgtest clone?  (as opposed to -proposed)
[13:42] <Odd_Bloke> cjwatson: ISTR that in the past when we've needed a new kernel module in builders we've had to ask for it (because we can't modprobe).  Am I remembering correctly?
[13:43] <cjwatson> Odd_Bloke: Correct, that's quite a complicated thing to do.  What's the requirement.
[13:43] <cjwatson> ?
[13:43] <Odd_Bloke> cjwatson: overlayfs.
[13:43] <cjwatson> We do it in lp:~canonical-sysadmins/canonical-is-charms/launchpad-buildd-image-modifier
[13:44] <cjwatson> I think.
[13:44] <cjwatson> Do you remember what the last one was?
[13:45] <cjwatson> Oh no, I remember, we do it in lp:launchpad-buildd.
[13:45] <cjwatson> Hackily.
[13:46] <cjwatson> Feel free to propose a merge against that (debian/launchpad-buildd.init) and we can discuss it.
[13:46] <Odd_Bloke> cjwatson: Will do; thanks!
[13:49] <LStranger> hello there
[13:53] <LStranger> I'm concerned about https://bugs.launchpad.net/ubuntu/+source/openbox/+bug/1336521 - how do we solve the problem for users of Trusty and Xenial? I understand the next release could receive a fix soon, but many users are on LTS releases, and this is an important usability bug.
[14:01] <rbasak> stokachu: I've been asked to help you out with some upcoming Juju-related uploads. What do you need, exactly?
[14:03] <stokachu> rbasak: nothing at this moment
[14:03] <stokachu> waiting to hear back from juju qa
[14:04] <rbasak> stokachu: ah, OK. I'll wait for a ping from you then - thanks.
[14:04] <stokachu> thanks, you should also have an email i cc'd you on
[14:05] <rbasak> Received, thanks. That was helpful in giving me the background but it wasn't clear to me if that implied that I was supposed to do something or not. Thank you for clarifying :)
[14:05] <stokachu> :) np
[14:16] <willcooke> doko_, hey.  Can you tell me if this is in good enough state to be reviewed... https://bugs.launchpad.net/ubuntu-terminal-app/+bug/1625074    If not, I will chase people to get whatever is needed done
[15:05] <stokachu> rbasak: referring to the recent juju email, is it better to wait for yakkety (or newer) to contain the proper logic for handling 32bit packages in xenial? or since it's a packaging issue, can we upload yakkety as-is and handle the logic in xenial separately?
[15:14] <rbasak> stokachu: that's probably a question for the SRU team member who'll be accepting this into Xenial. AIUI, the usual "must be in the development release first" requirement is solely to prevent future user-visible regressions on release upgrades. I'm not sure I follow whether that rationale would be impacted in this case.
[15:14] <Laney> @pilot out
[15:19] <stokachu> rbasak: yea im not sure either, but if it was me I would want it in yakkety before pulling it into xenial
[15:20] <stokachu> slangasek: question for you when you're available
[15:20] <slangasek> stokachu: hullo
[15:21] <stokachu> slangasek: o/, so background: juju is dropping 32bit support in yakkety which they want to also drop in xenial
[15:21] <slangasek> stokachu: I don't care about having the juju 32-bit-package-removal package glue in yakkety, provided this gets done before yakkety release
[15:21] <stokachu> do i ask that they put that xenial logic in yakkety before uploading?
[15:21] <slangasek> and provided you think the SRU process gives you adequate testing
[15:22] <stokachu> slangasek: for me I would feel better if the glue was in there before I do another upload to yakkety
[15:22] <stokachu> i'd like to have all the unknowns worked out without having to do multiple yakkety uploads
[15:22] <slangasek> stokachu: well, for yakkety you can just drop the 32-bit builds completely
[15:23] <slangasek> and it's appropriate to then have them added to the 'obsolete packages' list in update-manager so that do-release-upgrade will remove them for you on upgrade to yakkety, just in case
[15:24] <stokachu> ok
[15:26] <stokachu> so they've completed the dropping of 32bit packages, ill make sure it gets added to obsolete packages in update-manager
[15:26] <stokachu> then work with them to handle the logic in xenial
[16:35] <nacc> rbasak: i had an idea this morning for how to integrate MRs into the git workflow -- if you're around, I'd like to bounce them off of you
[16:59] <nacc> jgrimm: fyi, now that python-oslo.privsep's MIR has been approved (LP: #1616764), cinder, nova, nova-lxd should all migrate and drop off the ftbfs list
[16:59] <jgrimm> nacc, cool cool
[16:59] <nacc> coreycb: i assume you saw that python-taskflow grew a (new?) dep on python-pydotplus which is in universe?
[16:59] <nacc> coreycb: so it's stuck in -proposed
[17:00] <coreycb> nacc, <insert bad word>  I'll take a look, thanks :)
[17:00] <nacc> coreycb: thanks!
[17:01] <coreycb> nacc, I was going to take a pass through our stuck in proposed today anyway, so off I go to do that
[17:01] <nacc> coreycb: 2x thanks then :)
[17:01] <coreycb> :)
[18:34] <coreycb> xnox, have any tips for getting an s390 machine to test out autopkgtest changes?
[19:06] <nacc> coreycb: cpaelzer may be able to help, when he's back online, too
[19:06] <nacc> jgrimm: --^
[19:22] <coreycb> nacc, thanks that would be great
[20:07] <nacc> coreycb: let me look if it's documented anywhere
[20:10] <coreycb> nacc, thanks, and if not,  I can follow up tomorrow
[20:12] <nacc> powersj: --^ or maybe you know?
[20:22] <ngaio> On today's (and previous) daily images I'm running into an infinite loop with glxinfo such that no desktop shows on login. Which package should I report this against so it has a fighting chance of being fixed by release?
[20:38] <dobey> ngaio: final freeze is in two days, so unless you have a patch already, i wouldn't get hopes too high of it being fixed
[20:39] <dobey> ngaio: but probably should be filed against whatever package provides glxinfo, or your drivers.
[20:42] <xnox> coreycb, if you are a coredev go to https://bileto.ubuntu.com/ create a ppa, upload, and it will be built & tested on s390x.
[20:42] <xnox> coreycb, or open a bugreport with debdiffs/things to look into
[20:42] <slangasek> xnox: hmm, that seems like the wrong acl, nah?  should be open to all ubuntu-dev, not just core-dev :)
[20:43] <ngaio> dobey, running an Nvidia Quadro 1000M here (2011 era), and given it's from the live CD it must be using the free drivers
[20:43] <ngaio> dobey, so you suggest I file it against nouveau?
[20:47] <xnox> slangasek, enoclue. I think anyone can create the ppa, but a core-dev is needed to dput in any _source.changes.
[20:47] <dobey> ngaio: sounds reasonable, yes
[20:47] <xnox> slangasek, if building from "source packages" rather than branches.
[20:47] <xnox> but maybe that has been changed.
[20:47] <ngaio> dobey, thank you
[20:48] <dobey> xnox: doesn't need to be a coredev, but it's currently only coredevs and trainguards that can upload directly to silo PPAs
[20:48] <xnox> ack.
[20:49] <xnox> coreycb, i'm happy to sponsor things into a silo =)
[20:49] <dobey> and you need to be in some other group to create requests on bileto
[20:50] <robru> dobey: xnox: core dev can do anything
[20:52] <slangasek> xnox: ahh right, hmm I wonder if all of ubuntu-dev should be allowed there or not... probably so but we might have to check for assumptions
[20:53] <xnox> slangasek, not until launchpad ppa copies with new binaries end up in new queue.....
[20:53] <slangasek> xnox: right, that
[20:53] <dobey> robru: right; was just clarifying that it's not only core devs that can do what was requested.
[20:53] <robru> slangasek: also not until we get some kind of ACLs so that tickets aren't world-writable
[20:53] <slangasek> xnox: otoh new binaries == packaging diff; the acl for upload to ppa doesn't have to match the acl for publishing
[20:53] <slangasek> robru: I don't see a clear reason why we should trust ubuntu-core-dev with write access to tickets, but not ubuntu-dev?
[20:54] <robru> slangasek: because ubuntu-dev is an order of magnitude larger than ubuntu-core-dev? (isn't it?)
[20:54] <dobey> robru: but why does that matter?
[20:55] <slangasek> robru: it is, but the only trust boundary here is about which /particular/ packages they're trusted to upload to the Ubuntu archive
[20:55] <robru> dobey: because anybody who can access bileto can edit all tickets? the more people that is, the more we have to trust all of them
[20:56] <dobey> robru: well you're already trusting them by running the software they uploaded to the archive, i guess? :)
[20:56] <robru> slangasek: so you're saying you're fine with all of ~ubuntu-dev having write access to all tickets? I've been worried about the security of this for a while...
[20:56] <slangasek> robru: I'm saying I'm equally ok with ubuntu-dev and ubuntu-core-dev having access; I may be missing the nuances of why the write access is a problem, but if it's a problem for ubuntu-dev I think it's a problem also for ubuntu-core-dev :)
[20:57] <slangasek> because the average ubuntu-core-dev is no more in touch with bileto process than the average ubuntu-dev
[20:57] <dobey> robru: i mean, they can just straight skip ci train and upload to the archive without any problem
[20:57] <robru> slangasek: I guess I'm just not very familiar with what ubuntu-dev is, but in general the more people who have write access to bileto, the more worried about its insecurity I become.
[20:58] <dobey> it's too bad each silo ppa can't reasonably be owned by a different team
[20:58] <robru> dobey: yes that's a feature I've had on my mind for a while.
[20:58] <slangasek> coreycb: why does nova-lxd not have autopkgtests?
[20:59] <dobey> robru: can you not make tickets themselves only be writable by the creator of the ticket and any additional people in the irc nicks field?
[20:59] <dobey> robru: maybe make that field be launchpad usernames instead of irc nicks?
[21:00] <robru> dobey: no, because irc nicks can be trivially spoofed. I need to add a field in the db for team ownership of the ticket.
[21:01] <dobey> robru: sure, but i mean if they were validated as usernames against lp, and only those people could modify the ticket (and thus add more usernames), that seems like it would be a "decent enough" ACL implementation
[21:02] <slangasek> xnox: robru has reminded me that if you can dput to bileto, you can make changes to the *upstream* source and push it to the archive without bileto saying boo; so yeah, the more restrictive acl seems necessary
[21:02] <dobey> robru: you could even steal the little ajax widget to search for people on launchpad :)
[21:03] <robru> dobey: I'd rather not change the semantics of the irc nicks field because that field is currently only used to ping people in the #ubuntu-ci-eng channel and changing those from irc nicks to lp names would break irc pings for a lot of people
[21:03] <dobey> robru: oh ok. well another field then, and auto-fill irc nicks from the LP info :)
[21:04] <jbicha> robru: LP has an IRC field?
[21:04] <robru> jbicha: yes, in your account you can list your IRC nicks on various networks, but it's just a string field and anybody can put anything there. bileto checks for freenode nicks of ticket creators and records that in the ticket, it's purely for notification purposes, as it is trivially spoofed
[21:05] <robru> jbicha: https://launchpad.net/~jbicha yours is listed for irc.ubuntu.com
[21:05] <jbicha> right
[21:07] <dobey> robru: i guess you'll have to port queuebot to telegram, and send messages to people there. irc is old and busted
[21:07] <robru> dobey: sigh, yes.
[21:09] <nacc> caribou: can you give me some context for the dependency of clamav on llvm-3.6 (as opposed to 3.8 (which i think is what llvm-dev is currently pointing to) or 3.9)?
[21:16] <nacc> caribou: i'm asking because llvm-toolchain-3.6 ftbfs in y (testcase failures), and the only reason it's in main is clamav
[22:11] <cjwatson> dobey,robru: Now that landing PPAs are ephemeral, I think you could probably reasonably use Archive.newComponentUploader or something to grant upload access to the landers of each individual ticket.
[22:11] <cjwatson> (Though only if you knew the landers' Launchpad usernames.)
[22:12] <robru> cjwatson: would that work for teams?
[22:12] <cjwatson> Sure
[22:13] <cjwatson> You'd probably still want the option to have more than one person-or-team though
[22:13] <robru> cjwatson: so newComponentUploader grants upload rights to a person or team on just one archive without adding that team as a member of the ppa-owning team?
[22:15] <robru> cjwatson: i wasn't aware it was possible to have upload rights to a ppa without being a member of the owning team
[22:18] <cjwatson> robru: Right, it's an obscure API-only feature but it works fine.
[22:18] <robru> cjwatson: thanks, I'll note that down so i don't forget
[22:19] <cjwatson> The component will have to be 'main', of course.  (Or equivalently you can use newPocketUploader with pocket='Release'.)
[22:19] <cjwatson> Since it's obscure and API-only it hasn't been polished much, hence the slightly weird options :)
[22:20] <cjwatson> I never suggested it for non-ephemeral landing PPAs because it doesn't show up in the web UI so added uploaders would be liable to be forgotten about.
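The API-only route cjwatson describes can be sketched with launchpadlib; this is a hedged illustration, the team/PPA names are placeholders, and it assumes the caller owns the landing PPA:

```python
# Hedged sketch of the route cjwatson describes: granting a lander upload
# access to a landing PPA without making them a member of the owning team.
# Requires launchpadlib (imported lazily inside the function); the
# application name and the team/PPA/lander names are placeholders.
def grant_ppa_upload(ppa_owner, ppa_name, lander):
    from launchpadlib.launchpad import Launchpad

    lp = Launchpad.login_with("bileto-sketch", "production")
    archive = lp.people[ppa_owner].getPPAByName(name=ppa_name)
    # PPAs only have a 'main' component; equivalently, per cjwatson,
    # newPocketUploader(person=..., pocket='Release') would work.
    archive.newComponentUploader(person=lp.people[lander],
                                 component_name="main")

# Hypothetical usage:
# grant_ppa_upload("ci-train-ppa-service", "landing-001", "some-lander")
```

As noted above, this grant doesn't show up in the web UI, which is why it's only a reasonable fit for ephemeral PPAs.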
[22:26] <slangasek> wow :)
[22:27] <robru> Ah
[22:28] <tsimonq2> ooh new Kubuntu dev :D
[22:50] <bdmurray> xnox: Have you had a chance to retest bug 1620525?
[22:52] <xnox> i switched to unionfs =)
[22:52] <xnox> let's see.
[22:54] <xnox> bdmurray, i think it works now on xenial.
[22:54] <xnox> will retest with a yakkety kernel on yakkety too.
[22:54] <bdmurray> xnox: thanks
[22:58] <LStranger> Oh, some people woke up, so maybe I can get an answer. :) I'm concerned about https://bugs.launchpad.net/ubuntu/+source/openbox/+bug/1336521 - how do we solve the problem for users of Trusty and Xenial? I understand the next release could receive a fix soon, but many users are on LTS releases, and this is an important usability bug.
[23:00] <LStranger> The maintainer in Debian promised to push the update into sid and stable soon, but as for Ubuntu I have no idea whom to ask about LTS updates.
[23:01] <bdmurray> Is there a related debian bug report?
[23:02] <sarnold> LStranger: best would be to take a fix from the debian developer, prepare a debdiff for the ubuntu packages, build them locally, make sure it works, then post the debdiff to the bug and ask a sponsor to build it for everyone
[23:02] <LStranger> Yes, https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=838326 is it.
[23:04] <xnox> bdmurray, yakkety is bad
[23:04] <xnox> wait i might be wrong
[23:06] <LStranger> In fact, I rebuilt the package for Trusty and use it, so I'm positive it works, and it should for Xenial too - I haven't checked that yet, but the relevant code in Openbox hasn't changed. I'll check it tomorrow, I think.
[23:07] <bdmurray> LStranger: you might have a look at the following wiki page https://wiki.ubuntu.com/StableReleaseUpdates#Procedure.  Specifically, a test case would be helpful.
[23:08] <LStranger> bdmurray: oh, thank you, will do.
[23:17] <xnox> bdmurray, all is good on yakkety too.
[23:39] <coreycb> slangasek, I agree nova-lxd needs autopkgtests.  we're planning to put some focus on improving autopkgtests for all openstack core packages next cycle.
[23:40] <slangasek> coreycb: ok, great :)
[23:40] <coreycb> xnox, awesome thanks I'll try that out tomorrow