[05:00] <pitti> Good morning
[05:02] <pitti> hallyn: "systemctl status" says "bad", looks like this unit wasn't actually installed, or has some error? Alias= are translated into symlinks at install time
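[Editorial aside, not from the log: pitti's point about Alias= can be illustrated with a minimal unit fragment. The unit name `foo.service` and alias are hypothetical; the mechanism (symlinks created at `systemctl enable` time, not at runtime) is what matters.]

```ini
# foo.service (hypothetical) — Alias= lives in the [Install] section.
# It has no effect until the unit is enabled: "systemctl enable foo.service"
# creates /etc/systemd/system/foo-compat.service as a symlink to this unit.
# If the unit was never (re-)enabled, the alias simply doesn't exist and
# "systemctl status foo-compat" reports the unit as missing/bad.
[Install]
WantedBy=multi-user.target
Alias=foo-compat.service
```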
[05:03] <pitti> stgraber: the libnl3 package caused many network-manager failures, so it got pulled after a quick discussion in #u-r; Adam later replaced trusty-updates with a reversion
[05:03] <pitti> stgraber: right, that wouldn't have helped users who already upgraded, only those who didn't yet
[05:09] <infinity> pitti: Should never delete without a revert.  The revert allows those who upgraded but didn't reboot to have a higher chance of being fixed before things go south.
[05:09] <infinity> pitti: Anyhow, thanks to stgraber's script freaking out, I went on the warpath to fix it all.
[05:10] <pitti> infinity: thanks! understood for next time (I ran out of time on Fri); i. e. I'll just leave it to some US folks right away next time
[05:11] <infinity> pitti: Yeah, acting quickly was good, but if you couldn't revert, nick highlighting (or given the severity, phoning) someone who can DTRT would help.
[05:11] <infinity> pitti: I suspect a whole lot of grandparents with broken networks phoned their grandkids over the weekend.
[05:11] <infinity> pitti: But hopefully it's mostly settled out by now. :/
[05:37] <sarnold> pitti: sorry to bother you directly, could you take a loot at my systemd timers for my archive sync? http://paste.ubuntu.com/16471760/
[05:37] <sarnold> (that should read: take a look at)
[05:38] <sarnold> pitti: the goal is to run the rsync, and once that's finished, run the zfs snapshots, then wait four hours before repeating
[05:38] <infinity> sarnold: Is it coincidence or intentional that 4h is the delay we use to trigger non-Canonical mirrors?
[05:39] <infinity> sarnold: (The followup question being "why don't you ask IS to trigger you?")
[05:39] <sarnold> infinity: I had the impression from IS that they only wanted to push-sync Big Mirrors and wanted everyone else to periodically pull
[05:39] <infinity> Perhaps so, yes.
[05:40] <sarnold> infinity: the 4 hours was slightly informed by the "six times a day" mention I saw somewhere.. mostly I wanted lots of snapshots to work with in a hurry, and considered switching to five or six hours between syncs later on..
[05:41] <infinity> sarnold: Well, 4h is a good cadence if you're pulling from a tier2 mirror, since that's when we trigger them. :)
[05:41] <infinity> If you're pulling from archive.ubuntu.com, all bets are off, they update constantly.
[05:42] <sarnold> pulling from archive.ubuntu.com, it's remarkably speedy over rsync :)
[05:42] <infinity> Anyhow, I have no opinions on your units.  I still live in a mindset where replacing cron was silly.
[05:42] <sarnold> I even found the 'fast' IP address and did terrible terrible things with /etc/hosts that I'm not proud of.
[05:42] <infinity> Why mangle hosts?
[05:43] <sarnold> because one IP was five to ten times faster than the others for me
[05:43] <infinity> rsync rsync://ip.add.re.ss::foo?
[05:43] <infinity> I mean, why bother pinning down the hostname instead of just using the IP.
[05:43] <infinity> rsync doesn't vhost.
[05:43] <sarnold> hmm. good point. :)
[05:43] <sarnold> thanks
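[Editorial aside, not from the log: infinity's suggestion — skip the `/etc/hosts` pinning and pull straight from the IP, since rsync has no virtual-hosting — could look roughly like this. The destination path is hypothetical; the IP is the "fast" host sarnold names later in the log (91.189.91.23). This sketch only builds and prints the command rather than running a multi-gigabyte sync.]

```shell
# rsync doesn't vhost, so the hostname carries no routing information;
# addressing the mirror by IP is equivalent and avoids mangling /etc/hosts.
MIRROR_IP=91.189.91.23
# rsync://host/module is equivalent to the host::module daemon syntax.
CMD="rsync -a rsync://${MIRROR_IP}/ubuntu/ /srv/mirror/ubuntu/"
echo "$CMD"
# To actually sync, run the command stored in $CMD.
```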
[05:44] <cpaelzer> good morning
[05:44] <sarnold> re: replacing cron, believe me, I'm half tempted to return this to cron. but getting the one to run after the other finished, under two different user accounts, seemed like a worthwhile enough feature to try out
[05:44] <sarnold> morning cpaelzer :)
[05:44] <infinity> sarnold: Alternately, it might be that what you're looking for is "us.archive.ubuntu.com" :P
[05:45] <infinity> For reasons I've never fully understood, archive.u.c is a round-robin of gb and us.
[05:45] <infinity> So its usefulness fluctuates based on time of day and phase of moon.
[05:46] <sarnold> infinity: even within the us hosts one was way faster than the other
[05:46] <infinity> sarnold: Curious.
[05:46] <infinity> sarnold: IS might want to know about that.  Pretty sure they sit on the same VLAN.  Probably the same switch on the same rack even.  If one sucks, that's odd.
[05:46] <sarnold> especially since I saw roughly no difference between mtr results between the two. the fast one was 85 ms away, the slow one 80 ms away, and stddev was <5 ms on both
[05:47] <sarnold> infinity: they are actually on different routers and different uplinks
[05:47] <pitti> sarnold: the main issue is that you don't want to use a .target for those
[05:48] <pitti> sarnold: as the .target is never stopped (there is no such thing as a "oneshot target", this doesn't make sense)
[05:48] <pitti> sarnold: so once it's running, the timer won't do anything any more
[05:48] <sarnold> pitti: ah :)
[05:48] <infinity> sarnold: Hrm.  Not from me, they aren't.
[05:48] <sarnold> infinity: hmm.
[05:48] <pitti> sarnold: so, drop the .target, have the .timer trigger the rsync unit, and have that Requires/Before= the snapshot unit
[05:49] <pitti> sarnold: and as both are oneshot, they will be restarted on the next timer
[05:49] <pitti> sarnold: also, you don't need to add Before= to one and After= to the other, once is enough
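[Editorial aside, not from the log: pitti's advice — drop the .target, have the .timer trigger the rsync unit, and have that unit Requires=/Before= the snapshot unit — might look roughly like the sketch below. All unit names, users, paths, and the snapshot helper script are hypothetical (the real units are in sarnold's paste); the structure is the point.]

```ini
# archive-sync.timer — matches archive-sync.service by name, so no Unit= needed.
[Unit]
Description=Periodic archive sync

[Timer]
# OnUnitInactiveSec counts from when the service last *finished*, which is
# the "wait four hours before repeating" behaviour sarnold wants.
# OnBootSec kickstarts the cycle, since OnUnitInactiveSec alone never fires
# if the unit has not yet run.
OnBootSec=15min
OnUnitInactiveSec=4h

[Install]
WantedBy=timers.target

# archive-sync.service — the rsync step.  Requires= pulls in the snapshot
# unit, and Before= orders this oneshot's *completion* ahead of it; per
# pitti, no matching After= is needed on the other unit.
[Unit]
Description=Sync the archive mirror
Requires=archive-snapshot.service
Before=archive-snapshot.service

[Service]
Type=oneshot
User=mirror
ExecStart=/usr/bin/rsync -a rsync://archive.ubuntu.com/ubuntu/ /srv/mirror/ubuntu/

# archive-snapshot.service — the ZFS step, under a different account.
# The script is hypothetical: something that runs "zfs snapshot" with a
# timestamped name.
[Unit]
Description=Snapshot the mirror dataset after sync

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/zfs-snapshot-mirror
```

Because both services are Type=oneshot, they deactivate when done, so the next timer elapse starts the pair again.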
[05:49] <sarnold> infinity: 91.189.91.23 was the fastest -- 91.189.91.26 was the slowest.
[05:50] <sarnold> pitti: excellent, good to know, I thought it seemed inelegant for Before/After to be anything except duals of each other, but that forum post was so sure.. (I should have known. :)
[05:51] <infinity> sarnold: Very weird indeed.  I have identical routes to both.  Under severe load, economy would probably be faster due to being a slightly larger machine, but I'm not sure they have enough outbound bandwidth to actually cripple them.
[05:51]  * infinity shrugs.
[05:52] <sarnold> infinity: .23 would routinely give me 4-8 MBps, .26 would routinely give me 120KBps.
[05:52] <infinity> sarnold: Ouch.
[05:52] <infinity> sarnold: If that's reproducible, I suspect IS would definitely like to know.
[05:53] <sarnold> ..especially since I wanted to see ~50MBps from these things :) heh
[05:53]  * pitti needs to disappear for a few hours for a long appointment
[05:53] <sarnold> pitti: thanks for the help
[05:54] <infinity> pitti: Have fun storming the castle.
[06:25] <ginggs> morning! freecad doesn't want to migrate because freecad-doc (arch:all) was dropped. would someone decruft please?
[07:03] <infinity> ginggs: Done.
[07:04] <ginggs> infinity: thanks!
[08:53] <sil2100> cjwatson: hey! We had the 503 again during two image builds - is there a way to re-try those builds somehow?
[08:53] <sil2100> cjwatson: since essentially the livefs builds succeeded, would be a waste of time to rebuild those from scratch
[08:54] <sil2100> cjwatson: also, it's a bit worrying that it happens so frequently recently, in the past I don't remember it happening even once
[09:13] <cjwatson> sil2100: as cdimage@nusakan, run the usual build command (you can get it from cdimage/etc/crontab) but without --live
[09:13] <sil2100> Oh, ok
[09:13] <sil2100> Good to know, thanks!
[10:17] <brendand> why would it be the case that lxdbr0 isn't visible in ifconfig, after i've run lxd init?
[10:17] <brendand> i have of course specified to set up networking
[10:27] <juliank> infinity: Coverage report for APT https://apt.alioth.debian.org/coverage/index.html :D
[10:31] <pitti> brendand: what does "systemctl status lxd-bridge.service" say?
[10:32] <pitti> brendand: note that both lxd.service itself and also the bridge are only started on demand (via socket activation)
[10:32] <pitti> brendand: i. e. you won't actually see the bridge up until you try to start your first container
[10:36] <brendand> pitti, hi btw!
[10:36] <pitti> hey brendand, how are you?
[10:37] <brendand> pitti, i'm good. you know i'm back on the inside, right?
[10:37] <pitti> brendand: yeah, I read it on the ML; welcome back!
[10:37] <brendand> ok, didn't realise it only starts with the containers
[10:37] <pitti> brendand: you didn't hold up on the outside for very long :)
[10:38] <brendand> pitti, yep joining the same club as such luminaries as xnox and mvo :)
[10:38]  * pitti waves to lamont too :)
[10:38] <brendand> pitti, no, unfortunately it didn't last long at all - or fortunately maybe!
[10:38] <brendand> pitti, oh lamont too, heh - he's on my team
[10:39] <Laney> the outside world is that bad eh?
[10:40] <brendand> Laney, yeah it's a dark and scary place
[10:42] <brendand> pitti, so it looks like: http://paste.ubuntu.com/16473414/
[10:43] <brendand> green is good
[10:43] <brendand> but 'failed to setup lxd-bridge' seems bad
[10:44] <pitti> right, so lxd-bridge.start should not succeed in this case
[10:48] <brendand> i feel like i did something very bad to my lxd. even launch doesn't work
[11:12] <farblue> hi all :) Maybe not the right channel to ask but I'm working with fan networking and wondered if anyone knew why the docs suggest docker should be configured with iptables=false when using fan networking
[11:17] <xnox> brendand, so you are the new Yo-Yo =) as elmo called me the other day.
[11:35] <farblue> anyone here know about fan networking?
[11:37] <sladen> farblue: sorry, no.  A possible reason for disabling iptables is that iptables is already being used to perform the static-nat that FAN-network requires
[11:38] <sladen> farblue: however, I would stress that I don't know, and it's not something I've ever used
[11:39] <farblue> ok, that makes some sense. I’m struggling to work out what the problem is but while my containers can talk to each other and the host can talk to them, they don’t seem able to talk to the outside world :(
[11:39] <sladen> farblue: isn't that the point :)
[11:41] <farblue> well, no, not really. I don’t want anything other than the docker hosts to be able to talk to the containers but it is quite useful for containers to be able to talk out to other services
[11:41] <sladen> kirkland: you are quoted on the PR for the fan-networking/OpenStack stuff.  If you're around, would you be able to follow-up with farblue
[11:43]  * sladen reads with amusement "There are several class A network addresses reserved for Class E".  No, there are several /8s reserved for Class E.
[11:43] <farblue> I think half the problem is that docker does all this ‘magic’ stuff under the hood and I strongly suspect turning off iptables in the docker config does more than expected
[11:45] <sladen> farblue: nb. kirkland is normally on US time, so likely to be around later rather than now
[11:46] <farblue> ok, I’ll keep working on the problem and ask again later :)
[11:48] <sladen> farblue: yes, but please stay on IRC!
[11:48] <farblue> sure :)
[11:49] <farblue> if iptables management is disabled in docker then it doesn’t create the iptables FORWARD rules or create the DOCKER chain or the DOCKER-ISOLATION chain and it doesn’t add the PREROUTING rules - which I suspect is rather more than I need disabled
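[Editorial aside, not from the log: for reference, the setting under discussion is the documented `iptables` key in Docker's daemon configuration. A sketch of the change the fan-networking docs suggest:]

```json
{
  "iptables": false
}
```

(Typically placed in `/etc/docker/daemon.json`.) With this set, the daemon skips creating the FORWARD rules and the DOCKER/DOCKER-ISOLATION chains farblue lists, so any NAT or forwarding the containers need for outbound traffic has to be provided by something else — here, the fan network itself.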
[13:35] <barry> Laney: hi.  have you had a chance to look at the libpeas PPA?
[13:36] <Laney> hi barry, not yet
[13:36] <Laney> I'll let you know
[13:36] <Laney> inside CSS currently
[13:36] <barry> Laney: cool, no worries
[13:36] <barry> and thanks!
[13:49] <rharper> infinity: pitti: so for the nm/libnl cluster;  where did I mess up in the process?  in the libnl bug, once we found out the nm break happened in -proposed, I built an update (attached v3 of the debdiff to the bug) and requested a sponsor;  I can't upload myself.
[13:51] <rbasak> rharper: I think it was adding verification-done. This implies that it's OK to copy the current version in -proposed to -updates.
[13:52] <rharper> rbasak: hrm, yes;  I was pretty sure that we tested proposed; that we had unbroken proposed
[13:52] <rharper> which I thought meant that v3 was in proposed already
[13:53] <rharper> that is, I had to create the v3 with Breaks to *unbreak* anyone testing proposed;
[13:54] <rharper> and ISTR that someone helped unbreak proposed by uploading the  newer version;  but maybe that didn't happen in which case changing the tag wasn't correct;
[13:54] <rbasak> I understand how it went wrong - lots of moving pieces, especially when complicated by the extra coordination needed by sponsorship.
[13:55] <rbasak> I'd say that marking verification-done needs double checking of everything, including what version is where and that everything known is accounted for (in this case the Breaks, together with everything else).
[13:56] <rbasak> I wonder if it would be better to mark verification-failed and start again in this sort of case, to reduce the chance of error.
[13:57] <rharper> rbasak: so, where would I check to ensure the right versions are in place?  pulling the Packages version from the the archive ?
[13:57] <rharper> rbasak: as in start a new bug?  wading back into those it's quite a lot to take in after one has paged it out of memory context
[13:58] <rbasak> rharper: rmadison, or https://launchpad.net/ubuntu/+source/libnl3 etc. That should tell you what version of what is where, and sometimes Launchpad has useful diffs to show you to save you downloading and checking.
[13:58] <rharper> rbasak: cool
[13:58] <rbasak> rharper: not necessarily starting a new bug, but marking verification-failed, having the SRU team delete from -proposed, and re-uploading new fixes or something. That's what I was thinking. I don't know if this is a good idea or not in general, but I feel that it might have reduced the probability of error in this particular case.
[13:59] <rharper> ah
[14:02] <pitti> rharper: I think that should have come up during testing, and indeed I considered the v-done as "good enough" (when mass-processing SRUs I don't/can't check every detail again)
[14:03] <pitti> rharper: not easy to put a finger on a particular point in the process, it was sort of an emergent corner case between sponsoring, verification, and not prodding about removing a known-bad proposed version
[14:03] <pitti> rharper: i. e. the half-done version should have been removed from the queue, or marked as v-failed after accepting into -proposed, I think
[14:04] <rharper> pitti: yeah; I think not being able to make those changes myself makes things harder since a sponsor has to grok all of what happened as well;
[14:05] <rbasak> rharper: thank you for asking the question BTW. respect++.
[14:05] <pitti> rharper: everyone can mark as v-failed, and everyone needs to prod the archive admins about dropping an upload from the queue
[14:06] <pitti> rbasak: indeed, echoing rbasak; thanks for thinking about the post-mortem
[14:06] <rharper> rbasak: only way to learn =)  also; I felt really bad since it happened while I was out; everyone here was putting out a fire I started.  =(
[14:06] <hallyn> pitti: could the 'systemctl status' -> 'bad' error be related to bug 1579922 ?
[14:07] <pitti> hallyn: yes, very plausibly
[14:07] <pitti> hallyn: I did see the report and your call for feedback; didn't get to that yet, sorry
[14:09] <hallyn> np.  i assume it has to do with the manual work i do in libvirt-bin.preinst, i suspect i should move another file or something.
[14:09] <rbasak> pitti: it still feels wrong that there's only one person standing in the way of failure here though. Do we need more staffing on the SRU team so closer reviews are possible before accepting v-done?
[14:09] <hallyn> i didn't get around to reproducing with a set of empty dummy packages
[14:09] <pitti> rbasak: not really realistic, I think
[14:10] <pitti> second-guessing v-done would mean a wholly new second v-really-done, and that's pointless as it doesn't add anything new
[14:10] <pitti> rbasak: in this case, more testing results from "real users" would have helped
[14:11] <rbasak> I'm thinking something like "<random> v-done; I checked X and Y"; "<SRU> <reviews what should be checked> what about Z?"
[14:11] <pitti> rbasak: and we run the SRU process under the assumption that some folks run -proposed and report regressions
[14:11] <pitti> which apparently didn't happen despite this being in -proposed for like half a year
[14:11] <rbasak> So trusting results from v-done, but confirming that the right things were checked.
[14:11] <rbasak> In this case we all knew what was needed, so I don't think additional testing would have helped.
[14:12] <rbasak> What I think we needed was clarity over what we already knew.
[14:12] <rbasak> (between us)
[14:12] <pitti> rbasak: to me as an SRU process guy it wasn't at all obvious from this bug that this broke NM
[14:12] <pitti> it was supposed to fix NM :)
[14:12] <rbasak> Well, it fixed libnl3 but exposed a bug in NM.
[14:13] <rbasak> pitti: how about a tag to say "this is complex, needs two full reviewers please?"
[14:13] <rbasak> Most SRUs are trivial compared to this one.
[14:14] <pitti> yeah, SRUs depending on each other are rare
[14:14] <pitti> which is probably why it's easy to forget to add Depends:/Breaks:
[14:14] <rbasak> Doesn't even need SRU team. Just another uploader.
[14:15] <pitti> and given enough luck (installing NM first), even ten testers would have confirmed it works
[14:15] <pitti> the root cause was to introduce a change that depended on a change in NM without saying so
[14:15] <rharper> pitti: we didn't know it broke NM until we uploaded to -proposed; mostly because the testing done was on server (with no NM);
[14:15] <pitti> and that most probably wasn't even known initially
[14:15] <pitti> exactly
[14:15] <rbasak> Another issue is that after the initial problem was revealed, we told testers that we knew about the issue.
[14:16] <pitti> at that point we should have marked this as v-failed
[14:16] <pitti> or ask an AA to pull the package
[14:16] <rbasak> After that test results become muddled because we don't know if the tester accounted for the original regression proposed or not. They might fail the new version but not report because they think we already know because of the failure in the initial version.
[14:17] <rbasak> Yeah. I guess one simple thing to do is to delete from -proposed immediately if we know that it isn't suitable to land in -updates.
[14:17] <rbasak> And if that isn't immediate (eg. SRU team member online to immediately help with a replacement), then v-failed.
[14:18] <pitti> that or tagging v-failed, which can be done by everyone
[14:29] <juliank> infinity: pitti (?): bdmurray (?): Looking for endorsements on https://wiki.ubuntu.com/JulianAndresKlode/DeveloperApplication-PPU (or could I rather apply for core-dev?).
[14:31] <juliank> mvo is not here, which is somewhat unfortunate
[14:33]  * juliank contributes in somewhat unusual ways, so there is not that much evidence of what he does (mostly just told mvo to syncpackage something)
[14:35] <seb128> could somebody from the SRU team look at unity-control-center / unity-settings-daemon in the xenial queue? they are small patches and things the oem team is wanting
[14:48] <juliank> Anyone thought about AppArmor profiles for APT's http and https methods? I'm currently implementing seccomp for the former, but I thought it might make sense to also get some AppArmor safety (not an expert on that, though)
[14:51] <juliank> I should probably also chroot() the method into the partial dir
[14:51] <rbasak> juliank: that sounds like a good idea. AIUI those run in separate processes so should be fairly easy to do?
[14:52] <juliank> rbasak: I think the problem is that it's not clear where they are fetching to. APT has CWD, /var/lib/apt/lists/partial and /var/cache/apt/archives/partial
[14:52] <juliank> But some other programs also use them to fetch stuff into other dirs
[14:53] <rbasak> I see.
[14:53] <juliank> Also we have proxy scripts we need to run :/
[14:53] <juliank> But we could at least use AppArmor to protect against writes to /usr and friends
[14:55] <rbasak> That would certainly be a useful start.
[14:57] <juliank> Although: They currently cannot do that anyway (under normal circumstances) since they run as a separate user
[15:33] <nacc> rbasak: still around?
[15:57] <coreycb> arges, hi, regarding the neutron-* stable release for xenial bug 1580674.
[15:58] <coreycb> xenial and yakkety are currently at 8.0.0-0ubuntu1
[15:59] <coreycb> upstream will release the first beta of 9.0.0 for newton in two weeks which we'll then upload to yakkety
[15:59] <coreycb> ara, can we get an exception to upload the stable release to xenial before that?
[15:59] <coreycb> arges, ^  (sorry ara)
[15:59] <ara> coreycb, no worries :)
[16:19] <infinity> rharper: Absolutely what went wrong is that it should have been marked v-failed, not v-done.
[16:19] <infinity> rharper: It would have flipped back to v-needed when the new version was uploaded (which never happened).
[16:34] <bdmurray> pitti: wrt bug 1446537, would it be possible to tag bugs apport-hook-error that have an attachment named "HookError.*" so it is easier to find issues like this?
[16:48] <farblue> I’m still working with the fan networking and it seems to me that a docker container on the fan network doesn’t have a route out of the fan unless you enable iptables management in docker
[16:51] <farblue> could it be because the fan network is overlay’d on top of a physical ethernet interface that doesn’t actually have a route out anywhere?
[16:51] <farblue> anyhow, home time for me now.
[17:04] <arges> coreycb: i'm not sure what exception you mean. the version in xenial can't be >> than the version in yakkety
[17:10] <coreycb> arges, ok I wasn't sure if there were any exceptions to that rule.  we can probably upload the xenial version to yakkety for the next 2 weeks.
[17:36] <kirkland> farblue: hi -- you're interested in fan networking?
[18:07] <coreycb> arges, can you reject the mitaka neutron* uploads?  there are 4 packages for 8.1.0-0ubuntu1.
[18:07] <coreycb> arges, these are for xenial
[18:09] <arges> coreycb: ok
[18:10] <arges> done
[18:10] <coreycb> arges, thx
[18:39] <bdmurray> pitti: I ended up pushing a change to the ubuntu hook to the yakkety branch of apport re apport-hook-error
[18:42] <sarnold> pitti: thanks for your help yesterday, my timer ran twice without trouble since then :)
[19:05] <rharper> infinity: ok; thanks
[20:03] <shadeslayer> chrisccoulson: hi, I was wondering, https://launchpadlibrarian.net/259046127/firefox_46.0.1+build1-0ubuntu0.16.04.1_46.0.1+build1-0ubuntu0.16.04.2.diff.gz seems to have extra things that are not present in https://code.launchpad.net/~mozillateam/firefox/firefox.xenial
[20:04] <shadeslayer> could you please update the branch? :)
[20:05] <shadeslayer> chrisccoulson: specifically : https://bazaar.launchpad.net/~mozillateam/firefox/firefox.precise/revision/1070
[21:25] <karstensrage> are there any backporters that can help with  #1562434 and #1561837
[21:26] <karstensrage> ive tested them up the wazoo with ppa's and vms
[21:26] <karstensrage> they are a library and a pam module so im not sure how to propagate that information
[21:27] <karstensrage> and its already in xenial, tested and working
[21:28] <karstensrage> i will help anyone with configuring PAM in some way, and if credentials are required those can be provided as well
[21:54] <karstensrage> or is there someone that would consider testing so that the backporter doesnt have to just take my word for it?
[22:01] <karstensrage> debfx, you possibly, youre the newest approved backporter?
[22:03] <karstensrage> Laney? broder?
[22:03] <sarnold> bueller?
[22:04] <karstensrage> it sure feels like that
[22:05] <Unit193> Tempting to try getting a backport instead of SRU, because the former seems easier and more likely. >_>
[22:07] <karstensrage> SRU?
[22:07] <sarnold> stable release update
[22:07] <Unit193> karstensrage: Different matter at hand, but st..dang.
[22:07] <sarnold> the process for updating existing packages in published releases
[22:07] <karstensrage> getting it in xenial proposed was easy
[22:08] <karstensrage> getting a backport is proving frustratingly impossible
[22:08] <Unit193> karstensrage: FWIW, part of that is limited number of backporters too, iirc.
[22:08] <karstensrage> yeah ive heard it all
[22:09] <Unit193> LP 1582713, LP 1562358.
[22:16] <mapreri> I wonder, is a SRU acceptable just to fix an appstream issue (including bigger icons)?  my pet package is not showing up in that gnome-software thinghy allegedly for this...
[22:20] <sarnold> mapreri: it looks sort of like the infrastructure should be there, there's icons-* appstream packages in the -updates, -security, and -proposed pockets
[22:20] <sarnold> s/packages/blobs/
[22:21] <mapreri> well, that's what I'd expect from an archive that supports DEP-11, I'd be surprised if those files weren't there.