[04:51] <pitti> Good morning
[04:57] <Unit193> Howdy.
[04:58] <Unit193> pitti: You seen LP 1617063?
[04:59] <pitti> Unit193: should have been fixed by https://launchpad.net/ubuntu/+source/netcfg/1.138ubuntu2
[04:59]  * pitti dupes
[05:03] <Unit193> So should be fixed if one has ubiquity 16.10.10, and if it isn't?
[05:07] <Unit193> pitti: What's supposed to be doing that on a live system, btw?
[05:08] <pitti> Unit193: livecd-rootfs creates that netplan conf file
[05:08] <pitti> Unit193: this only affected d-i installs (alternate, netboot)
[05:09] <Unit193> Niiiiice. >_<
[06:25] <cpaelzer> good morning
[06:39] <pitti> xnox: http://autopkgtest.ubuntu.com//packages/s/software-properties/yakkety/amd64 looks like fallout from gnupg2?
[06:52] <jbicha> xnox: but please look into https://code.launchpad.net/~jbicha/software-properties/use-gi-require_version/+merge/304193 before doing another software-properties upload, I've already rebased twice
[08:10] <flexiondotorg> Anyone here who can add an additional package to my PPU package set please?
[08:10] <flexiondotorg> I uploaded all of MATE 1.15 to the Yakkety archive on Friday.
[08:10] <flexiondotorg> But libmatemixer is not in my package set, so was rejected.
[08:11] <flexiondotorg> Now I have broken MATE in Yakkety, because several core components are missing a required package :-(
[08:12] <flexiondotorg> rbasak, Can you help with the above?
[08:24] <rbasak> flexiondotorg: looking
[08:24] <flexiondotorg> rbasak, Many thanks.
[09:02] <pitti> smoser: new c-i works well here (in y); I created a xenial image template with the y cloud-init plus the new invoke-rc.d/service, and now everything is smooth as silk
[09:03] <pitti> smoser: I uploaded the i-s-h SRU and updated bug 1576692, I think everything is done on my end now; anything missing still? (aside from bug 1620780 which is now merely a nuisance rather than a breaker)
[09:07] <mapreri> pitti: could you render the Date in browse-results as something more human-readable?  I was looking to write a patch for it, but it seems like the date is saved as text in the db, so I can't just strftime() it (at least, not without strptime()-ing it first…).
[09:08] <pitti> mapreri: the run IDs always looked like that, but indeed in debci I massaged it a bit
[09:10] <pitti>       status.date =
[09:10] <pitti>         begin
[09:10] <pitti>           Time.parse(data.fetch('date', 'unknown') + ' UTC')
[09:10] <pitti> that was the ruby equivalent
[09:11] <pitti> err, no, not that
[09:13] <pitti> mapreri: strptime/strftime seems right
[09:13] <mapreri> sec..
[09:13] <pitti> or just re.match() and reformatting it
[09:14] <pitti> mapreri: be prepared for suffixes, though
[09:14] <mapreri> pitti: mean?
[09:14] <pitti> mapreri: e. g. I'll soon add a 20160902_110956.workername
[09:14] <mapreri> arg.
[09:14] <pitti> as sometimes we have test results that finish at the exact same second
[09:14] <mapreri> pitti: in the code you do a .rstrip('@'), is that something related?
[09:15] <pitti> mapreri: so, just taking the first YYYYMMDD_HHMMSS and ignoring everything else will DTRT
[09:15] <pitti> mapreri: yes, the run ID always ends with @ for technical reasons
[09:15] <mapreri> sounds awful
[09:15] <pitti> (so that you can efficiently query all results in swift without having to list all the individual files)
[09:16] <pitti> mapreri: so by just taking the date/time prefix and ignoring the rest, the @ will automatically be ignored too
[09:16] <pitti> mapreri: and that format is guaranteed
[09:16] <mapreri> pitti: I could split('@', 1)[0] ?
[09:16] <mapreri> no, wait, @ is always at the end, you said
[09:16] <pitti> mapreri: yes, rstrip is more efficient
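(For reference, a minimal Python sketch of the two spellings being compared here; the run ID is a made-up example value.)

```python
# Two equivalent ways to drop the trailing '@' marker from a run ID
# (illustrative only; the run ID below is a made-up example):
run_id = '20160902_110956@'

stripped = run_id.rstrip('@')        # removes any trailing '@' characters
split_off = run_id.split('@', 1)[0]  # takes everything before the first '@'
# both yield the bare timestamp when '@' only appears at the end
```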
[09:17] <pitti> mapreri: but just taking/parsing the date/time prefix is more generic
[09:18] <pitti> >>> time.strptime('20160902_110956.workername@', '%Y%m%d_%H%M%S')
[09:18] <pitti> ValueError: unconverted data remains: .workername@
[09:18] <pitti> me
[09:18] <pitti> meh
[09:19] <mapreri> yeah…
[09:19] <pitti> time.strptime('20160902_110956.workername@'[:15], '%Y%m%d_%H%M%S')
[09:19] <pitti> that works fine
[09:20] <mapreri> umh, i don't particularly like it, though
[09:20] <pitti> >>> time.strftime('%Y-%m-%d %H:%M:%S UTC', time.strptime('20160902_110956.workername@'[:15], '%Y%m%d_%H%M%S'))
[09:20] <pitti> '2016-09-02 11:09:56 UTC'
[09:20] <pitti> ?
[09:22] <mapreri> pitti: https://paste.debian.net/818514/ ?
[09:22] <mapreri> though I obviously can't really test it
[09:23] <pitti> >>> re.sub(r'(\d\d\d\d)(\d\d)(\d\d)_(\d\d)(\d\d)(\d\d).*', r'\1-\2-\3 \4:\5:\6', '20160902_110956.workername')
[09:23] <pitti> '2016-09-02 11:09:56'
[09:23] <mapreri> well…
[09:23] <pitti> that's more direct, avoids the slicing, and converting back and forth
[09:24] <rbasak> flexiondotorg: done. Sorry it took so long. germinate takes a while to run.
[09:24]  * mapreri prefers readability, but feel free to do it the way you prefer :)
[09:25] <mapreri> also, I should be doing something else, alas
[09:27] <pitti> mapreri: done (check the web pages, rolled out)
[09:27] <pitti> mapreri: https://git.launchpad.net/~ubuntu-release/+git/autopkgtest-cloud/commit/?id=25eef3a
[09:27] <mapreri> looks cool, thanks! :)
[09:27] <pitti> mapreri: I prefer that as it will just transparently fall back to the raw run_id if the format is unexpected/different for some reasons
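(The approach described above can be sketched in Python roughly as follows; `format_run_id` is a hypothetical name, and the fallback behaviour mirrors what is described in the discussion, not the exact committed code.)

```python
import re

def format_run_id(run_id):
    """Render a run ID like '20160902_110956.workername@' as a
    human-readable timestamp.  re.sub() returns the string unchanged
    when the pattern does not match, which gives the transparent
    fallback to the raw run ID mentioned above."""
    return re.sub(r'(\d\d\d\d)(\d\d)(\d\d)_(\d\d)(\d\d)(\d\d).*',
                  r'\1-\2-\3 \4:\5:\6', run_id)
```

For example, `format_run_id('20160902_110956.workername@')` gives `'2016-09-02 11:09:56'`, while an ID in an unexpected format is passed through untouched.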
[09:28] <mapreri> ok
[09:32] <flexiondotorg> rbasak, Brilliant. Thank you.
[09:41] <flexiondotorg> rbasak, Upload accepted :-)
[10:44] <xnox> pitti, there is fallout indeed, digging deeper into it, there is more than just add-apt-repository failure.
[11:15] <jamespage> ok - can someone explain to me why http://people.canonical.com/~ubuntu-archive/proposed-migration/update_excuses.html#python-oslo.messaging is complaining that python-oslo-messaging is no longer built
[11:15] <jamespage> python-oslo-messaging was a transitional package for xenial; the archive has switched to python-oslo.messaging, so it's ok to drop I think
[11:20] <LocutusOfBorg> jamespage, maybe some archive-admin needs to remove it?
[11:20] <LocutusOfBorg> slangasek, ^^ :)
[11:21] <jamespage> LocutusOfBorg, yeah - I thought so but they are normally awesome at doing that without asking :-)
[11:21] <LocutusOfBorg> AFAIR the process is semi automatic
[11:21] <LocutusOfBorg> not sure what exactly that means :)
[11:22] <LocutusOfBorg> you can add it to my bug report https://bugs.launchpad.net/ubuntu/+source/libpng/+bug/1595485
[11:23] <jamespage> maybe I could impose on doko_ if he's around ^^ ?
[11:23] <jamespage> pretty please :-)
[12:32] <pitti> stgraber, xnox: I think lxc's tests also got broken by the move -- "ERROR: Unable to fetch GPG key from keyserver" sounds like that?
[12:33] <xnox> missing dirmngr dependency
[12:33] <pitti> oh, depends vs. recommends?
[12:33] <xnox> as that one is only a recommends, yet is now required for --recv-keys to work
[12:33]  * xnox is not sure if dirmngr should become depends now.
[12:33] <pitti> ah, great that you know about it already -- so should apt depend on that now, perhaps?
[12:33] <xnox> pitti, is that lxd or lxc package?
[12:34] <pitti> xnox: lxc so far (http://autopkgtest.ubuntu.com/packages/l/lxc/yakkety/amd64) - first run after the switch
[12:34] <pitti> http://autopkgtest.ubuntu.com/packages/l/lxd/yakkety/amd64 ran on Friday, that was after the switch (and passed)
[12:37] <xnox> yeap, lxc-templates does --recv-keys
[12:37]  * xnox running tests locally to see if dependencies will fix it.
[12:39] <cpaelzer> is a ppa in status "Pending publication" already fulfilling dependencies for other sources uploaded to the same ppa?
[12:47] <mapreri> pitti: btw, looking at http://autopkgtest.ubuntu.com/packages/d/diffoscope/yakkety/amd64 (but also other architectures, which I'd expect to be faster): that run time is incredibly high.  On my local machines the tests run in 3-4 minutes, as they do in debian's debci.  Are the workers maybe overloaded or something?
[12:48] <cjwatson> cpaelzer: no
[12:50] <xnox> pitti, for lxc adt tests started https://requests.ci-train.ubuntu.com/#/ticket/1934
[12:58] <pitti> xnox: wow, lxc is being landed through the CI train??
[12:59] <pitti> gpg: keyserver receive failed: No dirmngr
[12:59] <pitti> xnox: ^ apport seems to suffer from the same, but the dirmngr package is installed already; does this require any further deps?
[13:00] <pitti> gpg: connecting dirmngr at '/run/user/1000/gnupg/S.dirmngr' failed: No such file or directory
[13:00] <pitti> oh, maybe that
[13:00] <pitti> there's no reason why this would be running
[13:01] <xnox> pitti, so i believe that gnupg2 is wrong in the sense that when $GNUPGHOME doesn't exist, and it tries to use dirmngr, it doesn't create $GNUPGHOME first.
[13:02] <xnox> in a few places e.g. a dummy $ gpg -k >/dev/null 2>&1 => is executed just to create the $GNUPGHOME with the correct permissions.
[13:02]  * xnox ponders where to file the bug report
[13:02] <pitti> oh, I actually do have a /run/user/1000/gnupg/ in my testbed VM
[13:02] <pitti> and about 15 instances like dirmngr --daemon --homedir /tmp/tmp.h4znVNEFQG
[13:02] <pitti> just nothing in the /run/user/1000/gnupg/ dir
[13:02] <xnox> lol
[13:02] <xnox> well, homedir must be used consistently....
[13:03] <xnox> apport you say?
[13:03]  * xnox looks
[13:03] <pitti> https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac/autopkgtest-yakkety/yakkety/amd64/a/apport/20160912_102515@/log.gz
[13:03] <pitti> I reproduced in a local VM, currently looking at it
[13:03] <pitti> xnox: indeed I set a temporary $HOME for the tests, to avoid them destroying your real $HOME
[13:03] <pitti> xnox: so you say this might not get exported to some GPG processes then?
[13:04] <xnox> but that doesn't quite get through everywhere, because - gpg: connecting dirmngr at '/root/.gnupg/S.dirmngr' failed: No such file or directory
[13:04] <pitti> right, that's XDG_RUNTIME_DIR
[13:05] <xnox> gpg --keyring /tmp/foo.gpg --recv-keys => is not enough anymore, as that implies ${GNUPGHOME:-$HOME/.gnupg}/S.dirmngr
[13:05] <pitti> and I suppose if neither GNUPGHOME nor HOME exist, it uses $XDG_RUNTIME_DIR?
[13:05] <pitti> or does dirmngr always use that, but runs under a different $HOME?
[13:05] <xnox> i don't believe it ever uses XDG_RUNTIME_DIR
[13:06] <xnox> it uses --homedir, env[GNUPGHOME], env[HOME]
[13:06] <pitti> when I run the test as user, it looks in /run/user/1000/gnupg/S.dirmngr
[13:06] <pitti> which is XDG_RUNTIME_DIR
[13:07] <pitti> actually, I don't change $HOME anywhere in these tests, hmm
[13:07] <xnox>   /* It has been suggested to first check XDG_RUNTIME_DIR envvar.
[13:07] <xnox>    * However, the specs state that the lifetime of the directory MUST
[13:07] <xnox>    * be bound to the user being logged in.  Now GnuPG may also be run
[13:07] <xnox>    * as a background process with no (desktop) user logged in.  Thus
[13:07] <xnox>    * we better don't do that.  */
[13:07] <xnox>   for (i=0; bases[i]; i++)
[13:07] <xnox>     {
[13:07] <xnox>       snprintf (prefix, sizeof prefix, "%s/user/%u",
[13:07] <xnox>                 bases[i], (unsigned int)getuid ());
[13:07] <xnox>       if (!stat (prefix, &sb) && S_ISDIR(sb.st_mode))
[13:07] <xnox>         break;
[13:07] <xnox>     }
[13:07] <xnox> totally does
[13:08] <xnox> but doesn't create the directory first.
[13:08] <xnox> horum, i should stop pasting code
[13:08] <xnox> there clearly is code to create /run/user/$id/gnupg.... if /run/user/$id exists.
[13:10] <pitti> I don't change $HOME anywhere actually, so no idea where that "dirmngr --daemon --homedir /tmp/tmp.Ulx980iySm" comes from
[13:10] <pitti> anyway, I'll try to create ~/.gnupg
[13:11] <pitti> gpg: WARNING: unsafe permissions on homedir '/home/ubuntu/.gnupg'
[13:11]  * pitti tries that harder :)
[13:12] <smoser> pitti, your assessment above is right.
[13:12] <smoser> and i will upload an SRU of cloud-init to xenial "right now".
[13:12] <pitti> smoser: good morning
[13:12] <pitti> smoser: yay
[13:13] <pitti> still waiting on LP to import the new init-system-helpers for syncing
[13:13] <pitti> oh, there it is, syncing
[13:13] <pitti> nice, that used to take a lot longer a few weeks ago, so kudos to whoever sped up the imports again
[13:13]  * pitti thanks cjwatson and wgrant
[13:14] <pitti> (and if you didn't do it -- thanks anyway for your great work! ☺ )
[13:16] <pitti> xnox: ok, creating ~/.gnupg helped a lot -- two failures gone, one remaining one that complains about a missing pubkey, but I think that's my fault/new apt
[13:17] <xnox> pitti,  i use $ gpg -k -> to create ~/.gnupg with the right permissions =) (and redirect stdout/err)
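(The workaround being described can be sketched in Python as below; `ensure_gnupg_home` is a hypothetical helper name, not part of any package discussed here.)

```python
import os

def ensure_gnupg_home(path=None):
    """Pre-create the GnuPG home directory with 0700 permissions so
    that gpg/dirmngr operations (e.g. --recv-keys) find a usable
    socket directory instead of failing on a missing $GNUPGHOME."""
    gnupghome = path or os.environ.get('GNUPGHOME',
                                       os.path.expanduser('~/.gnupg'))
    os.makedirs(gnupghome, mode=0o700, exist_ok=True)
    # makedirs only applies the mode on creation; tighten it in case
    # the directory already existed with unsafe permissions
    os.chmod(gnupghome, 0o700)
    return gnupghome
```

This avoids gpg's "WARNING: unsafe permissions on homedir" complaint seen above by forcing the 0700 mode even when the directory pre-exists.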
[13:21] <cjwatson> pitti: We didn't do anything in particular.  Probably happened to hit more favourable dinstall/mirror timing.
[13:27] <rbasak> balloons: about your proposal to remove other juju arches from the archive, what makes Juju different from every other packaged upstream that doesn't officially support a particular arch but for which we do build on those arches?
[13:39] <balloons> rbasak, my primary reason for removing arches is we are prevented from landing fixes into the archive should one of the 32-bit arches fail to build
[13:41] <balloons> rbasak, until now these arches have been building, but unsupported.
[13:42] <rbasak> balloons: that doesn't answer my question.
[13:45] <balloons> rbasak, juju has to support what the providers support. They are 64-bit only
[13:46] <rbasak> balloons: what about MAAS? Is that 64-bit only?
[13:46] <pitti> the bit that I don't understand yet is why azure support cannot be removed on 32 bit, if it was just added recently
[13:46] <rbasak> Or how about the local provider?
[13:46] <wgrant> Or architectures that aren't 64-bit.
[13:46] <wgrant> eg. powerpc, i386 and armhf
[13:46] <pitti> (which AFAIUI is the only provider that doesn't support 32 bit)
[13:46] <balloons> rbasak, the trouble, I feel, is that doing "best-effort" on a primary arch like i386 doesn't make sense
[13:47] <wgrant> It's really useful to be able to use our key deployment technology in parts of the Launchpad build farm, for example.
[13:47] <balloons> local provider is lxd, so yes, it would support 32-bit. MAAS also. The clouds themselves however are 64-bit
[13:47] <rbasak> balloons: I understand your concern, but I don't see why Juju should be special here. The issues you raise apply equally to every other package, no?
[13:47] <pitti> it's not too difficult to drop the 32 bit builds on yakkety; but impossible on xenial, so we need some transition there, like empty transitional packages with a debconf note or NEWS at least
[13:48] <balloons> rbasak, I am concerned about unsupported builds, but I'm ok with it. As you say, it's not the only package like this. My concern comes in when we are blocked on delivering SRU's because of failing builds on things we don't support
[13:48] <rbasak> balloons: given that we don't do this for other packages, I think we should either identify Juju as special, or do the same for every package.
[13:48] <balloons> in the past if it was something like armhf, we could overlook it and land
[13:49] <balloons> but i386 is so primary.. Anyways, I am certainly open to a solution that makes sure we can land fixes and updates as needed
[13:49] <rbasak> I think that's worth exploring.
[13:50] <balloons> pitti, azure support has been included for some time, but the upstream changed their SDK, which is the reason for the current breakage
[13:51] <pitti> balloons: and the SDK that we have in xenial doesn't work any more?
[13:51] <pitti> or, I figure it's bundled in the juju source
[13:51] <balloons> pitti, I believe it will stop working sometime this year as they deprecate support for it. Obviously the cloud provider has the control around that
[13:53] <balloons> rbasak, so what did you have in mind?
[13:53] <pitti> right, so keeping the old SDK is not an option; disabling azure support sounds better then
[13:53] <pitti> or empty transitional packages for xenial, and dropping the arches on y as a last resort
[13:55] <rbasak> balloons: I don't know. I'd want to understand more about the problem first - for example why exactly this problem crops up in the first place - because nobody should be regressing a stable release in an SRU, but an FTBFS where it previously succeeded suggests such a regression. But I'm not on the release team or TB, so I'm just an observer here.
[13:56] <rbasak> balloons: I'm only bothered that your ML post had no replies. I'd prefer to see such a change to have at least a +1 from a release team member before upload, because it feels quite invasive. You did the right thing by bringing it up - thanks.
[14:00] <cjwatson> Dropping powerpc on xenial would definitely be disruptive to our plans for converting remaining builders to scalingstack, as wgrant suggests.
[14:01] <cjwatson> On other architectures we can avoid the problem because there's a 64-bit partner architecture we can run 32-bit code on in compatibility mode, but that isn't the case for powerpc because of the endian difference.
[14:01] <pitti> cjwatson: oh, you run the juju controller in scalingstack?
[14:01] <balloons> rbasak, I also have big concerns over xenial. But given the provider plans to get rid of the old SDK, we're stuck no matter what. Xenial will lose support
[14:01] <balloons> cjwatson, I had no idea launchpad was a 32-bit client consumer
[14:02] <pitti> oh right, this would affect the client as well
[14:02] <balloons> pitti, right remember -- client/agent is really the same
[14:02] <cjwatson> pitti: At the moment they run in privileged containers, but there's a refactoring in progress to make them less privileged so that they're easier to debug and so that it's possible to do things like powerpc in them.
[14:03] <pitti> so how hard is it to build with --disable-azure on 32 bit?
[14:03] <rbasak> balloons: I think you're conflating provider support with architecture support there. If a provider drops support or changes things, I think we already accept that an SRU that deals with that specific change is fine.
[14:03] <cjwatson> balloons: It's not yet, but we had plans for it to be in order to get powerpc builders off bare metal, and avoiding juju in that plan is quite complicated.
[14:03] <cjwatson> balloons: We have no interest in the Azure provider specifically though.
[14:04] <wgrant> pitti: jujud must also run on all the machines that run the charms; this isn't just the controller or client, unfortunately.
[14:06] <wgrant> (I have to wonder how one makes a webservice SDK only work on 64-bit architectures, however!)
[14:06] <pitti> well, not too surprising for a windows-oriented cloud?
[14:07] <wgrant> Unless webservice clients have their own tagged pointer implementations nowadays.
[14:07] <wgrant> Not surprising that they don't use 32-bit themselves, but I'm wondering how one actually goes about breaking 32-bit without doing it deliberately.
[14:07] <pitti> balloons: how hard is it to build with --disable-azure on 32 bit?
[14:07] <balloons> pitti, I think we could quilt patch that
[14:07] <balloons> it would have to be a source modification I believe
[14:11] <balloons> pitti, is there a better way to provide a source modification at build time for a specific arch? And rbasak, if we pursued this idea of neutered packages, is that more palatable to you?
[14:12] <pitti> balloons: easier to make it configurable upstream, or implicitly disable azure support on unsupported arches
[14:12] <pitti> we know upstream, after all :)
[14:13] <pitti> no need for hackpatchery
[14:13] <rbasak> balloons: there are some examples of arch-specific patches being added. You can do it in debian/rules, and keep arch-specific patches somewhere in debian/. And yeah, what pitti said.
[14:13] <balloons> pitti, the issue is go doesn't support build-time config options afaik
[14:13] <pitti> we want to have the upstream CI on those arches too, and with downstream patches these couldn't run
[14:13] <rbasak> balloons: dropping support gracefully in an SRU when a provider changes something that breaks a stable release is fine I think. I don't know if that is what you're seeking here or not.
[14:13] <balloons> hence the patch suggestion if we want this route
[14:13] <rbasak> balloons: but only use of that provider should be affected.
[14:14] <pitti> beisner: well, aside from the fact that the go build system sucks then :), it doesn't need to be explicit -- just build it if it's available/supported and otherwise not?
[14:14] <rbasak> balloons: no other use case should regress.
[14:15] <pitti> beisner: sorry, tab failure
[14:16] <cjwatson> balloons: I don't know if it's idiomatic, but you should be able to do it with build tags.
[14:17] <balloons> ok, it sounds like we're getting consensus on just removing problematic providers for arches that don't support them
[14:18] <balloons> and yes, perhaps the core team can figure out a way to upstream it so I don't have to patch, but it sounds like either way it should be doable
[14:18] <rbasak> balloons: I don't understand how this fits in with your original FTBFS argument. How does Azure changing something cause an FTBFS in a stable release?
[14:20] <balloons> rbasak, the proposed SRU contains new provider code
[14:20] <rbasak> balloons: ah, so it's not Azure's action that causes the regression, but your own?
[14:21] <rbasak> Then I have doubts.
[14:21] <wgrant> Code for a new provider, or an updated version of an existing embedded code copy?
[14:23] <rbasak> AFAICS, that's a regression the current SRU policy does not permit, so you need a TB exception.
[14:23] <balloons> rbasak, no, azure changed their SDK. So we now depend on it / build with it. So yes, we changed our code as well, but only to support their new SDK, which we have to migrate to
[14:23] <rbasak> The point of a stable release is to insulate users from upstream changes like this.
[14:24] <balloons> rbasak, right. That was my point of xenial users are kind of stuck. Even if we kept the old code and old SDK, it would stop working (though it would build)
[14:24] <rbasak> You only get a free pass if Azure break compatiblity with their old SDK.
[14:27] <balloons> rbasak, so yes azure sdk broke compatibility and the ability to build on 32-bit. Are you comfortable with an SRU?
[14:28] <rbasak> balloons: so Juju with Azure on Xenial no longer works at all on any architecture?
[14:28] <balloons> cjwatson, wgrant I would encourage you to talk with the core team about your needs / plans of adopting juju on powerpc
[14:32] <balloons> rbasak, it still does currently, but will stop working soon. I don't know the exact date. It also has some real usability issues that have come about because of provider changes -- we monitor command timings for instance
[14:33] <balloons> it was to be December, but again, that's up to azure
[14:33] <rbasak> balloons: if "stop working soon" is published by Azure in a public statement, then I have no objection to dropping support for those bits in an SRU then, though I note again that I am not on the SRU team, release team or TB.
[14:34] <rbasak> It does make sense to handle things in advance for the architectures that will continue to be supported, so users don't face an interruption in functionality. It doesn't make sense to break other architectures needlessly early, especially if Azure push the breakage date back (presumably due to feedback).
[14:35] <cjwatson> I do think that users shouldn't have to beg for their stuff to continue working over the course of an LTS cycle.
[14:37] <cjwatson> (I'm not close enough to this project from our side to talk to the core team myself; hopefully wgrant can)
[14:43] <balloons> wgrant, please do discuss with the team. I want to make sure they are aware of what you intend to do and how they can / will support it. Let me know if I can help facilitate
[14:44] <balloons> cjwatson, rbasak, re: LTS users. Just remember this is one of the reason I wanted to drop the package. I don't want users to think they are getting juju on 32-bit when they really aren't
[14:45] <cjwatson> balloons: Except apparently they are, just not some providers?
[14:46] <balloons> cjwatson, is it better to not have a package, or have a package in which they lose most functionality over the course of the LTS?
[14:46] <cjwatson> balloons: That depends how much functionality is at risk in reality.
[14:46] <cjwatson> s/at risk/actually going to be removed/
[14:46] <rbasak> balloons: if users are getting juju on 32-bit today, then *they really are*, and we shouldn't break them.
[14:47] <balloons> we can only speculate of course what will happen in 5 years, but it seems my fears are already being realized, and it's still quite early in the cycle
[14:47] <cjwatson> It seems to me that you're using a bit of a rhetorical trick here.
[14:48] <cjwatson> Azure dropping support doesn't really have much bearing on whether Openstack will.
[14:48] <balloons> I don't mean to be tricky; I have the same concerns at heart you do. I just want to make sure we all think through what could happen over the course of the LTS
[14:49] <rbasak> Stuff outside what we ship, such as how Azure or any other public cloud provider behaves, is somewhat out of scope. We are subject to their whims, and users understand that. If they act to break something, then that's a breakage we could not have prevented in a stable release.
[14:49] <cjwatson> If we lose support for the things that are currently available in the future, then we can have that debate then; in the meantime I think we should be able to use the things that currently work.
[14:49] <rbasak> However, stuff like MAAS and Openstack ship *inside* our release. We have control there, and an SRU policy, and we should stick to that.
[14:50] <balloons> I can see losing all external providers, and having just openstack and local provider.
[14:50] <rbasak> So fix them in SRUs. Without breaking other users.
[14:50] <cjwatson> Which would actually be totally fine for our uses.
[14:51] <balloons> If that's an "ok" scenario (while not ideal at all), then sure, I'm aligned with the idea of leaving it in
[14:51] <cjwatson> (I can't speak for other users, obviously, just Launchpad's fairly constrained use case)
[14:52] <balloons> you have to understand though, juju itself may also break on 32-bit. And upstream won't ensure that it won't. So similar to other packages, it may be on the distro to ensure it builds, or it may have to be removed in the worst case
[14:52] <cjwatson> We can cross that bridge if and when we come to it.
[14:52] <balloons> right, I'm giving some worst case scenarios, just to ensure we think about it
[14:53] <balloons> Likely we'll end up somewhere in the middle of the easy and hard paths
[14:53] <cjwatson> 32-bit users are already familiar with those worst-case scenarios, really.  Most of the time it winds up not being a problem, and most of the rest of the time it's not too hard to fix.
[14:54] <balloons> I appreciate everyone's thoughts on this. Thanks for the discussion!
[14:54] <cjwatson> It helps if upstream aren't aggressively making stuff break, though even then it wouldn't be the first time we've managed to cope anyway :-)
[14:55] <semiosis> slangasek: what's the next step for getting the livecd-rootfs changes for vagrant into xenial-updates?
[14:55] <cjwatson> Usually upstream sit in a position that's more like what I think you're describing: they won't test stuff themselves, but they'll take reasonable patches and such
[14:56] <balloons> cjwatson, yes I think that's where this will land. For instance, powerpc will likely break again and will need a reasonable patch to fix that upstream will take
[14:57] <balloons> there's no active vendetta
[15:06] <balloons> cjwatson, wgrant while I have you actually, I have two questions on building snaps via launchpad. The first is I'd like to have multiple branches for a single snap package. One should build to edge; the other to the more stable channels. Is this possible? The second is I would like to build s390. Is this possible?
[15:08] <cjwatson> balloons: (1) Yes, you can create two different snap objects in LP for that; they need to have different names in LP, but they can share a "Registered store package name", and have different channels.
[15:09] <cjwatson> balloons: (2) That's limited to Canonical staff because we don't yet have sufficient build sandboxing on s390x (just like devirtualised PPAs), but it can be set up for such people on request.
[15:15] <kenvandine> doko_, btw the libphonenumber silo is pending for QA now
[15:16] <balloons> cjwatson, (1) I will try again now and ping you if I can't get it to work again. (2) How should I make a request for the juju snap in particular?
[15:18] <cjwatson> balloons: https://answers.launchpad.net/launchpad/+addquestion is usually best, with a URL to the snap you want reconfigured
[17:04] <xnox> pitti, at https://requests.ci-train.ubuntu.com/static/britney/landing-1931/yakkety/excuses.html it says that "systemd" and "lava-server" tests are running, but i don't see them in the queue nor any results. Have they been lost somewhere?
[17:04] <xnox> i think i will release the new qemu, but it is weird that tests gone MIA
[17:06] <jgrimm> xnox, what's the new qemu?
[17:08] <jgrimm> ah, 2.6.1. nvm
[17:09] <nacc> xnox: is that for yakkety, i assume?
[17:09] <xnox> jgrimm, drop most patches, and upload "2.6.1" tarball. The net delta is just a few patches that were not cherrypicked.
[17:09] <xnox> nacc, yeah.
[17:10] <jgrimm> xnox, cool cool. thanks.  asking as i have a FFe open for set of patches that are ppc64 specific.
[17:10] <nacc> xnox: ack, i think there was one bug that'd get closed by that, let me see if i can find it -- LP: #1617055
[17:10] <nacc> heh, i see you already responded there, nm!
[17:11] <nacc> xnox: thanks for doing that -- i had talked with hallyn about it, and he suggested to me we should follow debian for qemu updates, hence my initial reply
[17:13] <xnox> yeah, but the diff between 2.6.1 final and all-patches-applied is minimal, especially after the CVE uploads from security team.
[17:18] <xnox> pitti, where abouts is the new autopkgtest? i think we want to expand the stats page to show things per-arch & per-release, rather than just per-release.
[17:31] <nacc> xnox: sure, makes sense, thanks!
[17:31] <nacc> rbasak: you don't happen to still be around?
[18:06] <rbasak> nacc: yes
[18:10] <nacc> rbasak: hey! have a few minutes? importer questions for you
[18:10] <nacc> rbasak: not high priority, so it can wait, of course
[18:24] <hallyn> xnox: nacc: if you haven't already done so, i recommend pushing the new branch to http://anonscm.debian.org/cgit/pkg-qemu/qemu.git #ubuntu-dev
[18:25] <hallyn> then perhaps telling mjt on #debian-qemu about it, maybe he can git merge
[18:27] <rbasak> nacc: please ask. I have a DMB meeting in half an hour so I'm roughly around.
[18:51] <xnox> hallyn, i'm still working out what to revert, because a CVE patch regresses things
[18:57] <nacc> hallyn: ack, thanks for that info!
[18:57] <nacc> rbasak: sure, sorry, i went afk myself for a bit
[18:57] <nacc> rbasak: LP: #1618898, i'm not sure what our solution would be to that case
[18:58] <nacc> rbasak: and re: the isc-dhcp import issues, i'm not sure if we ever came up with a good solution? (specifically the fact that version/series/pocket does not uniquely identify an upload)
[19:01] <elbrus> has anybody experience with libreoffice timing out during build?
[19:01] <elbrus> winff builds fine in Debian, but not in Ubuntu
[19:01] <elbrus> it seems to time out during pdf creation
[19:02] <rbasak> !dmb-ping
[19:02] <elbrus> https://launchpadlibrarian.net/283037524/buildlog_ubuntu-yakkety-amd64.winff_1.5.3-7_BUILDING.txt.gz
[19:03] <elbrus> (has been retried at least twice)
[19:05] <jbicha> elbrus: I suggest asking Sweet5hark in #ubuntu-desktop (but he might be gone for the day)
[19:06] <elbrus> jbicha: thanks, will try
[21:12] <sarnold> doko_: jfyi, in the hopes that this may save you some time and hassle some day :) http://www.mono-project.com/news/2016/09/12/arm64-icache/
[23:45] <nacc> rbasak: also, i think I realized today that I didn't change the state on MR: #297709
[23:45] <nacc> rbasak: are you ok with me uploading the fixes for the existing openipmi bugs to yakkety and then doing SRUs to xenial; or do you think it's worth pursuing the merge and FFe?
[23:46] <nacc> rbasak: i think it'd be fine to do an updated merge once z opens up, but i'd like your feedback