[01:10] <thumper> menn0: https://github.com/juju/retry/pull/2
[01:11] <menn0> thumper: looking
[01:11] <thumper> cheers
[01:11] <thumper> wallyworld: do you know if the ci bot is even looking at master?
[01:11] <thumper> seems that the bug was fixed by cmars some time ago
[01:12] <wallyworld> thumper: they can only target one branch at a time, maybe 1.25 has been getting priority
[01:12] <thumper> probably...
[01:12]  * thumper sighs
[01:12] <thumper> FARK!!!
[01:12] <thumper> peergrouper test
[01:12] <thumper> damnit
[01:13] <thumper> hate it!
[01:16] <davecheney> thumper: it _never_ passes for me
[01:16] <thumper> it mostly fails for me too
[01:16] <thumper> saying that, it just passed
[01:16] <thumper> if I run it by itself
[01:16] <thumper> second time it failed
[01:31] <menn0> thumper: review done. I ended up deleting and changing a lot of my initial comments so ignore the notification emails and go with what's on the PR
[01:31] <menn0> thumper: summary is I think MaxWait is a strange parameter to have. MaxDuration seems more intuitive to me.
[01:32] <menn0> thumper: I'd want to be able to say, "keep trying for up to 10 mins", not "keep trying while the sum of the waits between attempts is less than 10 mins"
[01:33] <menn0> thumper: if Func doesn't take long they work out to be about the same, but I bet that sometimes Func can take a while, making it hard to figure out exactly how long a series of attempts might take
[02:52] <thumper> menn0: yeah... I did wonder about that bit too
[02:52] <thumper> menn0: whether or not to take into consideration the time taken to call Func
[02:53] <thumper> menn0: although... it makes testing harder :)
[02:54] <thumper> menn0: because we need something to advance the clock...
[02:54] <menn0> thumper: if time is recorded when retry.Call is first called then it's easy to know how long everything has taken so far
[02:54] <thumper> menn0: however with the mock clock, it is um... different
[02:54] <menn0> the test Clock can do that right?
[02:54] <thumper> hmm...
[02:54] <thumper> we can make it do that
[02:54] <thumper> just by keeping track of now
[02:55] <thumper> and advancing now every time we are asked to sleep :)
[02:59] <thumper> menn0: here is a mind fcukingly boring review: https://github.com/juju/juju/pull/3605
[03:02] <thumper> ugh...
[03:03] <thumper> master CI failed due to deploying a local charm failing
[03:03] <thumper> WTF?
[03:21] <menn0> thumper: was otp, looking now
[03:25] <menn0> thumper: review done... super exciting that one
[03:25] <thumper> :)
[04:52] <mup> Bug #1510787 opened: juju upgrade-charm errors when it shouldn't <juju-core:Triaged> <https://launchpad.net/bugs/1510787>
[08:22] <mup> Bug #1509747 opened: Intermittent lxc failures on wily, juju-template-restart.service race condition <bug-squad> <cloud-installer> <lxc> <wily> <juju-core:Confirmed> <systemd (Ubuntu):Confirmed for pitti> <https://launchpad.net/bugs/1509747>
[09:29] <voidspace> dimitern: ping
[09:29] <voidspace> dimitern: do you know how I can reproduce a situation where "ignore-machine-addresses" is *required*?
[09:29] <voidspace> dimitern: (i.e. reproduce the bug it fixes)
[09:32] <dooferlad> frobware: ping
[09:32] <frobware> dooferlad, omw
[09:37] <dimitern> voidspace, I can think of an example
[09:38] <dimitern> voidspace, how about: bootstrap, juju ssh 0, add a virtual NIC with an address like 10.0.0.2, watch the logs to see it gets picked up and set among the API host ports
[09:39] <dimitern> voidspace, it won't work just by installing the lxc package outside of juju, as we're detecting the presence of /etc/default/lxc-net and parsing it to find the LXC bridge device name, then filter any addresses on it
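The detection dimitern describes — parsing /etc/default/lxc-net to find the LXC bridge device name so its addresses can be filtered — can be sketched as below. /etc/default/lxc-net is a shell-style KEY="value" file whose `LXC_BRIDGE` entry names the bridge (typically lxcbr0); the helper name `bridgeName` is illustrative, not juju's actual code:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// bridgeName extracts the LXC bridge device name from the contents of
// /etc/default/lxc-net, e.g. a line like: LXC_BRIDGE="lxcbr0"
func bridgeName(contents string) string {
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "LXC_BRIDGE=") {
			// Strip the key and any surrounding quotes.
			return strings.Trim(strings.TrimPrefix(line, "LXC_BRIDGE="), `"`)
		}
	}
	return "" // file absent or key missing: nothing to filter
}

func main() {
	sample := "USE_LXC_BRIDGE=\"true\"\nLXC_BRIDGE=\"lxcbr0\"\nLXC_ADDR=\"10.0.3.1\"\n"
	fmt.Println(bridgeName(sample)) // lxcbr0
}
```

This is also why removing /etc/default/lxc-net (as suggested below) defeats the filtering: with no file to parse, addresses on the bridge are no longer recognised as LXC-internal.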
[09:40] <voidspace> dimitern: ok, I'll try that
[09:40] <dimitern> voidspace, however, you can try: sudo apt-get install lxc -y after bootstrapping, then rm /etc/default/lxc-net, and reboot the node
[09:40] <voidspace> dimitern: our proposed fix from yesterday won't work - it's what we're already doing!
[09:40] <dimitern> (or just reboot jujud)
[09:41] <voidspace> dimitern: I think the fix might have to be that we prefer a *fallback address* from the provider list in favour of an exact scope match from machine
[09:41] <voidspace> dimitern: better would be to filter the addresses we send from the machine, so we don't send unusable addresses
[09:42] <dimitern> voidspace, we do that for lxcbr0 addresses, if we can detect them
[09:42] <voidspace> right
[09:43] <voidspace> better, we should ignore machine addresses by default and understand the circumstances where we actually need them
[09:43] <voidspace> (and use them only where we need them)
[09:43]  * dimitern is thinking whether a fallback match from the provider addresses can always be preferred to an exact machine address match 
[09:43] <voidspace> dimitern: we can topic it in standup again
[09:43] <dimitern> voidspace, ok
[09:44]  * voidspace is going to get coffee
[09:53] <dimitern> voidspace, hey, do you remember that change to "devices new" API that needed type=container in the arguments in order to show up in the UI?
[09:56] <voidspace> dimitern: I remember, but that isn't quite right
[09:56] <voidspace> dimitern: they have removed devices with a parent from the ui and last I saw they hadn't got round to putting them back
[09:56] <voidspace> dimitern: they're going to put them on the node detail pages
[09:56] <voidspace> dimitern: they decided not to use "type=container" afaik
[09:56] <voidspace> dimitern: so we don't need to do anything
[09:57] <voidspace> dimitern: but *currently* devices with a parent don't show in the ui at all
[09:57] <voidspace> which sucks
[09:57] <dimitern> voidspace, right, ok - as I couldn't find a reference to the type=container argument in maas src (1.8 or trunk)
[09:58] <voidspace> dimitern: if you find the maas bug (I can't recall it) they changed the title to remove the reference to type=container
[09:58] <voidspace> dimitern: they changed the title to "remove devices with a parent from the devices page"
[09:58] <voidspace> dimitern: and they have a new, not-yet-fixed bug, to add them to the node details page
[09:59] <voidspace> wallyworld: ping
[09:59] <dimitern> voidspace, I see, well - as long as the device is registered and maas can clean it up with its parent, I don't care so much it's not visible in the UI
[10:00] <voidspace> dimitern: I think it's a shame
[10:00] <voidspace> dimitern: I liked the visibility
[10:00] <voidspace> but ah well
[10:00] <dimitern> voidspace, it is a shame, but not quite ours ;)
[10:02] <dimitern> TheMue, jam, standup?
[10:02] <jam> brt
[10:37] <voidspace> dooferlad: I assume you checked those USB ethernet adaptors work with Ubuntu... ?
[10:37] <dooferlad> voidspace: reports say yes.
[10:37] <voidspace> dooferlad: cool, thanks
[10:39] <voidspace> dooferlad: and *not* the 4x SATA SSD - one disk is sufficient?
[10:39] <dooferlad> voidspace: we have been told to just go msata and the charm will be updated to use one disk
[10:40] <voidspace> dooferlad: ok
[10:40] <voidspace> dooferlad: those msata disks look funky
[10:40] <dooferlad> voidspace: overnighting a disk later if needed is a tradeoff we seem to be OK with.
[10:40] <voidspace> cool
[10:40] <dooferlad> voidspace: just plugged in one of those USB3 adapters. Shows up as enx00e060001127 on my desktop in the output of ifconfig
[10:41] <voidspace> dooferlad: great
[10:47] <mup> Bug #1510875 opened: unable to interrupt 'juju boostrap' on MAAS before the node is running <bootstrap> <juju-core:Triaged> <https://launchpad.net/bugs/1510875>
[11:01] <voidspace> dooferlad: and those nucs will be fine with my existing PDU
[11:02] <frobware> voidspace, AMT based NUCs, correct?
[11:03] <voidspace> frobware: non-AMT I believe
[11:03] <dooferlad> frobware: they are AMT
[11:03] <voidspace> hah
[11:03] <voidspace> frobware: believe dooferlad and not me...
[11:03] <frobware> voidspace, ok, just checking, as if they're AMT based they can be shared...
[11:04] <frobware> voidspace, with other rigs should priorities change, et al.
[11:05] <dimitern> frobware, hw ordered; doc updated with actual prices (with exp. delivery to BG it turned out a bit more than 100 GBP cheaper!)
[11:05] <frobware> dimitern, great
[11:05] <dimitern> I mean, not due to the delivery, just price changes I guess
[11:06] <frobware> dimitern, dooferlad, voidspace: I suspect there's some surge pricing for those components going on in some amazon spreadsheet... :)
[11:06] <dooferlad> frobware: Amazon ran out of NUCs
[11:06] <dimitern> :D
[11:06] <dooferlad> frobware: so now they are £327 instead of £220
[11:06] <frobware> dooferlad, wow
[11:07] <dooferlad> frobware: now sold from Kikatek for that delightful mark up
[11:07] <dimitern> dooferlad, what I ordered is a bare bones kit - no ram, ssd
[11:07] <frobware> dimitern, which is pretty much the default, no?
[11:07] <dooferlad> dimitern: yes, that is what I got
[11:08] <dimitern> frobware, it is as offered from intel, but some vendors spice it up
[11:08] <dooferlad> http://www.intel.com/content/www/us/en/nuc/nuc-kit-dc53427hye-board-d53427rke.html
[11:12] <voidspace> dammit, I had some in my basket from amazon
[11:12] <voidspace> but too late to hit the button
[11:12] <voidspace> so I should probably cancel that order - unless I order the NUCs elsewhere at the higher price
[11:12] <voidspace> or just wait for amazon to get new stock
[11:13] <dooferlad> voidspace: amazon stock is due in weeks, right?
[11:13] <voidspace> dooferlad: allegedly
[11:13] <voidspace> dooferlad: I'm happy to wait, so long as they actually arrive
[11:14] <frobware> voidspace, I'm hoping we can make some progress with vMAAS and a smaller bundle to begin with
[11:17] <voidspace> frobware: yep, that would be good
[13:59] <mup> Bug #1510132 changed: redefinition of ‘noMethods$MarshalYAML$hash’ <blocker> <ci> <ppc64el> <regression> <testing> <juju-core:Fix Released by cmars> <https://launchpad.net/bugs/1510132>
[13:59] <mup> Bug #1510944 opened: Only state-server upgrades from 1.20 <intermittent-failure> <upgrade-juju> <juju-core:Triaged> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1510944>
[14:09] <sinzui> cherylj: there is a regression in 1.22 https://bugs.launchpad.net/juju-core/+bug/1510952
[14:09] <mup> Bug #1510952: Upgrades broken in 1.22 tip <blocker> <ci> <regression> <upgrade-juju> <juju-core:Incomplete> <juju-core 1.22:Triaged> <https://launchpad.net/bugs/1510952>
[14:23] <cherylj> sinzui: thanks for the heads up
[14:24] <cherylj> sinzui: has that test run on 1.24 yet?
[14:25] <sinzui> cherylj: no, it hasn't
[14:25] <sinzui> It will test next I think
[14:26] <cherylj> ok, thanks
[14:32] <mup> Bug #1510951 opened: Upgrades broken in 1.22 tip <blocker> <ci> <regression> <upgrade-juju> <juju-core:Triaged> <https://launchpad.net/bugs/1510951>
[14:40] <natefinch> fwereade: I added setting of unit statuses in the last changes to the unit assigner review: http://reviews.vapour.ws/r/2814/
[14:42] <fwereade> natefinch, awesome, thanks
[14:42] <natefinch> er, in the last two changes, that is.
[15:02] <alexisb> dimitern, ping
[15:02] <dimitern> alexisb, oh, sorry - omw
[15:05]  * fwereade needs to stop early, will be back this evening to talk to NZ
[15:16] <frobware> dooferlad, having just spoken with cherylj we should look at 1.24 first for #1510651
[15:16] <mup> Bug #1510651: Agents are "lost" after terminating a state server in an HA env <bug-squad> <ensure-availability> <juju-core:Triaged by dooferlad> <https://launchpad.net/bugs/1510651>
[15:16] <dooferlad> frobware: thanks, spotted the bug update
[16:42] <natefinch> wwitzel3: I know you're there, I can hear you typing
[16:56] <natefinch> ericsnow: you around?
[16:56] <ericsnow> natefinch: yep
[16:57] <natefinch> ericsnow: the lxd remote config value - I presume that should be an IP address or domain name?
[16:57] <ericsnow> natefinch: yep
[16:57] <ericsnow> natefinch: I'll make that more clear
[16:58] <cherylj> perrito666: got a sec?
[17:01] <natefinch> ericsnow: I'm adding an isLocal() function to the environ, since we're checking that in a couple places
[17:01] <ericsnow> natefinch: k
[17:01] <natefinch> ericsnow: (the logic is simple, but I don't want remote() == "" getting tossed around the codebase everywhere)
[17:01] <ericsnow> natefinch: fine with me
[17:01] <ericsnow> natefinch: method on environConfig?
[17:03] <natefinch> ericsnow: on the environ. I don't want to have to know to look at the config to determine if it's local.  There's some law about not doing foo.bar.baz()... forget now what it's called.  Just expose it at the higher level.  The config itself doesn't need to know about local or not, I don't think.
[17:05] <ericsnow> natefinch: you're doing foo.bar.<something> either way and the config is what identifies the remote as local or not
[17:06] <mup> Bug #1507771 changed: Juju deploy fails when template container cannot be started <cdo-qa> <lxc> <maas> <juju-core:Invalid> <https://launchpad.net/bugs/1507771>
[17:07] <natefinch> ericsnow: if you're calling it on the environ, then only environ needs to know that it's getting information from the config.  Everywhere else just calls the function on the environ.   And the environ is the thing that is interpreting the config.  The config is just holding the data.  Adding isLocal to the config is putting environ-level logic in the data bag that is the config
[17:08] <ericsnow> natefinch: k
[17:08] <ericsnow> natefinch: not a big deal to me either way :)
[17:08] <ericsnow> natefinch: the method is useful regardless
[17:09] <natefinch> ericsnow: agreed :)
[17:10] <natefinch> ericsnow: fwiw, I started with it on the config and then decided it was probably more appropriate on the environ... but yeah, it's not a huge deal either way
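The encapsulation natefinch is arguing for — the environ interprets its config and exposes an isLocal() method, so the remote() == "" convention lives in exactly one place — might look roughly like this. The type and field names (`environ`, `environConfig`, `remote`) are illustrative stand-ins, not the actual juju lxd provider code:

```go
package main

import "fmt"

// environConfig is a plain data bag of provider settings; it holds the
// remote value but attaches no meaning to it.
type environConfig struct {
	remote string // empty string conventionally means the local LXD
}

// environ interprets the config. Callers ask it questions instead of
// reaching through it into the config (avoiding foo.bar.baz() chains).
type environ struct {
	cfg environConfig
}

// isLocal hides the remote == "" check behind one method so the
// convention isn't scattered across the codebase.
func (e *environ) isLocal() bool {
	return e.cfg.remote == ""
}

func main() {
	local := &environ{}
	remote := &environ{cfg: environConfig{remote: "10.0.0.2:8443"}}
	fmt.Println(local.isLocal(), remote.isLocal()) // true false
}
```

If the convention ever changes (say, a named "local" remote), only isLocal needs updating; the config stays a dumb data holder.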
[17:30] <natefinch> ericsnow: your dependency injection makes my goto definition useless :/
[17:30] <ericsnow> natefinch: you're using a goto?
[17:31] <natefinch> ericsnow: yeah, godef
[17:31] <natefinch> ericsnow: which will just go to the definition of the interface.. there's no real way around it
[17:31] <natefinch> (I think the go oracle has a way to find implementors of an interface, but I haven't been able to get that working)
[17:32] <ericsnow> natefinch: ah
[17:32] <ericsnow> natefinch: yeah, bummer
[17:32] <natefinch> ericsnow: the downside of interfaces everywhere... figuring out what code is actually getting called is hard
[17:40] <ericsnow> katco: you have any ideas on a name to use instead of "clientServerMethods"?
[17:50] <perrito666> cherylj: I do now
[17:51] <cherylj> perrito666: just wanted to follow up on bug 1507867
[17:51] <mup> Bug #1507867: juju upgrade failures <canonical-bootstack> <upgrade-juju> <juju-core:Triaged by mfoord> <https://launchpad.net/bugs/1507867>
[17:51] <cherylj> it looks like there were two pieces:  the ignore machine address for containers, and the mongo issues
[17:51] <cherylj> I see mfoord took the machine addresses / containers part
[17:51] <cherylj> are you still working the mongo side?
[17:51] <cherylj> or were you going to hand that off to sapphire?
[17:52] <perrito666> indeed, on the mongo issues we are still waiting on some logs that might shed light on the matter. Sadly the resources of bootstack are a bit scarce right now and they cannot run the test yet. I intended to do the handoff once these logs were in place, but I might as well do it now
[17:53] <cherylj> perrito666: should we open up a new bug to track just the mongo stuff?
[17:53] <perrito666> cherylj: that would be advisable; looking again at the current one, it is becoming a bit of a mix
[17:54] <cherylj> perrito666: okay, I will create one and try to extract the meaningful mongo bits
[17:55]  * perrito666 is wondering why he came to work to the only coffee shop without internet in town
[18:21] <katco> ericsnow: well, 3 of them have to do with certificates, so there's a cert struct/interface in there for sure
[18:22] <katco> ericsnow: the wait and config... perhaps belong in 2 separate interfaces?
[18:25] <ericsnow> katco: clientServerMethods is just there to help organize the methods better (so the Client methods aren't split across several files)
[18:26] <ericsnow> katco: in my mind the name communicates that pretty well
[18:27] <katco> ericsnow: well, first, grouping them into a struct doesn't preclude them being split across files :)
[18:28] <katco> ericsnow: what do you mean by, "organize the methods"?
[18:28] <ericsnow> katco: my point is that they were methods of Client before I moved them under their own struct (which Client now embeds)
[18:28] <ericsnow> katco: grouping the different Client methods
[18:28] <ericsnow> katco: and splitting them across multiple files
[18:29] <katco> ericsnow: ok. so i think i'm just asking that you take that to its logical conclusion and group them into types that make sense
[18:29] <katco> ericsnow: "server" methods is too generic
[18:29] <ericsnow> katco: I was following the grouping I was already using (see conn_raw.go)
[18:31] <katco> ericsnow: i think you're grouping in terms of implementation, and it probably makes more sense to group in terms of what the group of methods do
[18:32] <katco> ericsnow: you even have comments grouping them
[18:33] <ericsnow> katco: right
[18:43] <ericsnow> katco: ah, so your concern is with "Server" rather than with the whole name?
[18:44] <katco> ericsnow: well, the client should take implementations of interface it needs
[18:44] <katco> ericsnow: so no, the whole thing
[18:45] <katco> ericsnow: the fact that it's for the client is implicit because the client is embedding the types
[18:46] <ericsnow> katco: all the client needs is the raw *lxd.Client (as returned by newRawClient())
[18:46] <ericsnow> katco: I can change it back so the clientServerMethods methods are methods of Client instead if you like
[18:47] <ericsnow> katco: that may be less confusing
[18:47] <ericsnow> katco: (though it splits Client methods across multiple files)
[18:47] <katco> ericsnow: no i think what you've done is a good thing, you're doing IoC
[18:47] <katco> ericsnow: i'm just saying split things out to be a little more fine-grained
[18:48] <ericsnow> katco: but it's not meant to be IOC
[18:48] <katco> ericsnow: well it is :) and it's a good thing
[18:48] <ericsnow> katco: IOC here would be providing the raw client as an interface (or multiple)
[18:49] <katco> ericsnow: that's exactly what i'm saying
[18:50] <ericsnow> katco: right; that's orthogonal to the clientServerMethods type
[18:50] <ericsnow> katco: I wasn't planning on doing any IOC yet with the raw client
[18:51] <katco> ericsnow: you've extracted out the functionality, but haven't encapsulated it in a way that makes sense
[18:51] <ericsnow> katco: my point is that I haven't extracted out any  functionality; it was already there as-is
[18:52] <katco> ericsnow: you extracted it out into another type, right?
[18:52] <ericsnow> katco: the only thing I did was move the Client methods in a given file into their own type so that I wouldn't have Client methods spread across multiple files
[18:52] <katco> ericsnow: that is extracting the functionality out, isn't it?
[18:58] <katco> ericsnow: it's fine for now. if you just want to collect them in the same file, leave it on client and put them all in a file together
[19:00] <ericsnow> katco: hmm, one goal is to avoid massive files like that though
[19:00] <katco> ericsnow: no i mean, same type, new file
[19:00] <ericsnow> katco: that's the way I had it in the first place :)
[19:01] <ericsnow> katco: but having a type split across multiple files is a pain point
[19:01] <katco> ericsnow: that's better than a new type that isn't encapsulating properly :)
[19:01] <katco> ericsnow: it is?
[19:02] <ericsnow> katco: it has been for me
[19:11] <natefinch> katco, ericsnow: I definitely highly dislike having a single type split across files
[19:13] <natefinch> katco, ericsnow: it makes it harder to understand the type as a whole, and it means you have to go hunting through files to find the method you're interested in.
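The pattern ericsnow describes — moving a group of Client's methods onto a helper struct that Client embeds, so each file holds one type while callers still see a single flat API via method promotion — can be illustrated in miniature. The names `serverMethods`, `ServerStatus`, and `Certificates` are hypothetical, not the actual lxd client code:

```go
package main

import "fmt"

// serverMethods groups a related subset of what would otherwise be
// Client methods, so they can live together in one file.
type serverMethods struct{}

func (serverMethods) ServerStatus() string    { return "running" }
func (serverMethods) Certificates() []string  { return []string{"client.crt"} }

// Client embeds the helper; its methods are promoted, so callers
// still write c.ServerStatus() exactly as before the refactor.
type Client struct {
	serverMethods
}

func main() {
	var c Client
	fmt.Println(c.ServerStatus())          // running
	fmt.Println(len(c.Certificates()))     // 1
}
```

This also illustrates natefinch's godef complaint from earlier: go-to-definition on a promoted method lands on the embedded helper type, one hop away from Client itself.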
[19:20] <cherylj> perrito666: it looks like there was another bug spun off to track the ignore-machine-addresses with containers.  Let's keep bug 1507867 for the mongo issues.
[19:20] <mup> Bug #1507867: juju upgrade failures <canonical-bootstack> <upgrade-juju> <juju-core:Triaged by mfoord> <https://launchpad.net/bugs/1507867>
[19:20] <perrito666> sounds fair
[19:20] <cherylj> perrito666: I'm going to put in an update that we are waiting on logs from bootstack and assign the bug to you
[19:21] <cherylj> once you have a chance to talk with someone from sapphire to do a hand off, please update the bug assignee
[19:21] <perrito666> ok, I'll do a handoff as soon as we get these
[19:21] <cherylj> awesome, thanks!
[19:21] <perrito666> everything is awesome
[19:21]  * perrito666 sings
[19:21] <cherylj> everything is cool when you're part of a team?
[19:40] <alexisb> wwitzel3, ping
[19:47] <mup> Bug #1511090 opened: MAAS documentation on jujucharms.com incorrectly advised disable network management <juju-core:New> <https://launchpad.net/bugs/1511090>
[19:57] <wwitzel3> alexisb: pong
[19:59] <alexisb> heya wwitzel3 I was going to ask you to reach out to adam, but you did
[19:59] <alexisb> wwitzel3, also do you have documentation regarding deploying openstack on lxd provider?  or is that someone else?
[19:59] <alexisb> I heard it now exists
[20:01] <wwitzel3> alexisb: we have an email chain between James Page and myself, but we don't have any official docs yet since it still involves a lot of tweaking.
[20:05] <alexisb> wwitzel3, ack
[20:05] <alexisb> thanks!
[20:05] <ericsnow> natefinch: ptal: http://reviews.vapour.ws/r/2927/
[20:10] <natefinch> ericsnow: looking
[20:11] <ericsnow> natefinch: ta
[20:29] <fwereade> thumper, waigani, just joined onyx-standup
[20:42] <mup> Bug #1511103 opened: relation-get error: permission denied <juju-core:New> <https://launchpad.net/bugs/1511103>
[22:24] <mup> Bug #1511135 opened: storage: add bundle support <juju-core:Triaged> <https://launchpad.net/bugs/1511135>
[22:36] <mup> Bug #1511138 opened: Bootstrap with the vSphere provider fails to log into the virtual machine <bootstrap> <cloud-init> <vsphere> <juju-core:Triaged> <https://launchpad.net/bugs/1511138>
[23:20] <ericsnow> cmars: so, you like my latest LXD branch, huh? :)
[23:21] <ericsnow> cmars: wwitzel3 has one up that fixes the remaining bugs you noted: http://reviews.vapour.ws/r/3015/
[23:21] <cmars> ericsnow, wwitzel3 you guys rock
[23:21] <ericsnow> cmars: hey, it's a fun one to work on :)
[23:58] <wwitzel3> indeed