[02:30] <axw> wallyworld: http://reviews.vapour.ws/r/5045/
[02:30] <wallyworld> ok, ta
[02:39] <wallyworld> axw: while i'm looking, here's a small one http://reviews.vapour.ws/r/5046/
[02:39] <axw> wallyworld: okey dokey
[02:55] <wallyworld> axw: i wonder if migration should fail if the model credential does not exist in the target controller - should that check be done? i think so?
[02:56] <axw> wallyworld: yeah, there's a TODO in NewModel to check that
[02:56] <wallyworld> ah right
[02:56] <axw> wallyworld: intending to do in a follow up, can do in this branch if you prefer
[02:56] <wallyworld> follow up fine
[02:56] <wallyworld> it's all stuff imminently in progress
[03:07] <thumper> wallyworld: got a few minutes?
[03:07] <wallyworld> thumper: sure, just finishing a review, give me 5
[03:07] <thumper> wallyworld: ack, I'll jump in 1:1 and wait
[03:14] <wallyworld> axw: i left a few questions / suggestions
[03:14] <axw> ta
[03:15] <menn0> thumper: found one typo. otherwise LGTM.
[03:15] <axw> wallyworld: I was copying "CloudRegion() string" above in core/description. I can change both to ...Name() if you like, but IMO it's clear enough as it is
[03:15] <wallyworld> axw: ok
[03:15] <thumper> menn0: awesome
[03:15] <thumper> ta
[03:36] <axw> wallyworld: please see replies
[03:36] <wallyworld> ok
[03:39] <wallyworld> axw: lgtm, we should think about the cloud config issue though
[03:40] <axw> wallyworld: okey dokey. perhaps we could/should separate clouds.yaml management from the core representation of clouds/regions
[03:41] <axw> wallyworld: maybe move clouds.yaml management into jujuclient? and then config-free version in cloud
[03:41] <axw> just a thought... need to think through more
[03:41] <wallyworld> yep, i was thinking similar
[03:53] <thumper> bah humbug
[03:54] <thumper> wallyworld: where has the password gone from agent.Config ?
[03:54] <wallyworld> thumper: the api password? the one that used to be old-password?
[03:55] <thumper> there is no old password, and no other passwords
[03:55] <wallyworld> it's in accounts.yaml
[03:56] <thumper> not on machine-0
[03:57] <wallyworld> no, i think it was removed, but i'd need to check
[03:58] <wallyworld> i'm late for vet but i'm pretty sure it was removed from agent config because it's just not needed there
[03:58] <wallyworld> it is stored in the accounts yaml of the person who bootstrapped
[03:58] <thumper> yeah...
[03:58] <thumper> it is
[03:58] <wallyworld> and others get access via register
[03:59] <thumper> for the agent to connect over the api
[03:59] <thumper> screw users
[03:59] <thumper> I'm talking about agents
[03:59] <wallyworld> agents have the password elsewhere i thought
[03:59] <wallyworld> but can't recall offhand
[03:59] <thumper> hmm
[03:59] <wallyworld> but gotta run, bbiab
[04:04] <thumper> ah...
[04:04] <thumper> I should just be using APIInfo()
[04:34] <mup> Bug #1525030 changed: 'ERROR while removing instance' when destroying lxd environment <2.0-count> <juju-release-support> <juju-core:Expired> <https://launchpad.net/bugs/1525030>
[04:34] <mup> Bug #1531719 changed: Runaway memory allocation in jujud unit agent <2.0-count> <juju-core:Expired> <https://launchpad.net/bugs/1531719>
[06:37] <wallyworld> axw: been looking at removing ControllerUUID() from environs.Config(). if it's to be done, it will need to involve constructing the Environ with the controller UUID, which means environs.New() and EnvironProvider.Open()  will change
[07:03] <axw> wallyworld: I think it's only needed by StartInstance and Destroy?
[07:04] <axw> wallyworld: I was thinking we'd just add it to StartInstanceParams, and add a new DestroyAll(controllerUUID string), or something like that
[07:08] <wallyworld> axw: AllInstances needs it as well (for openstack at least)
[07:08] <wallyworld> and likely ControllerInstances() for others
[07:10] <axw> wallyworld: ControllerInstances shouldn't need it, I'm pretty sure it's expected to be called on the controller Environ anyway
[07:10] <wallyworld> ControllerInstances() on openstack needs it
[07:11] <axw> wallyworld: AllInstances shouldn't need the controller UUID either. I think we can change the openstack code
[07:11] <wallyworld> ok, i'll look to see if openstack is the only one where AllInstances needs it
[07:43] <frobware> dimitern: I tried http://reviews.vapour.ws/r/5040/ - LGTM but will test a little more with LXD this morning. If it's all OK I'll dump the distinction between multi and single NIC configs we talked about on Friday
[07:44] <dimitern> frobware: great! thanks
[07:45] <dimitern> frobware: I'm going to remove the legacy AC-FF code today as well
[07:45] <frobware> dimitern: need to sync with CI folks too as tests on their end should vanish
[07:46] <dimitern> frobware: yeah, there's one affected test IIRC
[07:53] <wallyworld> axw: i think ec2 provider is broken for ControllerInstances() - it adds a filter based on model uuid not controller uuid. it may be that we only call it with the controller model environ so it works by fluke
[07:53] <axw> wallyworld: pretty sure that's the only place we ever call it
[07:53] <axw> wallyworld: I think we could reasonably restrict it to that
[07:54] <axw> the caller can arrange to open the correct Environ, rather than requiring more of the provider
[07:54] <wallyworld> axw: yeah, we seem to call it to get an api connection from an env made from bootstrap config
[07:54] <wallyworld> so i could change openstack to use ModelUUID and it would fix the current issue
[07:55] <wallyworld> but seems quite fragile
[07:55] <wallyworld> i could throw an error if cfg doesn't include a controller uuid attribute
[07:56] <wallyworld> that would stop any inadvertent usage slipping through
[07:56] <axw> wallyworld: isn't the proposal for all configs to be that? :)
[07:56] <wallyworld> not model configs
[07:56] <wallyworld> model configs obtained from state will not include controller uuid
[07:57] <axw> wallyworld: I thought the goal was to remove controller-uuid from environs.Config altogether
[07:57] <wallyworld> and i'd like to totally remove controller uuid from front end processing too
[07:57] <wallyworld> yes, but we still need it at the front end at bootstrap
[07:57] <wallyworld> for now
[07:57] <wallyworld> it's a huge change
[07:58] <axw> ok, I'm just saying if we intend to remove it eventually, it doesn't make sense to add more code dependent on its presence (or lack thereof)
[07:58] <wallyworld> but for now, if we just call ControllerInstances() to get api info (as a backup) then we need to assume controller uuid comes from config
[07:58] <wallyworld> i could pass uuid to controller instances
[07:59] <wallyworld> that would work
[07:59] <axw> wallyworld: sure, works for me
[07:59] <wallyworld> yay, more yak shaving
[08:00] <wallyworld> axw: and maas still appears to use the provider state file for ControllerInstances(), so we may be able to clean that up also
[08:01] <axw> wallyworld: eep :/
[08:01] <wallyworld> indeed
[08:01] <axw> wallyworld: I thought thumper was adding the tags support. I guess he didn't get there yet
[08:02] <wallyworld> not enough friday afternoons
[08:02] <axw> heh :)
[08:02] <wallyworld> but maas only *just* got tags support in 2.0
[08:02] <wallyworld> maybe that bit was left out
[08:02] <wallyworld> ie tags are added but that code didn't get fixed
[08:02] <axw> yep I know, prereq for shortening the names though I thought
[08:02] <axw> actually.. no, because we use agent-name
[08:03] <axw> so just for getting rid of reliance on maas storage
[08:03] <wallyworld> well maas storage is still used for the provider state file
[08:03] <wallyworld> so maybe *now* we will be rid of it
[08:04] <wallyworld> yep, maas is all that uses it
[08:04] <wallyworld> so there'll be a big deletion coming up
[09:03] <voidspace> dimitern: dooferlad: babbageclunk: firefox being slow, omw
[09:04] <voidspace> dimitern: frobware: dooferlad: babbageclunk: I'm in - I can hear but can't see anyone and no icons so I can't unmute yet...
[09:04] <dimitern> frobware: standup?
[09:05] <frobware> omw
[09:05] <babbageclunk> I should really get a headset with a mike - recommendations?
[09:05] <frobware> I keep getting kicked out as soon as it connects
[09:06] <dimitern> frobware: try firefox?
[09:07] <frobware> start without me
[09:08] <voidspace> babbageclunk: http://thewirecutter.com/reviews/best-usb-office-headset/
[09:11] <frobware> dimitern: (et al) can't connect, which is weird since I just had a 1:1 with jam for an hour...
[09:22] <frobware> dimitern: keep on dropping again... :(
[09:22] <dimitern> frobware: are you using chrome?
[09:23] <frobware> dimitern: yep, but tried FF. Just seems anything google-y. gmail and/or hangouts
[09:25] <dimitern> frobware: I've seen this happen on Mondays usually :/
[09:26] <frobware> dimitern: first call was fine; for standup call I had to do the auth dance, now I repeatedly get kicked out
[09:27] <dimitern> frobware: I'd guess signing out completely and removing cookies might help (or try in an incognito window?)
[09:27] <babbageclunk> voidspace: Thanks, those sound good (ha). Have you got one of them?
[09:28] <voidspace> babbageclunk: no, I have some crappy old ones that need replacing
[09:28] <babbageclunk> voidspace: aspirational then
[09:28] <voidspace> babbageclunk: I'll buy the jabra I think
[09:28] <voidspace> babbageclunk: I trust the wirecutter though
[09:29] <babbageclunk> voidspace: I'm probably leaning towards the microsoft ones for better music sound.
[09:30] <voidspace> babbageclunk: ah right - yeah, I don't use them for music
[09:30] <voidspace> babbageclunk: I have 7:1 audio in my man cave
[09:31] <babbageclunk> voidspace: ooh, nice. I assume it's all just ride of the valkyries then.
[09:31] <babbageclunk> voidspace: And a fan to blow your hair back while you code.
[09:31] <voidspace> babbageclunk: hehe, nice idea
[09:53] <mup> Bug #1591939 opened: juju failed unmarshal the /server/{server_id} api  response body <juju-core:New> <https://launchpad.net/bugs/1591939>
[09:53] <mup> Bug #1591940 opened: juju failed unmarshal the /server/{server_id} api  response body <juju-core:New> <https://launchpad.net/bugs/1591940>
[10:52] <babbageclunk> voidspace, dimitern, frobware: I'm trying to add a card to the Sapphire board for the application workload version work (well, the first part of it), but I get a permission error (maybe Leankit thinks I'm not on the team?).
[10:53] <dimitern> babbageclunk: you should have access now
[10:54] <babbageclunk> dimitern: Thanks! I'm looking at your change btw.
[10:54] <dimitern> babbageclunk: thank you :)
[10:56] <babbageclunk> dimitern: Hmm - I can't assign the card I just created to myself.
[10:59] <dimitern> babbageclunk: try reloading lkk ?
[10:59] <babbageclunk> dimitern: Yeah, that's it - thanks again!
[11:00] <dimitern> :)
[11:32] <dimitern> fwereade: hey, do you mind skipping our 1:1 ?
[11:38] <mup> Bug #1401423 changed: networker should handle joyent-specific network config better <compatibility> <joyent-provider> <network> <juju-core:Won't Fix> <https://launchpad.net/bugs/1401423>
[11:38] <mup> Bug #1591962 opened: be able to set juju management network <canonical-bootstack> <juju-core:New> <https://launchpad.net/bugs/1591962>
[11:38] <fwereade> dimitern, np at all, I'd completely forgotten
[11:52] <frobware> dimitern: /etc/network/interfaces.d/eth0.cfg --<<< on MAAS images. I thought that had gone away.
[11:59] <thomnico> Hello team, can someone give #1591488 and/or #1591499 some love? It is blocking preparation of demos for next week's events with partners (we are already late, as the partners sit on top of openstack)
[11:59] <mup> Bug #1591488: Can not bootstrap on private openstack juju 1.25 or 2.0 <cpe-sa> <juju-core:New> <https://launchpad.net/bugs/1591488>
[11:59] <mup> Bug #1591499: Bootstrap timeout and fail on private cloud <cpe-sa> <juju-core:New> <https://launchpad.net/bugs/1591499>
[12:06] <dimitern> frobware: is this on trusty?
[12:06] <mgz> thomnico: those look less like bugs and more like a cry for help
[12:06] <frobware> dimitern: ah yes
[12:07] <thomnico> won't "can not bootstrap for days" qualify?
[12:07] <mgz> thomnico: he just doesn't have image metadata set properly
[12:07] <thomnico> it does
[12:07] <dimitern> frobware: is it missing when you disable cloud-init's networking?
[12:07] <thomnico> hence the bug
[12:07] <mgz> thomnico: I can see from the log it's not, falls back to cloud-images.ubuntu.com
[12:07] <frobware> dimitern: don't know because we never do that for a MAAS image
[12:08] <thomnico> yes this IS the bug
[12:08] <mgz> thomnico: so, there's certainly some help needed
[12:08] <mgz> thomnico: see bug 1591225 etc
[12:08] <mup> Bug #1591225: Generated image stream is not considered in bootstrap on private cloud <juju-core:Incomplete> <https://launchpad.net/bugs/1591225>
[12:09] <mgz> thomnico: for this one, it seems like image-metadata-url is just not set in environments.yaml or the streams have been incorrectly referenced
[12:09] <thomnico> mgz, SO ??
[12:10] <mgz> which is what cheryl suggested
[12:10] <thomnico> I have been bootstraping juju on openstack for 3 years and helped customer doing it
[12:10] <dimitern> frobware: you're testing my PR? does that eth0.cfg mess things up?
[12:10] <thomnico> except for a recent change, it is not a PEBKAC problem as far as I can tell
[12:10] <mgz> thomnico: we haven't released a new 1.25 recently
[12:11] <thomnico> in 1591488 the cli tells juju to read the metadata and juju ignored it
[12:11] <mgz> thomnico: as far as I can see, image-metadata-url is not set
[12:11] <mgz> it must be.
[12:12] <thomnico> that is new
[12:12] <thomnico> and not documented as far as I can tell
[12:13] <mgz> it's what howto-privatecloud tells you to do
[12:13] <thomnico> once set and checked I have : 1591499
[12:13] <frobware> dimitern: no, was going back to http://reviews.vapour.ws/r/4969/
[12:13] <frobware> dimitern: that it would be painless and quick (ha!)
[12:14] <dimitern> frobware: ah :)
[12:15] <mgz> thomnico: does 192.168.16.5 mean the same thing from the machine being brought up in the private cloud?
[12:15] <mgz> if it can't route there it won't work
[12:15] <thomnico> yes
[12:15] <frobware> dimitern: the problem with switching to ip route commands is we have no unit tests - we didn't before, but we were /only/ calling ifup/down
[12:15] <thomnico> I use a jumpserver and they share the private network
[12:16] <mgz> thomnico: then you're probably into needing to set custom loglevel with trace for juju.environs.simplestreams then
[12:16] <thomnico> I checked it can be reached from the started VM
[12:17] <dimitern> frobware: we can use the PatchExecutableAsEchoArgs (or the newer PatchExecHelper natefinch did recently)
[12:17] <frobware> dimitern: for?
[12:17] <thomnico> can you copy-paste the way to do that?
[12:17] <frobware> dimitern: oh, tests.
[12:17] <frobware> dimitern: sure
[12:17] <frobware> dimitern: however, the implementation is python.
[12:19] <dimitern> frobware: I guess that will make it easier (at the slight expense of introducing non-go tests :)
[12:19] <frobware> dimitern: we still have the option of going back to ifup/down in light of some understanding around the boot failure
[12:21] <mgz> something like export JUJU_LOGGING_CONFIG="<root>=DEBUG;juju.environs.simplestreams=TRACE"
[12:21] <dimitern> frobware: I don't recall the ifup/down was tested?
[12:21] <frobware> dimitern: it wasn't. my point is that we were doing significantly less
[12:21] <thomnico> thanks mgz will update the bug asap
[12:22] <frobware> dimitern: and was there / is there any value in validating that we call a command ifup/down?
[12:22] <dimitern> frobware: I guess not much
[12:22] <frobware> dimitern: now we have lots of commands with varying args and we don't have much validation that we haven't changed something
[12:23] <dimitern> frobware: but why go back to ifup/down when we know the boot slowdowns and issues ?
[12:23] <mgz> thomnico: remember --keep-broken stops juju taking the bootstrap machine down if it fails
[12:23] <thomnico> great to know too :)
[12:23] <frobware> dimitern: separation of concerns. re my email a week ago: why don't we get the root cause fixed
[12:24] <thomnico> ssh tail -f to local file was working too
[12:24] <thomnico> FYI if it helps I can provide access to the environment
[12:25] <dimitern> frobware: ideally, we'll wait for the root cause to get fixed
[12:25] <dimitern> frobware: however, judging by the comments on that ifupdown bug about restarting networking, it's highly unlikely
[12:26] <thomnico> will need 1h or so to reproduce and collect traces .. cheers,
[12:39]  * dimitern whew... all tests pass again, after removing >7700 lines
[13:03] <redelmann> hi there, anyone from juju resources? I have a short question
[13:03] <redelmann> is there any way to set a default resource in metadata.yaml?
[13:04] <redelmann> i'm looking to set a resource from local charmdir on deploy
[13:07] <redelmann> my python workaround: mypath = resource_get(name) or os.path.join(charm_dir(), name)
[13:24] <dimitern> mgz: ping
[13:27] <mgz> dimitern: yo
[13:28] <dimitern> mgz: hey, I'm about to propose a PR which removes the legacy address allocation code
[13:29] <mgz> okay
[13:29] <dimitern> mgz: once it lands, the CI job on AWS that uses the feature flag will no longer pass
[13:29] <mup> Bug #1591499 opened: Bootstrap timeout and fail on private cloud <cpe-sa> <juju-core:New> <https://launchpad.net/bugs/1591499>
[13:29] <mgz> I'll make it 1.25 only
[13:30] <dimitern> mgz: ok, thanks!
[13:30] <mgz> so, for now aws containers are just not addressable right?
[13:30] <dimitern> yeah
[13:31] <dimitern> with the fan everywhere PoC though, we should get addressable containers by default
[13:45] <dimitern> frobware, voidspace, babbageclunk: here it is - http://reviews.vapour.ws/r/5048/, please have a look
[13:46] <frobware> dimitern: do you think there's any value in having a branch to CI test this change?
[13:47] <dimitern> frobware: we can do that, but I doubt it's necessary
[13:48] <frobware> dimitern: sure - just posing the question. And then I read the scrollback with mgz - so seems fine
[13:51] <mgz> frobware: yeah, I just disabled/version-selected the ci jobs in advance
[14:03] <dimitern> voidspace: if you can also have a look at http://reviews.vapour.ws/r/5040/ will be great ;)
[14:33] <babbageclunk> fwereade: The service -> application change - do you know if there's still stuff being done on it?
[14:35] <babbageclunk> fwereade: My understanding is it was just made to things that were externally visible (to make it vaguely manageable). If I'm making changes to internal things that are still using the old name, is it reasonable to also convert them to the new name at the same time?
[14:54] <natefinch> dimitern, frobware: where are we with the lxd container networking issues?  my lxc to lxd conversion branch (where we drop lxc support) can't land until we have parity with the old abilities to network lxc containers.
[15:00] <alexisb> natefinch, we will not achieve complete parity given functional changes in networking of lxd
[15:01] <alexisb> natefinch, there was one specific bug we needed to address for CI
[15:01] <alexisb> natefinch, let me see if I can find it
[15:02] <fwereade> babbageclunk, yes please, do convert as you can
[15:02] <babbageclunk> fwereade: ok, thanks
[15:03] <mgz> alexisb: dimitern has proposed a change that just pulls out the old feature flag path completely
[15:07] <alexisb> mgz, yep I see it
[15:08] <voidspace> dimitern: ping
[15:09] <katco> redir: standup time
[15:14] <thomnico> hello again folks  Bug #1591499 updated with trace
[15:14] <mup> Bug #1591499: Bootstrap timeout and fail on private cloud <cpe-sa> <juju-core:New> <https://launchpad.net/bugs/1591499>
[15:21] <alexisb> katco, redir is sprinting this week I am pretty sure
[15:21] <katco> alexisb: ah ok
[15:25] <dimitern> voidspace: pong
[15:26] <dimitern> natefinch: we have a few known lxd issues, most of which are in progress of getting fixed
[15:28] <dimitern> natefinch: apart from those, I believe we have feature parity with lxc
[15:29] <natefinch> dimitern: awesome :)
[15:29] <voidspace> dimitern: I'm (still) trying to understand link layer device parent references
[15:30] <voidspace> dimitern: when generating increment ops we attempt to parse the name as a global key first - to see if it's a reference to another machine
[15:30] <voidspace> dimitern: as far as I can tell, parentname is only *ever* set to a device name on the same machine
[15:30] <voidspace> dimitern: am I missing something?
[15:31] <dimitern> voidspace: ParentName can be allowed to be a global key in only one case
[15:31] <voidspace> dimitern: where is the code for that?
[15:31] <voidspace> dimitern: (and what is that case)
[15:31] <voidspace> if it's at all possible I'll have to allow for it I suppose
[15:32] <dimitern> voidspace: when ParentName = "m#42#d#br-eth0" the child device will be e.g. m#42/lxd/0#d#eth1
[15:32] <voidspace> dimitern: right, but where in the code is that parent name generated
[15:33] <voidspace> dimitern: in the machiner it looks like it's only ever set to device name
[15:33] <mup> Bug #1592031 opened: update-clouds does not create public-clouds.yaml if public-clouds.syaml matches compiled-in values <compatibility> <jujuqa> <update-clouds> <juju-core:Triaged> <https://launchpad.net/bugs/1592031>
[15:33] <voidspace> ParentDevice
[15:33] <dimitern> voidspace: assuming machine 42 is the host machine of a container 42/lxd/0, and "eth1" is the container ethernet device connected to the "br-eth0" bridge device on the host machine
[15:33] <dimitern> voidspace: yeah, the only place that uses the above scenario is SetContainerLinkLayerDevices, called in the provisioner
[15:34] <dimitern> voidspace: ParentDevice returns the device itself, if it's set
[15:34] <voidspace> dimitern: ah, the provisioner
[15:34] <voidspace> dimitern: I did miss that - thanks
[15:35] <dimitern> voidspace: yeah, but there are state tests as well
[15:35] <dimitern> (apiserver/provisioner)
[15:37] <dimitern> natefinch: btw my monstrous branch removing the legacy address allocation (http://reviews.vapour.ws/r/5048/) will conflict with your lxc removal branch, but hopefully not too badly, as it's mostly removals
[15:41] <natefinch> dimitern: yeah, we'll see... I don't think it'll be a big deal... probably just resolving you changed it vs. I deleted it, and vice versa.
[15:42] <natefinch> dimitern: between our two branches we deleted like 13,000 lines of code :D
[15:42] <dimitern> natefinch: yeah, and it feels great! :D
[15:43] <natefinch> cmars: is your npipe branch ready to go?
[15:45] <frobware> dimitern: ping, you about? Wanted to catch up regarding fe80:: addrs
[15:45] <dimitern> frobware: yeah
[15:46] <dimitern> frobware: it's not a big deal, I remembered reading about it and thought it might be causing some of the issues perhaps
[15:46] <frobware> dimitern: but does it definitively happen?
[15:47] <natefinch> cmars: I'm gonna answer that for you and say no it's not :)  at the very least there's some commented out code in there... ping me when you think it's good to go, or if you need me to take it over and get it to a landable state.
[15:47] <natefinch> alexisb: ^
[15:48] <frobware> dimitern: http://pastebin.ubuntu.com/17293984/
[15:48] <natefinch> cmars: we want to try to get that landed today, so if me taking it over is what it'll take, that's fine.  Just wanted to make sure you don't have local changes or anything.
[15:49] <dimitern> frobware: I haven't tested it specifically
[15:49] <frobware> dimitern: so that ^^ PB seems to propagate the fe80 addrs
[15:50] <dimitern> frobware: yeah, looks ok
[15:55] <dimitern> babbageclunk: thanks for sending that mail btw
[15:56] <babbageclunk> dimitern: Cheers - wasn't sure what info to put in. It was ok?
[16:04] <rick_h_> dimitern: frobware dooferlad call?
[16:04] <frobware> rick_h_: omw
[16:06] <dimitern> oops, omw
[16:20] <ericsnow> fwereade: ping
[16:32] <dimitern> babbageclunk: you've covered all the important bits I think
[16:32] <babbageclunk> dimitern: cool
[16:33] <dimitern> frobware: so about http://reviews.vapour.ws/r/5040/ ..
[16:33] <dimitern> frobware: should I leave it for tomorrow I guess?
[16:33] <frobware> dimitern: can I sleep on that one; I have a fix for my other review
[16:34] <frobware> dimitern: it may be that it's sensible to land your patch but only with a subset of my patch
[16:35] <frobware> dimitern: sound ok?
[16:35] <dimitern> frobware: ok we can sync tomorrow then; and the other one ? http://reviews.vapour.ws/r/5048/
[16:36] <frobware> dimitern: I would need to review but no real concerns there; we should update the release notes to say it has gone completely now
[16:36] <dimitern> frobware: yeah, good point - I'll add a comment on the docs planning doc
[16:40] <dimitern> ok, I should be going
[17:47] <bdx> hey has anyone hit this yet, or know a way around it -> https://bugs.launchpad.net/juju-core/+bug/1592101
[17:47] <mup> Bug #1592101: Error connecting with cached addresses <juju-core:New> <https://launchpad.net/bugs/1592101>
[17:57] <mup> Bug #1592101 opened: Error connecting with cached addresses <juju-core:New> <https://launchpad.net/bugs/1592101>
[18:04] <perrito666> bbl
[19:34] <mup> Bug #1588574 changed: Session already closed in state/presence <blocker> <ci> <intermittent-failure> <juju-core:Fix Released by dave-cheney> <https://launchpad.net/bugs/1588574>
[19:34] <mup> Bug #1590065 changed: container/lxd: One rename too far -> "application", "restart", "lxd-bridge" <regression> <juju-core:Fix Released by wallyworld> <https://launchpad.net/bugs/1590065>
[20:00] <alexisb> natefinch, in our 1x1 hangout when you are ready
[20:01] <natefinch> alexisb: omw
[20:03] <thumper> fwereade: call?
[20:06] <fwereade> thumper, joining
[20:15] <natefinch> ahh, tests running against mongodb on windows.  my favorite
[20:26] <natefinch> sinzui, mgz: is there a trick to getting the windows tests to run?  I get a million of these errors:   cannot replace tools directory: cannot update tools symlink: rename C:\\Users\\Nate\\AppData\\Local\\Temp\\check-6656273794189705274\\365\\var\\lib\\juju\\tools\\tmpfilea49d086a-99df-452f-8dda-3a7f0bd811b8 C:\\Users\\Nate\\AppData\\Local\\Temp\\check-6656273794189705274\\365\\var\\lib\\juju/tools/machine-0: The system cannot find the file specified.
[20:26] <natefinch> mgz, sinzui: (this is the cmd/jujud/agent tests, for reference
[20:30] <natefinch> ahh... I think the answer is "run the tests as administrator"
[20:38] <frobware> dooferlad: if you're about can you take a peek at my updates to http://reviews.vapour.ws/r/4969/
[20:40] <dooferlad> sorry frobware: not interruptible right now
[20:46] <mup> Bug #1592155 opened: restore-backup fails when attempting to 'replay oplog'. <juju-core:New> <https://launchpad.net/bugs/1592155>
[21:43] <thumper> davechen1y: this is the bug you are working on yes? https://bugs.launchpad.net/juju-core/+bug/1588143
[21:43] <mup> Bug #1588143: cmd/juju/controller: send on a closed channel panic <blocker> <race-condition> <juju-core:Triaged> <https://launchpad.net/bugs/1588143>
[21:47] <perrito666> this error is completely nonsensical: state changing too quickly; try again soon
[21:48] <perrito666> what is the user going to do to try that again soon?
[21:48] <perrito666> and what is more, state is not changing too quickly, the assertion is just plain broken
[21:51] <wallyworld> natefinch: how goes the lxc->lxd branch landing?
[22:02]  * perrito666 ponders learning to crochet something while running state tests
[22:13] <thumper> cherylj: the api explicit tagging branch landed
[22:15] <natefinch> wallyworld: I think I'm still blocked on the networking stuff. It's a little unclear.  I made a fix friday so we'd treat lxc containers defined in bundles as lxd.  That was easy, thankfully.  Worked on the npipe bug today: https://bugs.launchpad.net/juju-core/+bug/1581157  there's a fix proposed, but it needed some major work, so I've been working on that.  will try to get it landed tonight.
[22:15] <mup> Bug #1581157: github.com/juju/juju/cmd/jujud test timeout on windows <blocker> <ci> <regression> <test-failure> <unit-tests> <windows> <juju-core:Triaged by dave-cheney> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1581157>
[22:16] <wallyworld> natefinch: i'm told this morning that christian had fixed the bug already?
[22:16] <wallyworld> so we need to land the work if we can if we are sure the bug is fixed
[22:17] <natefinch> wallyworld: oh, I may have been confused
[22:18] <wallyworld> it was apparently a network related issue
[22:18] <wallyworld> and the sapphire guys were onto it, or that's my understanding
[22:18] <wallyworld> so may just need a retest to confirm and then you can land
[22:34] <alexisb> natefinch, you have a green light to land
[22:36] <natefinch> alexisb: thanks
[22:36] <natefinch> alexisb: rebased and doing some tests now
[22:41] <natefinch> I guess it works?
[22:41] <natefinch> are we supposed to be able to get to services deployed inside lxd from the outside?
[22:42] <natefinch> because, this is not super promising: public-address: 10.0.4.21
[22:43] <mup> Bug #1592179 opened: Juju2 with MAAS2, log shows errors after having created a controller <juju-core:New> <https://launchpad.net/bugs/1592179>
[22:46] <natefinch> but I don't think that worked with lxc either, so I guess that's fine
[22:56] <natefinch> landing now
[22:57] <perrito666> you shouldn't fly and chat :p
[22:57] <natefinch> heh
[22:57] <wallyworld> rick_h_: did we need to catch up about settings?
[22:57] <natefinch> afk for a bit, putting kids to bed
[23:02] <bdx> hey what's up everyone? - Quick question on the manual provider .... is `juju enable-ha` supported for manual provider?
[23:13] <natefinch-afk> bdx: I think you can do it with add-machine and then juju ensure-availability --to
[23:13] <natefinch-afk> bdx: I haven't tried it though
[23:13] <natefinch-afk> bdx: no reason it shouldn't work, in theory
[23:14] <bdx> natefinch-afk, yeah, except ensure-availability isn't a command anymore ... it's now 'enable-ha'
[23:14] <natefinch-afk> bdx: oh, I was looking at juju1.. for 2, not sure
[23:15] <bdx> yea ... I don't think the functionality is there ....
[23:15] <perrito666>   wallyworld anastasiamac axw redir brt
[23:15] <wallyworld> ok
[23:16] <wallyworld> bdx: the only difference in 2.0 is that the command has been renamed. enable-ha --to should work for manual
[23:17] <bdx> wallyworld: yea, even after I add a machine to my 'controller' model, so I have 0 and 1 ... I run 'juju enable-ha --to 1' and get -> ERROR failed to create new controller machines: use "juju add-machine ssh:[user@]<host>" to provision machines
[23:17] <wallyworld> bdx: you need 3 or 5 or 7 etc machines for ha
[23:18] <axw> --to 1,2 I think?
[23:18] <wallyworld> so you need to add at least 2 additional machines
[23:18] <bdx> ooooh, let me try
[23:19] <bdx> sick
[23:19] <bdx> wallyworld: good call
[23:20] <wallyworld> yay
[23:23] <bdx> natefinch-afk, wallyworld: thanks!
[23:45] <rick_h_> wallyworld: sure, got time?
[23:46] <wallyworld> rick_h_: join us in https://hangouts.google.com/hangouts/_/canonical.com/tanzanite-stand if you want
[23:46] <rick_h_> omw