[01:32] <davecheney> does anyone know where the code that fakes out the simple streams is ?
[01:32] <davecheney> [LOG] 0:00.023 DEBUG juju.environs.simplestreams read metadata index at "file:///tmp/check-6283502795135149108/15/tools/streams/v1/index2.sjson"
[01:32] <davecheney> ^ the one that generates this local file
[01:32] <davecheney> for some reason the ec2 tests are different to all the other providers
[01:42] <thumper> nope
[01:42] <thumper> not me
[02:09] <anastasiamac> davecheney: ToolsFixture?..
[02:10] <davecheney> anastasiamac: thanks
[02:10] <davecheney> i'll try to figure out where that is hooked up
[02:10] <davecheney> and why it is special in the ec2 provider tests
[02:16] <anastasiamac> davecheney: i think test roundtripper is what delivers this stuff in tests :D
[02:16] <anastasiamac> davecheney: have fun
[02:35] <davecheney> oh boy, i think i've cracked it
[02:36] <davecheney> fixed the ec2 test
[02:36] <davecheney> imagetesting "github.com/juju/juju/environs/imagemetadata/testing" is the magic import
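For anyone hitting the same thing: this is Go's side-effect-import idiom, where a test-support package installs fakes from its init function, so a single import line changes test behaviour. A minimal, self-contained sketch of the pattern (hypothetical names throughout; the real hook lives in juju's environs/imagemetadata/testing package):

    package main

    import "fmt"

    // readMetadata is a package-level hook. Production code assigns the
    // real network fetcher; a test-support package swaps in a fake from
    // its init(), so importing that package (even blank, via
    // `import _ "..."`) is enough to install the fake before tests run.
    var readMetadata = func(url string) string { return "real: " + url }

    func init() {
        // In juju this assignment lives in the imported testing package.
        readMetadata = func(url string) string { return "fake: " + url }
    }

    func main() {
        fmt.Println(readMetadata("streams/v1/index2.sjson"))
    }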
[02:44] <davecheney> menn0: anastasiamac thumper https://github.com/juju/juju/pull/4957
[02:44] <davecheney> could I get a second look
[02:44] <davecheney> this change is bigger than it started out because upgrading the testing dependency hit a lot of places
[02:44] <davecheney> i'm 90% confident that I've backported all the AddCleanup fixes from master
[02:45] <davecheney> i'd be more confident, but the local tests haven't finished running for me
[02:49] <natefinch> man that embedded test suite stuff is wicked error prone
[02:52] <thumper> davecheney: hmm... what changed with the AddSuiteCleanup code?
[02:52] <thumper> I have vague recollections...
[02:57] <natefinch> thumper: I believe it's just that there's no more separation between test cleanup and suite cleanup. AddCleanup does the right thing depending on when it's called
[02:58] <natefinch> thumper: since we found there were places where we were calling the wrong one, and it was causing problems.
[02:58]  * thumper nods
[02:59] <natefinch> thumper: https://github.com/juju/testing/blob/master/cleanup.go#L59
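For context, a minimal sketch of the unified cleanup natefinch describes (signatures simplified; the real implementation is juju/testing's cleanup.go linked above): a single AddCleanup that files the function under the test scope while a test is running, and under the suite scope otherwise.

    package cleanup

    type CleanupSuite struct {
        inTest     bool
        testStack  []func()
        suiteStack []func()
    }

    // AddCleanup does the right thing regardless of when it is called.
    func (s *CleanupSuite) AddCleanup(f func()) {
        if s.inTest {
            s.testStack = append(s.testStack, f)
            return
        }
        s.suiteStack = append(s.suiteStack, f)
    }

    func (s *CleanupSuite) SetUpTest() { s.inTest = true }

    func (s *CleanupSuite) TearDownTest() {
        runLIFO(&s.testStack)
        s.inTest = false
    }

    func (s *CleanupSuite) TearDownSuite() { runLIFO(&s.suiteStack) }

    // runLIFO runs cleanups in reverse registration order, then clears them.
    func runLIFO(stack *[]func()) {
        for i := len(*stack) - 1; i >= 0; i-- {
            (*stack)[i]()
        }
        *stack = nil
    }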
[03:04] <davecheney> thumper: there are a few changes here
[03:04] <davecheney> 1. updated the testing dependency to spot suite mistakes
[03:04] <davecheney> 2. updated testing itself to remove the deprecated AddSuiteCleanup method
[03:04] <davecheney> this is already committed to master
[03:04] <davecheney> I backported this fix to 1.25
[03:04] <davecheney> then adjusted the code to avoid calling AddSuiteCleanup
[03:05] <davecheney> and backported all of jam's fixes for various suite failures to 1.25
[03:08] <thumper> davecheney: lgtm
[03:08] <mup> Bug #1566024 changed: The juju GCE error message references the wrong key name <juju-core:New> <https://launchpad.net/bugs/1566024>
[03:53]  * thumper grumbles
[04:47] <mup> Bug #1564163 changed: environment name in credentials file is not a tag <juju-core:Invalid> <juju-core (Ubuntu):Invalid> <https://launchpad.net/bugs/1564163>
[04:47] <mup> Bug #1564165 changed: Credentials file displays unhelpful message for syntax errors <juju-core:New> <juju-core (Ubuntu):New> <https://launchpad.net/bugs/1564165>
[04:47] <mup> Bug #1566130 opened: awaiting error resolution for "install" hook <juju-core:Triaged> <https://launchpad.net/bugs/1566130>
[05:15] <davecheney> func (s *UpgradeSuite) getAptCmds() []*exec.Cmd { s.aptMutex.Lock() defer s.aptMutex.Unlock() return s.aptCmds
[05:15] <davecheney> }
[05:15] <davecheney> ^ note, doesn't actually prevent a race
[05:15] <davecheney> unless you're only appending to that slice
[05:15] <davecheney> , maybe
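The issue davecheney is pointing at: the lock protects only the read of the slice header, so the caller walks away with a view onto the same backing array that other goroutines may still be appending to. A sketch of the usual fix, copying under the lock (the UpgradeSuite fields are reconstructed from the snippet above and may not match the real type):

    package upgrades

    import (
        "os/exec"
        "sync"
    )

    type UpgradeSuite struct {
        aptMutex sync.Mutex
        aptCmds  []*exec.Cmd
    }

    func (s *UpgradeSuite) getAptCmds() []*exec.Cmd {
        s.aptMutex.Lock()
        defer s.aptMutex.Unlock()
        // Return a copy so callers never share the backing array with
        // concurrent appenders.
        cmds := make([]*exec.Cmd, len(s.aptCmds))
        copy(cmds, s.aptCmds)
        return cmds
    }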
[05:22] <menn0> anastasiamac: review please http://reviews.vapour.ws/r/4431/
[05:27] <anastasiamac> menn0: looking \o/
[06:50] <davecheney> dummy needs some love http://reviews.vapour.ws/r/4433/
[07:46] <rogpeppe> davecheney: FWIW I think the "p := &providerInstance" thing was just to make it more convenient to use. There's nothing wrong about it per se.
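(The pattern in question, roughly, with illustrative names rather than the actual juju declarations: take the address of a package-level provider value once so subsequent calls go through the pointer methods.)

    package provider

    type environProvider struct{}

    func (*environProvider) Open() {}

    var providerInstance environProvider

    func example() {
        p := &providerInstance // purely a convenience alias
        p.Open()
    }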
[07:49] <rogpeppe> anyone know if dimitern is gonna be around today?
[07:49] <frobware> rogpeppe: tomorrow
[07:51] <rogpeppe> frobware: i'm interested to inquire about an apparent networking bug with multiple models in a controller. do you know who else might know about the networking stuff?
[07:52] <frobware> rogpeppe: try us "me, dooferlad, voidspace"
[07:53] <rogpeppe> frobware, dooferlad, voidspace: ok, so we got a controller to start an instance in another model (with different provider creds) yesterday, and the machine was network-isolated from the controller
[07:54] <rogpeppe> frobware, dooferlad, voidspace: i.e. it couldn't connect to the API server or ping the controller machine
[07:54] <rogpeppe> frobware, dooferlad, voidspace: this was in juju 1.25.4
[07:54] <rogpeppe> frobware, dooferlad, voidspace: i was hoping this might be a known issue that's been fixed in 2.0
[07:56] <frobware> rogpeppe: when you say network-isolated, by how much? at the risk of stating the obvious, no network connectivity ...
[07:56] <rogpeppe> frobware: we could ssh to it
[07:56] <rogpeppe> frobware: (only directly, but that's another bug :-\)
[07:57] <rogpeppe> frobware: i didn't test whether it could dial out to the outside world
[07:57] <frobware> rogpeppe: still a bit confused. is one instance running 1.25?
[07:58] <rogpeppe> frobware: everything was running 1.25
[08:00] <frobware> rogpeppe: otp, back in a bit...
[09:02] <dooferlad> voidspace, fwereade, jam: hangout?
[09:03] <voidspace> dooferlad: omw
[09:12] <voidspace> dooferlad: frobware: review please http://reviews.vapour.ws/r/4425/
[09:15] <fwereade> http://reviews.vapour.ws/r/4403/diff/1/?file=324207#file324207line52
[09:22] <fwereade> http://reviews.vapour.ws/r/4342/
[09:31] <voidspace> babbageclunk: hey, hi
[09:31] <babbageclunk> voidspace: hi!
[09:34] <frobware> babbageclunk: should be in the office around 10am tomorrow... trains permitting.
[09:34] <babbageclunk> voidspace: hmm, my irc connection died because I've dropped off ethernet, and the wifi is a bit spotty in the office.
[09:34] <voidspace> babbageclunk: ok
[09:34] <babbageclunk> frobware: great! I'll be in well before then.
[09:34] <voidspace> frobware: so, will you and babbageclunk be pairing tomorrow? (and the rest of the week?)
[09:34] <voidspace> frobware: sounds like a good plan
[09:36] <babbageclunk> voidspace: oh nice, restarting network-manager doesn't actually drop the connection.
[09:36] <frobware> babbageclunk, voidspace: I guess... I think there's some general stuff to go over. There's also the bug I'm looking at. And there's a couple of bugs to shift to dimiter. But, yes, in principle...
[09:36] <voidspace> ah yes, dimiter returns tomorrow - yay
[09:37] <babbageclunk> voidspace, frobware: we should try to get to a point where there are parallelisable bits of work to do on the maas provider.
[09:38] <frobware> babbageclunk: yep. and I'm going to need an intro to what's been done too. :)
[09:38] <voidspace> babbageclunk: frobware: it's easily parallelisable now
[09:39] <voidspace> babbageclunk: frobware: we need Instances, AvailabilityZones and acquireNode implementing for MAAS 2
[09:39] <voidspace> those are the next steps, all can be tackled separately and all already have support in gomaasapi
[09:39] <voidspace> we have the basics of test infrastructure in place too
[09:40] <voidspace> frobware: babbageclunk *should* be able to show you what we've done
[09:40] <babbageclunk> voidspace, frobware: I can have a go
[09:40] <voidspace> frobware: it's pretty straightforward - the diff of maas2 against master isn't too huge and shows it
[09:40] <voidspace> babbageclunk: frobware: we could topic it in standup
[09:40] <voidspace> or just a hangout
[09:41] <voidspace> it actually took us a week to get here, but the code isn't hard to understand, I don't think
[09:41] <voidspace> nor is the path ahead
[09:41]  * frobware needs to sort his craptop out...
[09:42] <voidspace> :-)
[09:44] <voidspace> oh, we'll need Spaces too
[09:45] <voidspace> we also have the endpoint for that already written
[10:06] <frobware> dooferlad: can I steal some of your time? h/w too. :)
[10:06] <dooferlad> frobware: sure. 2 mins.
[10:07] <frobware> dooferlad: 5. coffee.
[10:07]  * fwereade amusing typo: synchorinses for synchronises
[10:12] <frobware> dooferlad: ready whenever works for you. In standup HO.
[10:39] <mup> Bug #1566237 opened: juju ssh doesn't work with multiple models <juju-core:New> <https://launchpad.net/bugs/1566237>
[10:54] <jam> frobware: were you investigating bug #1565461
[10:54] <mup> Bug #1565461: deploy Ubuntu into an LXD container failed on Xenial <lxd> <juju-core:Triaged> <juju-core maas-spaces-multi-nic-containers:New> <juju-core maas-spaces-multi-nic-containers-with-master:New> <https://launchpad.net/bugs/1565461>
[10:54] <jam> ?
[10:55] <frobware> jam: not actively. sidetracked by bug #1565644
[10:57] <frobware> jam: but I added a comment to bug #1565461 just in case it was significant.
[10:57] <mup> Bug #1565461: deploy Ubuntu into an LXD container failed on Xenial <lxd> <juju-core:Triaged> <juju-core maas-spaces-multi-nic-containers:New> <juju-core maas-spaces-multi-nic-containers-with-master:New> <https://launchpad.net/bugs/1565461>
[10:57] <jam> k
[10:57] <frobware> jam: I haven't looked to see if the failure on xenial is related to juju not creating any network devices for the container
[10:58] <jam> frobware: fwiw I saw it on Trusty as long as you --upload-tools from Master.
[10:58] <jam> frobware: it might be the same bug, I didn't get a ip addr show from inside the container when I tested last.
[10:58] <frobware> jam: and there did you take a look at the LXD network profile that gets created?
[10:59] <jam> frobware: I'll go bootstrap now and do some debugging. I'll let you know when it is up and running.
[10:59] <frobware> jam: first pass validation since we did the multi nic support would be to check the LXD profile
[11:30] <jam> frobware: well I would be testing but it seems jujucharms.com is broken right now.
[11:31] <frobware> jam: do you need a charm? can you just add-machine lxd:0 in this case?
[11:35] <jam> frobware: fair point
[11:45] <voidspace> fwereade: ping
[11:45] <voidspace> fwereade: we're fixing a theoretical panic case in maasEnviron.Instances (that we hit in testing our new implementation for MAAS 2)
[11:46] <voidspace> fwereade: in the case that MAAS returns instances with different ids to the ones you requested you can get a slice of nil instances back
[11:46] <voidspace> fwereade: because the code that builds the map of id -> instance doesn't check the id is actually in the map when fetching them back
[11:47] <voidspace> fwereade: fixing that to return ErrPartialInstances when an id you requested isn't returned causes an existing test to fail
[11:48] <voidspace> fwereade: because it assumes that even in the case of an error return that the returned partial results will be present
[11:48] <voidspace> hmm... we've found a better way that makes this a non issue I think
[11:48] <mup> Bug #1566268 opened: poor error when "jujucharms.com" is down <error-reporting> <observability> <juju-core:Triaged> <https://launchpad.net/bugs/1566268>
[11:48] <mup> Bug #1566271 opened: It is hard to open juju API if you're not implementing a ModelCommand <juju-core:New> <https://launchpad.net/bugs/1566271>
[12:03] <jam> frobware: so I see a "juju-machine-0-lxd-0-network' profile that contains nothing
[12:03] <jam> the machine itself seems to only have a "lo"
[12:03] <jam> but somehow it came up correctly...
[12:04] <jam> frobware: ok, no it did not come up correctly
[12:05] <jam> machine-status is "running" but juju-status is "pending"
[12:05] <frobware> jam: can you try bouncing the node and adding a new container
[12:05] <jam> frobware: so it does seem to be bug #1564395
[12:05] <mup> Bug #1564395: newly created LXD container has zero network devices <bootstrap> <network> <juju-core:Triaged by frobware> <https://launchpad.net/bugs/1564395>
[12:05] <jam> frobware: restarting the host or just the container?
[12:05] <frobware> jam: yep, was suspicious (to me at least).
[12:05] <frobware> jam: just the node hosting the container
[12:06] <frobware> jam: the first container will still fail but I'm expecting subsequent containers to work correctly
[12:06] <jam> frobware: are we not detecting networking correctly without a reboot?
[12:07] <frobware> jam: bug introduced post March 21st... is as far as I got.
[12:07] <frobware> jam: sometimes... (rarely) we get eth0 added to the container. so, a timing issue IMO
[12:12] <jam> frobware: juju-machine-0-lxd-1-network is *also* empty
[12:13] <frobware> jam: sigh
[12:15] <frobware> jam: let me try again
[12:15] <jam> frobware: this is with a Trusty controller, but we'll want it to work there, too.
[12:15] <frobware> jam: my tip commit is probably behind master (a084e423e0586d2348963e6ba91aa3d2454997dd)
[12:16] <frobware> jam: any chance you could validate against xenial?
[12:16] <jam> 23 commits
[12:16] <jam> frobware: sure
[12:16] <jam> frobware: any reason to keep trusty around?
[12:17] <frobware> jam: not for me. :)
[12:18] <frobware> jam: just bootstrapping, back in 10
[12:21] <voidspace> jam: who does babbageclunk ping to get added to the juju team calendar?
[12:21] <jam> frobware: bootstrap is the new compile step?
[12:21] <jam> voidspace: I believe all team leads should be admin, but I can go do it
[12:21] <voidspace> jam: thanks!
[12:23] <jam> voidspace: he should be able to add items now
[12:23] <jam> have babbageclunk check to make sure I did it right
[12:25] <frobware> jam: heh, I bootstrapped to a node which ... will fail ... because that's configured to try and fix another issue ...
[12:25] <jam> apt-get is a bit slow today
[12:26] <frobware> jam: swings. roundabouts. repeat.
[12:26] <frobware> jam: and then I run into: ERROR some agents have not upgraded to the current model version 2.0-beta4.1: machine-0-lxd-0
[12:26] <frobware> jam: wow, today. ffs.
[12:26] <jam> not everything got to the broken version so you can't upgrade to the fixed one...
[12:26] <jam> frobware: maybe you can 'juju destroy-machine --force'
[12:27] <jam> ?
[12:27] <frobware> jam: I upgraded to MAAS 1.9.1 and no DNS running there anymore...
[12:27] <jam> frobware: no DNS at all? or they just moved where DNS is running?
[12:27] <frobware> jam: not sure if my 1.9.0 > .1 borked maas-dns. Either way named is not running anymore.
[12:28] <frobware> jam: could be because I futzed with my MTU setting to try and fix bug #1565644
[12:29] <jam> frobware: xenial is up and running
[12:29] <jam> trying now
[12:30] <frobware> jam: lxd-images is no longer a thing... ?
[12:31] <jam> frobware: no, "lxc image ubuntu:"
[12:31] <jam> frobware: but juju should handle those for you
[12:31] <jam> but you can do:
[12:31] <jam> "lxc launch ubuntu:trusty"
[12:31] <jam> it reads simplestreams directly
[12:32] <jam> frobware: on xenial juju-machine-0-lxd-0-network is empty
[12:32] <jam> trying reboot and 0/lxd/1
[12:32] <frobware> fingers crossed for at least one thing to kind-of work today.
[12:33] <frobware> I have too many things which are broken.
[12:35] <fwereade> voidspace, sorry!
[12:35] <fwereade> voidspace, I think ErrPartialInstances is a bit different
[12:36] <frobware> jam: whee... error: Get https://cloud-images.ubuntu.com ....... i/o timeout.
[12:36] <fwereade> voidspace, if maas tells us extra instances, that is annoying, but I think the reactions there are either to ignore or to go nuclear -- maas is making no sense, give back no instances and ErrMaasInsane
[12:37] <jam> frobware: juju-machine-0-lxd-0 is shown as being on the lxdbr0 bridge
[12:37] <jam> with an address
[12:38] <fwereade> voidspace, if we ignore that, which is reasonable, we return the instances we got (so long as we got at least one we asked for) and ErrPartialInstances if any are missing
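A sketch of the semantics fwereade describes (hypothetical helper and types; the real code is maasEnviron.Instances, and environs does define ErrNoInstances and ErrPartialInstances): check membership when reading the id -> instance map back, leave missing slots nil instead of panicking later, and signal how much was found through the error.

    package instances

    import "errors"

    var (
        ErrNoInstances      = errors.New("no instances found")
        ErrPartialInstances = errors.New("only some instances were found")
    )

    type Instance struct{ ID string }

    func lookup(ids []string, byID map[string]*Instance) ([]*Instance, error) {
        result := make([]*Instance, len(ids))
        found := 0
        for i, id := range ids {
            if inst, ok := byID[id]; ok { // the membership check the old code skipped
                result[i] = inst
                found++
            }
        }
        switch found {
        case len(ids):
            return result, nil
        case 0:
            return nil, ErrNoInstances
        }
        return result, ErrPartialInstances
    }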
[12:38] <frobware> jam: is it possible my current tip is no longer compatible with lxd as installed on xenial?
[12:38] <jam> frobware: well, the agent still didn't come up for 0-lxd-0
[12:39] <jam> and cloud-init-output.log looks very truncated
[12:39] <jam> checking if 0-lxd-1 comes up
[12:39] <frobware> jam: I cannot import any images
[12:39] <jam> frobware: 0-lxd-1 comes up with the *same* IPV4 address as 0-lxd-0 not a great sign
[12:39] <jam> 10.0.3.1 for both
[12:40] <frobware> jam: well, I knew it was a bug... just not working on it atm... :(
[12:40] <jam> frobware: juju-machine-0-lxd-1-network is also empty
[12:48] <frobware> jam: trying with tip of master now
[12:48] <frobware> jam: my maas named conf had duplicate entries
[12:49] <frobware> jam: (not sure how!)
[12:49] <voidspace> fwereade: I think we've fixed it in a sane way
[12:49] <fwereade> voidspace, cool
[12:49] <voidspace> fwereade: I like the idea of ErrMaasInsane
[12:50] <voidspace> fwereade: although the temptation would be just to return that for everything
[12:50] <fwereade> haha
[12:54] <voidspace> fwereade: on the maas2 work bootstrap now gets into StartInstance, which is nice tangible progress
[12:56] <voidspace> frobware: dooferlad: if you get a chance we now have two PR ready: http://reviews.vapour.ws/r/4425/
[12:57] <voidspace> frobware: dooferlad: plus one that didn't make it onto reviewboard (yet?) https://github.com/juju/juju/pull/4995
[12:57] <voidspace> ericsnow: this PR hasn't appeared on reviewboard: https://github.com/juju/juju/pull/4995
[13:01] <frobware> jam: so I'm not going entirely bonkers: http://pastebin.ubuntu.com/15629100/
[13:02] <frobware> jam: but there's a new issue now. /e/n/i in the container is not what it used to be.
[13:02] <voidspace> frobware: you coming to the maas meeting?
[13:02] <frobware> jam: http://pastebin.ubuntu.com/15629133/
[13:02] <frobware> voidspace: nope. too much entropy.
[13:02] <voidspace> frobware: sure
[13:03] <frobware> voidspace: I can if needed, but I guess you'll have more to discuss anyway
[13:04] <voidspace> frobware: we're fine
[13:05] <voidspace> frobware: and we're done
[13:05] <voidspace> babbageclunk: so, lunch
[13:06] <frobware> jam: https://bugs.launchpad.net/juju-core/+bug/1564395/comments/3
[13:06] <mup> Bug #1564395: newly created LXD container has zero network devices <bootstrap> <network> <juju-core:Triaged by frobware> <https://launchpad.net/bugs/1564395>
[13:18] <mup> Bug #1566303 opened: uniterV0Suite.TearDownTest: The handle is invalid <ci> <regression> <test-failure> <unit-tests> <windows> <juju-core:Incomplete> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1566303>
[13:26] <fwereade> voidspace, oops, missed that too: awesome!
[13:42] <perrito666> aghh this kill controller thing is ridiculous
[13:43] <rick_h_> perrito666: what now?
[13:44] <perrito666> I bootstrapped an lxd controller yesterday and now I cannot destroy it, and I am pretty sure nothing changed between yesterday and today
[13:44] <rick_h_> perrito666: :/
[13:45]  * perrito666 takes the killbyhand hammer
[13:46] <perrito666> its a good thing I actually love lxd so I dont mind playing with it
[13:47] <jcastro_> I had to kill the containers from underneath it
[13:48] <perrito666> apparently the lxc container is not there
[13:48] <jcastro_> and then blow away most of ~/.local/config/juju
[13:48] <perrito666> but juju thinks otherwise
[13:48] <jcastro_> yeah, look in ~/.local/config/juju
[13:48] <jcastro_> I removed everything but my creds from there and that got juju to stop lying to itself
[13:49] <perrito666> jcastro_: that IS a bug though
[13:49] <jcastro_> oh I agree 100%
[13:49] <perrito666> that is exactly what kill controller should do
[13:49] <perrito666> I am worried this goes beyond the current ongoing discussion about what is not working in kill controller
[13:50] <mbruzek> lxc list | grep -E 'juju-(.*)-machine(.*)' | awk '{print $2}' | xargs lxc stop
[13:50] <perrito666> I am extra worried that this did not blow up in someone's face in CI
[13:50] <perrito666> mbruzek: lxc says no running container
[13:50] <mbruzek> lxc list | grep -E 'juju-(.*)-machine(.*)' | awk '{print $2}' | xargs lxc delete
[13:50] <perrito666> its clearly juju
[13:50] <mbruzek> perrito666: Then juju needs to be more forceful when I tell it to KILL
[13:52] <mbruzek> perrito666: This happened to me yesterday which is why I had those commands in my history.
[13:52] <mbruzek> perrito666: I could not kill-controller it just sat there in a loop
[13:52] <perrito666> mbruzek: tx anyway, I suspect these are going to be useful to me soon
[13:53] <mbruzek> perrito666: I also helped out one of our partners (IBM) who was having problems with the local lxd provider
[13:54] <mbruzek> yesterday
[13:56] <katco> perrito666: mbruzek: if it's not listed here, please file a bug: https://blueprints.launchpad.net/juju-core/+spec/charmer-experience-lxd-provider
[13:57] <jcastro_> oooh, can we add a bug to this?
[13:58] <mbruzek> katco: https://bugs.launchpad.net/juju-core/+bug/1565872
[13:58] <mup> Bug #1565872: Juju needs to support LXD profiles as a constraint <adoption> <juju-release-support> <lxd> <juju-core:Triaged> <https://launchpad.net/bugs/1565872>
[13:58] <katco> jcastro: sure... do you have a "link a bug report" button or is that just the owner?
[13:58] <jcastro_> I appear to not have a link button
[13:58] <jcastro_> but it's the bug mbruzek just linked
[13:58] <katco> jcastro_: k np just toss me the #
[13:59] <katco> mbruzek: linked your bug
[14:00] <mbruzek> Thanks katco
[14:09] <cherylj> heh, it feels like we need a --force flag for kill-controller :)
[14:09] <cherylj> I've also hit the "kill spins in a loop" error.
[14:09] <natefinch> heh
[14:09] <cherylj> maybe if it doesn't complete in a certain amount of time, it falls back to just destroying through the provider
[14:10] <cherylj> ?
[14:12] <perrito666> well that is actually what it should be doing
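What cherylj is suggesting, as a rough sketch (destroyViaAPI and destroyViaProvider are hypothetical stand-ins, not juju functions): race the clean teardown against a deadline and fall back to tearing down through the provider when it loses.

    package killer

    import "time"

    func destroyViaAPI() error      { return nil } // hypothetical clean path
    func destroyViaProvider() error { return nil } // hypothetical brute-force path

    func killController(timeout time.Duration) error {
        done := make(chan error, 1)
        go func() { done <- destroyViaAPI() }()
        select {
        case err := <-done:
            if err == nil {
                return nil // clean teardown finished in time
            }
        case <-time.After(timeout):
            // clean teardown is stuck; stop waiting for it
        }
        return destroyViaProvider()
    }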
[14:15] <cherylj> perrito666: yeah, I'm going to open a bug
[14:15] <lazyPower> not to dogpile on, but it appears kill-controller is also not removing storage, nor is it removing units which had storage volumes attached at the time of issuing kill-controller. Working on getting a bug for you about this now
[14:15] <perrito666> bbl
[14:16] <mup> Bug #1565991 changed: juju commands don't detect a fresh juju 1.X user and helpfully tell them where to find juju 1.X <juju-core:Triaged> <https://launchpad.net/bugs/1565991>
[14:16] <mup> Bug #1564622 opened: Suggest juju1 upon first use of juju2 if there is an existing JUJU_HOME dir <juju-release-support> <juju-core:Triaged> <https://launchpad.net/bugs/1564622>
[14:16] <mup> Bug #1566332 opened: help text for juju remove-credential needs improving <helpdocs> <juju-core:New> <https://launchpad.net/bugs/1566332>
[14:16] <mup> Bug #1566339 opened: `juju run` needs a --all-machine, --all-service, --all-unit <juju> <machine> <run> <juju-core:New> <https://launchpad.net/bugs/1566339>
[14:16] <cherylj> hey jam, I see you made this statement in a PR:  "We're currently broken on AWS which makes it hard for me to evaluate this" - what's going on with AWS?
[14:21] <frobware> cherylj: at a guess, related to https://launchpad.net/bugs/1564395
[14:21] <mup> Bug #1564395: newly created LXD container has zero network devices <bootstrap> <network> <juju-core:Triaged by frobware> <https://launchpad.net/bugs/1564395>
[14:22] <cherylj> hmm, fun
[14:23] <frobware> cherylj: the trouble is nobody working on it. :(
[14:23] <frobware> cherylj: although I was but now preempted
[14:24] <katco> frobware: alas, we are interrupt driven :|
[14:29] <natefinch> katco: crap, gotta help my wife with our accountant.... she's there now and needs my help.  I'll almost certainly miss the standup.  Sorry
[14:29] <natefinch> katco: I should be back by 15-30 minutes after standup is scheduled to start, though.
[14:30] <katco> natefinch: let's talk when you get back
[14:30] <natefinch> katco: yep
[14:34] <mup> Bug #1566345 opened: kill-controller leaves instances with storage behind <juju-core:New> <https://launchpad.net/bugs/1566345>
[14:40] <mup> Bug #1566345 changed: kill-controller leaves instances with storage behind <juju-core:New> <https://launchpad.net/bugs/1566345>
[14:41] <frobware> dooferlad: ping - any chance of using one of your NUCs? If not I'll look elsewhere
[14:43] <rick_h_> frobware: you need a nuc? what else do you need around it?
[14:43] <rick_h_> frobware: there's two free in http://maas.jujugui.org/MAAS/#/nodes
[14:44] <frobware> rick_h_: ability to change the MTU for jumbo frames
[14:44] <frobware> rick_h_: might affect the rest of your network :-D
[14:44] <frobware> rick_h_: needs two physical NICs
[14:45] <babbageclunk> jam: yup, adding me on the calendar worked - thanks!
[14:46] <frobware> rick_h_: can I grab an account on there anyway?
[14:46] <mup> Bug #1566345 opened: kill-controller leaves instances with storage behind <juju-core:New> <https://launchpad.net/bugs/1566345>
[14:46] <rick_h_> frobware: sure thing, what's your LP username
[14:47] <frobware> rick_h_: frobware
[14:48] <frobware> rick_h_: if it has two NICs I'll try anyway
[14:48] <rick_h_> frobware: yes
[14:48] <rick_h_> frobware: sec
[14:52] <cherylj> hey lazyPower, regarding bug #1566345 - are you getting errors that your rate limit has been exceeded?
[14:52] <mup> Bug #1566345: kill-controller leaves instances with storage behind <juju-core:New> <https://launchpad.net/bugs/1566345>
[14:52] <lazyPower> cherylj - negative, i think what happened is the volumes weren't unmounted during the storage-detaching hook (we haven't implemented this) and everything was left behind
[14:52] <cherylj> lazyPower: I'm wondering if your bug is another side effect of bug 1537620
[14:52] <lazyPower> but no log errata regarding rate limits
[14:52] <mup> Bug #1537620: ec2: destroy-controller blows the rate limit trying to delete security group - can leave instances around <2.0-count> <ci> <jujuqa> <juju-core:Triaged> <https://launchpad.net/bugs/1537620>
[14:52] <cherylj> lazyPower: ah, ok
[15:01] <katco> ericsnow: standup time
[15:03] <voidspace> natefinch-taxes: are you here today, or are you too heavily taxed?
[15:03] <voidspace> natefinch-taxes: you're OCR :-)
[15:10] <katco> voidspace: he's here
[15:10] <katco> voidspace: will be back momentarily
[15:11] <voidspace> katco: cool, thanks
[15:11] <voidspace> ericsnow: ping
[15:11] <ericsnow> voidspace: hey
[15:11] <ericsnow> voidspace: thanks for getting that poster stuff sorted out
[15:12] <voidspace> ericsnow: no problem
[15:12] <voidspace> ericsnow: we have a PR that didn't make its way onto reviewboard
[15:14] <voidspace> ericsnow: https://github.com/juju/juju/pull/4995
[15:14] <ericsnow> voidspace: k
[15:14] <voidspace> ericsnow: looks like you're lumbered with reviewboard issues for life... :-)
[15:14] <ericsnow> voidspace: mwahaha
[15:16] <mup> Bug #1566362 opened: help text for juju add-credential needs improving <helpdocs> <juju-core:New> <https://launchpad.net/bugs/1566362>
[15:16] <mup> Bug #1566367 opened: help text for juju upgrade-juju needs improving <helpdocs> <juju-core:New> <https://launchpad.net/bugs/1566367>
[15:16] <mup> Bug #1566369 opened: help text for juju ssh needs improving <helpdocs> <juju-core:New> <https://launchpad.net/bugs/1566369>
[15:16] <cherylj> frankban: you still around?
[15:19] <mgz> voidspace: that one is almost certainly just that the guy isn't in the right github group
[15:19] <voidspace> mgz: I don't think so
[15:19] <voidspace> mgz: his other PR worked fine
[15:20] <frankban> cherylj: I am, on call
[15:20] <mgz> hm, I take it back, is in hackers and did set public
[15:20] <mgz> eric's up then :P
[15:20] <cherylj> frankban: np, ping me when you get a minute, please :)
[15:20] <frankban> cherylj: sure
[15:21] <cherylj> thanks!
[15:29] <cherylj> ericsnow: thanks for fixing bug 1560201!
[15:29] <mup> Bug #1560201: The Client.WatchAll API command never responds when the model has no machines <2.0-count> <juju-release-support> <kanban-cross-team> <landscape> <juju-core:Fix Committed by ericsnowcurrently> <https://launchpad.net/bugs/1560201>
[15:30] <ericsnow> cherylj: glad to do it
[15:41] <natefinch> katco: back
[15:41] <katco> natefinch: moonstone
[15:41] <natefinch> katco: going
[15:56] <frankban> cherylj: I am available now
[15:58] <cherylj> hey frankban, I saw that the embedded-gui branch had a blessed CI run.  Are there things in there you want to get merged into master?
[15:58] <cherylj> is it ready?
[15:59] <frankban> cherylj: we already merged what's ready there, and we'll need to merge more from there before eow
[16:00] <fwereade> hmm, it seems the cat has learned to unlock the front door from inside on her own
[16:00] <fwereade> this may be an actual problem
[16:05] <cherylj> frankban: sorry, I'm in a stand up, so my responses are slow.   Are you done with what you need to put into embedded-gui?
[16:06] <ericsnow> voidspace: is https://github.com/juju/juju/pull/4995 based correctly? (will merge correctly)
[16:06] <frankban> cherylj: no, we are working this week on remaining stuff (basically retrieving the GUI from simplestreams)
[16:06] <ericsnow> voidspace: sometimes an out-of-sync base can cause RB trouble
[16:06] <cherylj> frankban: ok, thanks.
[16:06] <cherylj> frankban: I have a bundle bug question too
[16:06] <cherylj> frankban: can you take a quick look at bug 1564057?
[16:06] <mup> Bug #1564057: juju2: Charms fail with series mismatch when deployed to containers in bundle <juju-core:Triaged> <https://launchpad.net/bugs/1564057>
[16:08] <cherylj> frankban: I took an initial stab at fixing, but not sure if that's the right fix
[16:09] <cherylj> frankban: it seems weird to require "series" in the charm stanza when it could be implied from the charm store url?
[16:11] <frankban> cherylj: looking
[16:16] <rogpeppe> dooferlad, frobware: i've managed to reproduce the bug i talked about this morning
[16:16] <voidspace> rogpeppe: our team hasn't touched the code around multiple models
[16:17] <rogpeppe> voidspace: ok, but this is really a networking issue
[16:17] <voidspace> rogpeppe: everything juju does involves networking :-)
[16:17] <rogpeppe> voidspace: on at least one ec2 region, if you create a model with different aws creds, the instances are network-isolated from the controller
[16:18] <frobware> rogpeppe: otp (sorry!)
[16:18] <voidspace> rogpeppe: ah right - so the model needs to use the public ips to talk to the controller then
[16:18] <voidspace> rogpeppe: because the model *is* network isolated from the controller
[16:18] <rogpeppe> voidspace: no, it doesn't seem to be able to use public ips either
[16:19] <voidspace> rogpeppe: doesn't seem to be *able* to use them (it tries and they don't work)? weird
[16:19] <rogpeppe> voidspace: how are the units meant to talk to the API server then?
[16:19] <voidspace> rogpeppe: two different aws accounts *are* network isolated
[16:19] <voidspace> rogpeppe: if you're using the same meta account you might be able to setup routing between them
[16:19] <rogpeppe> voidspace: so why does it work in us-east?
[16:20] <voidspace> but the public ips should still work
[16:20] <rogpeppe> voidspace: we're relying on model units being able to talk to the controller API server
[16:20] <rogpeppe> voidspace: i'll just check
[16:21] <frankban> cherylj: I agree we should infer the series in the charm stanza from the URL, falling back to the stanza's series, then to the global bundle one I guess
[16:21] <frankban> cherylj: that's a good bug
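The fallback chain frankban outlines, as a sketch (charmURL stands in for charm.URL; only the Series field matters here, and the helper is hypothetical):

    package bundle

    type charmURL struct{ Series string }

    // seriesFor prefers a series encoded in the charm URL, then the
    // service stanza's explicit series, then the bundle-wide default.
    func seriesFor(curl charmURL, stanzaSeries, bundleSeries string) string {
        if curl.Series != "" {
            return curl.Series
        }
        if stanzaSeries != "" {
            return stanzaSeries
        }
        return bundleSeries
    }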
[16:22] <cherylj> frankban: okay, so passing it into the addMachineParams part looked ok?
[16:22] <rogpeppe> voidspace: the env got torn down, just trying again
[16:22] <ericsnow> natefinch: which are the patches you need reviewed still?
[16:22] <frankban> cherylj: let me check the diff
[16:23] <voidspace> rogpeppe: did this used to work? If it never worked I still say it's a bug in the multiple model implementation. :-)
[16:23] <rogpeppe> voidspace: i'm not gonna argue who's responsible
[16:23] <voidspace> rogpeppe: if we've broken it then fair enough. (Like everyone else we have a lot on our plate)
[16:24] <voidspace> rogpeppe: I just don't think we have capacity to take this on.
[16:24] <rogpeppe> voidspace: neither you nor anyone else
[16:24] <voidspace> rogpeppe: I'll help in any way I can with diagnosis
[16:24] <voidspace> rogpeppe: sure
[16:24] <rogpeppe> voidspace: but it's a critical bug AFAICS
[16:24] <voidspace> rogpeppe: in which case we have to look at who *is* responsible.
[16:24] <voidspace> rogpeppe: you may well be right :-(
[16:25] <rogpeppe> voidspace: i'm not pointing the finger at you - i just know that your team's been doing some of the network stuff
[16:25] <natefinch> ericsnow: anything on reviewboard withouit reviews.... hold off on the long lived macaroon one, though, since that's still WIP
[16:25] <voidspace> rogpeppe: fair enough
[16:25] <natefinch> ericsnow: (just marked it as such)
[16:25] <voidspace> rogpeppe: just a bit touchy about workload right now!
[16:25] <rogpeppe> voidspace: so thought i might get a useful answer (which I have, thanks!)
[16:25] <rogpeppe> voidspace: aren't we all?
[16:26] <ericsnow> natefinch: what about "WIP: sprinkle..."
[16:26] <voidspace> rogpeppe: :-)
[16:26] <voidspace> natefinch: we have a few branches up for review old boy
[16:26] <voidspace> natefinch: this is the oldest http://reviews.vapour.ws/r/4425/
[16:27] <natefinch> ericsnow: lemme check on that one.. might be ready to remove the WIP
[16:27] <voidspace> natefinch: and this one didn't make it onto reviewboard https://github.com/juju/juju/pull/4995
[16:28] <natefinch> voidspace: ok, I'll try to take a look.  I'm slammed with work due Friday, but I know I'm on call today, so will do my best.
[16:28] <rogpeppe> voidspace: we've got an instance that is currently exhibiting the issue if you've got a moment to spare
[16:28] <frankban> cherylj: your changes look good, except for the fallback that needs to be implemented, and if I am not missing something, perhaps we should pass the series also in the case where a new machine is created to place a unit?
[16:28] <voidspace> rogpeppe: I'm pairing with babbageclunk but I can multitask on IRC
[16:29] <voidspace> rogpeppe: he knows what he's doing right now anyway
[16:30] <cherylj> frankban: that's probably true.  The bundle I was looking at specified the series for the machines when declaring them in the 'machines' section
[16:30] <ericsnow> natefinch: is http://reviews.vapour.ws/r/4269/ ("backing support...") still active?
[16:31] <natefinch> ericsnow: yes
[16:31] <ericsnow> natefinch: k
[16:31] <cherylj> frankban: I can test what happens with out that
[16:31] <natefinch> ericsnow: all my branches are active... just removed the WIP from the sprinkle one
[16:31] <ericsnow> natefinch: reviewing now
[16:31] <frankban> cherylj: a placement can just specify "new", without referring to a declared machine
[16:31] <frankban> cherylj: I think in that case we should infer the series
[16:31] <cherylj> frankban: ah, ok, I can try that too
[16:32] <cherylj> frankban: is this bug something you could take?  I can try to get to it later this week if not
[16:33] <cherylj> katco: interesting lxd issue I consistently hit:  bug 1566420.  Not that it's urgent, just very weird.  Thought your team might find it interesting
[16:33] <mup> Bug #1566420: lxd doesn't provision instances on first bootstrap in new xenial image <juju-core:Triaged> <https://launchpad.net/bugs/1566420>
[16:34] <katco> cherylj: eh? that does seem weird
[16:34] <cherylj> katco: very.
[16:34] <frankban> cherylj: that would be awesome, we could sync up later this week and try to find a slot?
[16:35] <cherylj> katco: I usually provision a new xenial machine in aws and install juju2 on it to test lxd in a clean environment, and I hit this every time
[16:36] <cherylj> frankban: sure, sounds good
[16:36] <katco> cherylj: i can't immediately think of any theories as to why that would happen...
[16:36] <frankban> cherylj: thanks
[16:36] <cherylj> katco: I guess I should upload the machine-0.log while I still have this up.  Not that it's hard to reproduce
[16:37] <cherylj> katco: yeah, it's a weird, wtf kinda bug
[16:37] <cherylj> so I had to share :)
[16:37] <katco> hehe
[16:38] <rogpeppe> voidspace: ok, i'm now on the instance
[16:38] <rogpeppe> voidspace: i can't ping the controller on its public ip address
[16:38] <rogpeppe> voidspace: or its private one
[16:38] <rogpeppe> voidspace: can you think of another address to try?
[16:38] <voidspace> rogpeppe: wow
[16:39] <voidspace> rogpeppe: is this from another aws account or from *anywhere*?
[16:39] <rogpeppe> voidspace: ah!
[16:39] <rogpeppe> voidspace: i can't ping but i can connect to 17070
[16:39] <rogpeppe> voidspace: phew
[16:39] <voidspace> :-)
[16:40] <rogpeppe> voidspace: so it's a simple(ish) fix - the cloudinit script can't always use the private ip address
[16:40] <rogpeppe> voidspace: (but it can't always use the public one either, marvellous :))
[16:41] <voidspace> rogpeppe: if it's different credentials it has to use the public, for same credentials it has to use the private?
[16:41] <rogpeppe> voidspace: i wonder how this manages to work ok on us-east
[16:41] <rogpeppe> voidspace: something like that... but the rules might be different for different providers
[16:41] <voidspace> yep
[16:41] <voidspace> and it depends on the controller provider and the model provider combo
[16:42] <rogpeppe> voidspace: really it should probably do what the Go logic does and try all addresses
[16:42] <natefinch> voidspace: I take it we only support maas 1.9+ now?
[16:42] <voidspace> natefinch: yes, we've already dropped 1.8 support on master
[16:42] <voidspace> rogpeppe: yes - trying all of them seems like the only sane option
[16:42] <rogpeppe> voidspace: difficult to do in a shell script
[16:43] <natefinch> voidspace:  cool, I love dropping legacy support
[16:43] <voidspace> rogpeppe: right
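The "try all addresses" behaviour rogpeppe means is straightforward on the Go side; a minimal sketch (illustrative, not the actual juju dialer):

    package dial

    import (
        "errors"
        "net"
        "time"
    )

    // dialAny attempts each candidate address in turn and returns the
    // first connection that succeeds, so a machine that cannot reach the
    // private address still finds the controller via the public one.
    func dialAny(addrs []string, timeout time.Duration) (net.Conn, error) {
        lastErr := errors.New("no addresses to dial")
        for _, addr := range addrs {
            conn, err := net.DialTimeout("tcp", addr, timeout)
            if err == nil {
                return conn, nil
            }
            lastErr = err
        }
        return nil, lastErr
    }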
[16:44] <voidspace> natefinch: yeah, we cut out about 1300 lines of code from the maas provider (and tests)
[16:44] <mgz> do we actually aim to support cross-region controllers?
[16:46] <mup> Bug #1566414 opened: juju block storage on ec2 does not default to ebs-volumes <juju-core:New> <https://launchpad.net/bugs/1566414>
[16:46] <mup> Bug #1566420 opened: lxd doesn't provision instances on first bootstrap in new xenial image <juju-core:Triaged> <https://launchpad.net/bugs/1566420>
[16:51] <cherylj> For anyone else hitting issues where kill-controller hangs in a loop trying to destroy resources - I've opened up bug #1566426
[16:51] <mup> Bug #1566426: kill-controller should always work to bring down a controller <juju-release-support> <kill-controller> <juju-core:Triaged> <https://launchpad.net/bugs/1566426>
[16:51] <natefinch> babbageclunk: for http://reviews.vapour.ws/r/4425/ you have a shipit
[16:54] <voidspace> natefinch: yay, thanks
[16:54] <voidspace> natefinch: subsequent PRs include earlier ones I'm afraid as we're building incrementally.
[16:57] <rogpeppe> voidspace: https://bugs.launchpad.net/juju-core/+bug/1566431
[16:57] <mup> Bug #1566431: cloud-init cannot always use private ip address to fetch tools (ec2 provider) <juju-core:New> <https://launchpad.net/bugs/1566431>
[16:57] <voidspace> rogpeppe: thanks
[16:59] <natefinch> voidspace: I started to notice.  understandable
[17:06] <natefinch> babbageclunk: lgtm on https://github.com/juju/juju/pull/4995
[17:07] <mup> Bug #1566426 opened: kill-controller should always work to bring down a controller <juju-release-support> <kill-controller> <juju-core:Triaged> <https://launchpad.net/bugs/1566426>
[17:07] <mup> Bug #1566431 opened: cloud-init cannot always use private ip address to fetch tools (ec2 provider) <juju-core:New> <https://launchpad.net/bugs/1566431>
[17:17] <voidspace> natefinch: thanks nate
[17:40] <mgz> katco: yeah, the keystone 3 changes are more likely
[17:40] <mgz> katco: sorry I didn't have more time for narrowing down the specifics last week, was stuck in packaging head space
[17:40] <katco> mgz: no worries at all
[17:41] <mgz> katco: I'd suggest turning on goose debugging and looking at the output you get when requesting a token from the lcy02 v2 versus v3 endpoints
[17:41] <mgz> it's likely a configuration issue with canonistack that change is exposing
[17:41] <mgz> maybe we should not default to v3, or have some fallback logic, or...
[17:42] <mgz> right, cafe time is up, later all
[17:42] <katco> mgz: that's where i was headed: probably improperly configured canonistack. want to pin it down though
[17:42] <mgz> katco: the keystone v2 output for getting a token looked fine to me, had all the bits
[17:42] <katco> mgz: i verified it is attempting to use the v3 endpoint
[17:42] <katco> mgz: check out the links section: https://keystone.canonistack.canonical.com/v3/
[17:43] <mgz> katco: can probably dump serviceURLs at the juju level as well
[17:43] <katco> mgz: looks suspect
[17:43]  * mgz really quits
[17:43] <katco> :)
[18:01] <mup> Bug #1566450 opened: Juju claims not authorized for LXD <bootstrap> <ci> <lxd> <juju-core:Incomplete> <juju-core maas2:Triaged> <https://launchpad.net/bugs/1566450>
[18:01] <mup> Bug #1566452 opened: Win client cannot talk to Ubuntu controller <api> <ci> <windows> <juju-core:Incomplete> <https://launchpad.net/bugs/1566452>
[18:12] <perrito666> anyone getting lxd failing to bootstrap, waiting for address?
[18:12] <cmars> perrito666, if you lxc exec into the container, is it stuck at mountall?
[18:13] <cmars> basically, this: https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1555760
[18:13] <mup> Bug #1555760:  Too many levels of symbolic links /proc/sys/fs/binfmt_misc  <binfmt-support (Ubuntu):Confirmed> <systemd (Ubuntu):Incomplete> <https://launchpad.net/bugs/1555760>
[18:14] <cmars> that'll hang a bootstrap
[18:18] <perrito666> cmars: lxc  list marks it as running
[18:19] <cmars> perrito666, yeah, it'd be running, but if you exec bash in it, if there's no sshd running, check /var/log/upstart/mountall.log
[18:19] <cmars> if there's that binfmt_misc error at the tail end of the log, its the systemd bug
[18:19] <perrito666> aghh juju killed it, let me try again
[18:20] <perrito666> cmars: tx
[18:23] <perrito666> mm, not that
[18:23] <perrito666> something might be dirty in my system
[18:23] <tvansteenburgh> hey guys, at what point does a newly created controller/model get written to ~/.local/share/juju/models/cache.yaml
[18:25] <mup> Bug #1559277 changed: cannot set initial hosted model constraints: i/o timeout <bootstrap> <ci> <juju-core:Invalid> <juju-core admin-controller-model:Fix Released> <https://launchpad.net/bugs/1559277>
[18:31] <tvansteenburgh> asking b/c i just upgraded to beta3, i've bootstrapped a controller, created a model, and deployed a charm, and yet this controller/model don't exist in my cache.yaml
[18:31] <voidspace> tvansteenburgh: it's probably now in models.yaml
[18:31] <voidspace> tvansteenburgh: I'm not even sure cache.yaml exists now
[18:31]  * tvansteenburgh looks
[18:31]  * voidspace is not really here
[18:32] <voidspace> gotta go o/
[18:33] <tvansteenburgh> hrm, well it is in models.yaml, but the parts i need aren't, specifically api-endpoints and admin-secret
[18:34] <tvansteenburgh> thanks anyway voidspace
[18:36] <voidspace> tvansteenburgh: there's also controllers.yaml
[18:36]  * voidspace is still not really here
[18:38] <tvansteenburgh> aha, between that and accounts.yaml, i can get what i need, thanks voidspace, you were not here when i needed you
[18:38] <natefinch> lol
[19:00] <cherylj> tych0: just fyi - your "better lxd configuration" PR merged.  (In case you didn't see it yet)
[19:01] <tych0> cool
[19:01] <tych0> thanks, should have another one coming shortly
[19:31] <natefinch> huzzah for unit tests finding bugs
[19:40] <natefinch> oh ho... all tests pass.  Finally.
[19:44] <marcoceppi> natefinch: is there anyway to get the name of a controller from the API?
[19:45] <natefinch> marcoceppi: buh
[19:45] <natefinch> thumper: ^ ?
[19:46] <thumper> um...
[19:46] <thumper> no
[19:46] <thumper> marcoceppi: there are two names :)
[19:46]  * thumper thinks
[19:46] <thumper> no, just one
[19:46] <thumper> what the user called it
[19:46] <thumper> the controller model is always called "admin" I believe
[19:46] <thumper> or something like that
[19:47] <marcoceppi> thumper: sure, so we can get model names and UUIDs, we can get controller uuid, can I get controller name from api?
[19:48] <thumper> if you have access to the controller model, it will show up in your list
[19:48] <thumper> however the "name" of the controller is what you called it
[19:48] <thumper> the controller name is "admin"
[19:48] <thumper> that is the name of the controller model
[19:48] <thumper> the name you gave it is what you use in switch
[19:49] <thumper> or the command line
[19:49] <thumper> AFAICT
[19:49] <natefinch> thumper: so do we store the name you gave it in mongo, or is that purely a locally stored value that maps to a UUID that is stored in mongo?
[19:49] <marcoceppi> thumper: `juju bootstrap fart lxd` gives me admin and default, if I go to another machine and use the controller API url and password
[19:49] <marcoceppi> thumper: how would I get "fart" ?
[19:50] <thumper> marcoceppi: I'm not sure you could
[19:50] <thumper> natefinch: correct, the name in mongo is "admin"
[19:50] <marcoceppi> thumper: is name only stored locally?
[19:50] <thumper> the name on marcoceppi's machine is "fart"
[19:50] <thumper> marcoceppi: I think so, yes
[19:50] <marcoceppi> thumper: so, how will jaas do this?
[19:50] <natefinch> marcoceppi: the name you gave it is just an alias for the UUID, effectively, an alias stored locally on the machine from which you bootstrapped it
[19:50] <thumper> marcoceppi: when you use a controler, or login in, you give it a name
[19:51] <marcoceppi> thumper natefinch can we like, put the controller name in the controller?
[19:51] <thumper> no
[19:51] <marcoceppi> please?
[19:51] <thumper> no
[19:52] <thumper> there was explicit direction given to call it "admin"
[19:52] <natefinch> marcoceppi: It's definitely very much not up to me :)
[19:52] <thumper> the model name that is
[19:52] <marcoceppi> thumper: but I don't want the model name
[19:52] <marcoceppi> I want the name given to the controller running all the models
[19:52] <thumper> there is no controller name
[19:52] <thumper> just what you call it
[19:52] <marcoceppi> but, fart, is a name
[19:52] <thumper> no
[19:52] <thumper> it is what you called it
[19:52] <thumper> it is your alias
[19:52] <marcoceppi> I brought it into this world
[19:52] <marcoceppi> and I want it to proudly know its name
[19:52] <natefinch> it could be a name.. we choose not to store that value in the controller
[19:53] <thumper> ok... well, right now, there is no way to do this
[19:53] <marcoceppi> right
[19:53] <marcoceppi> I want that name in there
[19:53] <thumper> and it won't be in 2.0
[19:53] <thumper> not enough time
[19:53] <marcoceppi> this really fucks up a bunch of stuff we're trying to do
[19:53] <marcoceppi> because no one cares about a stupid uuid
[19:53] <thumper> :)
[19:53] <marcoceppi> and I can't distinguish two controllers from each other
[19:53] <thumper> how about you email the dev list with what you are really trying to do
[19:53] <natefinch> marcoceppi: how do you pass around credentials?
[19:53] <thumper> and we'll see what solution we can come up with
[19:53] <natefinch> marcoceppi: just pass around the name with the credentials
[19:54] <thumper> we'd get more eyes on it then
[19:54] <marcoceppi> so if I have two controllers, and each controller has a "benchmark" model, and I have data from all of those in a central repo, I either show duplicate model names or disambiguate with <controller_name>:model
[19:54] <marcoceppi> but now controller_name is UUID
[19:54] <marcoceppi> and I have to punch my user in the face with that
[19:54] <marcoceppi> natefinch: the credentials are being set on login, like with juju-gui
[19:55]  * marcoceppi emails the list
[19:55] <thumper> marcoceppi: yes
[19:55] <thumper> marcoceppi: we currently don't model a controller name
[19:55] <natefinch> marcoceppi: but wherever you're storing the credentials and IP address of the controller, you could store the name of the controller that goes with the IP address, couldn't you?  I know it's not ideal.
[19:56] <marcoceppi> thumper: it seems like we could, since we have a controller_uuid, just add controller_name since - you know - mongodb doesn't have strong data structs
[19:56] <thumper> marcoceppi: but we do
[19:56] <marcoceppi> natefinch: we get the controller ip the same way the gui does - it's deployed in the controller
[19:56] <marcoceppi> natefinch: all we need is username and password, like the gui
[19:56] <marcoceppi> if I ask you to name the controller each time it seems silly - they already did that when they created the controller
[19:57] <thumper> marcoceppi: but the controller name is entirely at the discretion of the creator
[19:57] <thumper> marcoceppi: I have a controller called "stuff" and so does natefinch
[19:57] <thumper> now what?
[19:57] <thumper> both have models called "benchmarks"
[19:58] <thumper> you need to disambiguate through user not just name
[19:58] <marcoceppi> thumper: we will
[19:58] <marcoceppi> thumper: in the saas endpoint we're building
[19:58] <marcoceppi> it's user - controller - model
[19:58] <thumper> ok
[19:58] <marcoceppi> where users can only see the controllers they've submitted benchmark data for
[19:58] <thumper> but I also think that the user should be able to give their controller a name you show
[19:58] <thumper> not necessarily the name that they called it when they created it
[19:59] <thumper> I don't personally see any problem of asking them for a name to show it as when they register their controller
[19:59] <marcoceppi> thumper: I see it as a UX flub, because if this is automated we may never ask them
[20:00]  * thumper shrugs
[20:00] <marcoceppi> thumper: also, models have names, and they're stored in the environment, and they're discretionary and created by the user
[20:00] <marcoceppi> thumper: we store them, and have them, I'd just like the same for a controller
[20:00] <thumper> they are also non-writable
[20:00] <marcoceppi> non-writable?
[20:00] <thumper> can't change
[20:00] <thumper> hysterical raisins
[20:00] <thumper> no real reason for it
[20:00] <marcoceppi> can't change the controller name either?
[20:00] <thumper> but the name used to be used for cloud resources
[20:01] <thumper> what controller name?
[20:01]  * thumper chuckles
[20:01] <marcoceppi> I will fly to Australia
[20:01] <marcoceppi> so I can laugh at you from across the water
[20:01] <thumper> ha
[20:01] <natefinch> lol
[20:01] <marcoceppi> then curl up and cry as all the animals eat me
[20:01] <natefinch> marcoceppi: FWIW, I agree with you.
[20:02] <thumper> marcoceppi: there is no reason we couldn't, except time and effort
[20:02] <thumper> but we currently don't
[20:03] <marcoceppi> this whole conversation made me regret asking
[20:03] <thumper> sorry
[20:57] <marcoceppi> how do I get information about the cloud of a model/controller?
[20:57] <marcoceppi> like name, region, etc
[21:02] <mup> Bug #1566531 opened: Instances are left behind testing Juju 2 <ci> <destroy-environment> <ec2-provider> <jujuqa> <juju-ci-tools:Triaged> <juju-core:Triaged> <https://launchpad.net/bugs/1566531>
[21:06] <perrito666> ok something new is broken :p juju is looking for lxcbr0 and I have lxdbr0
[21:14] <marcoceppi> thumper: how about getting the cloud name from the api?
[21:15] <marcoceppi> I've got provider-type, but that's ec2, etc, I want aws or in the case of openstack, the name of the openstack cloud the user created
[21:24] <cherylj> marcoceppi: juju list-clouds
[21:24] <marcoceppi> cherylj: from the api
[21:24] <cherylj> well so sorry if I didn't like read your whole question and stuff
[21:24] <cherylj> heh
[21:24] <cherylj> let me look
[21:24] <cherylj> marcoceppi: wait
[21:24] <cherylj> that wouldn't be an api call
[21:24] <marcoceppi> cherylj: so we don't store it.
[21:24] <cherylj> that's strictly from a local system
[21:25] <marcoceppi> grrrrrrrrrrrrrrrr
[21:25] <cherylj> no, it's all local in your clouds.yaml
[21:25] <marcoceppi> no single source of truths for things is really grinding me today.
[21:25] <cherylj> marcoceppi: except the public clouds
[21:25] <cherylj> that's stored in streams?  I think?
[21:25] <cherylj> so it can be updated
[21:26] <marcoceppi> cherylj: but how do i tell the name of the cloud for a deployed controller/model/environment
[21:26] <cherylj> marcoceppi: let me see if I can figure that out for you
[21:32] <mup> Bug #1533431 changed: Bootstrap fails inexplicably with LXD local provider  <2.0-count> <docteam> <juju-release-support> <juju-core:Invalid> <https://launchpad.net/bugs/1533431>
[21:35] <davecheney> mwhudson: http://seclists.org/oss-sec/2016/q2/11
[21:40] <mwhudson> davecheney: oh good!
[21:41] <anastasiamac> marcoceppi: at cli, running `juju show-controller` will show you bootstrp config including cloud name like 'aws'
[21:41] <anastasiamac> marcoceppi: at least master tip will
[21:41] <mwhudson> davecheney: any idea on timelines?
[21:41] <marcoceppi> anastasiamac: cool, but I really need to get this from the API
[21:42] <davecheney> mwhudson: do you know jason ?
[21:42] <davecheney> i could ask him
[21:42] <davecheney> but i could just introduce you and you could ask him yourself
[21:42] <davecheney> I'd expect before the end of this US week
[21:43] <mwhudson> we've emailed a little i guess, you're right i should just ask
[21:44] <anastasiamac> marcoceppi: afaik, there is no api... it's all client-side in a file... cli effectively just parses local file(s)
[21:47] <thumper> ugh...
[21:47] <thumper> in trying to use my new API I've realised more changes I need to make...
[21:47] <thumper> fooey
[22:05] <mup> Bug #1566545 opened: max log age should be exposed as a configuration option <sts> <juju-core:New> <https://launchpad.net/bugs/1566545>
[22:50] <thumper> ugh...
[22:53]  * thumper needs to write tests... many tests
[23:08] <menn0> thumper: can I have a quick hangout with you?
[23:08] <thumper> yep
[23:09] <menn0> thumper: 1:1
[23:16] <axw> redir: standup?
[23:24] <redir> woops
[23:24] <redir> still there axw?
[23:24] <axw> redir: yup
[23:37] <redir> perrito666: the rename workaround works perfect thanks
[23:37] <perrito666> redir: glad to help
[23:37] <perrito666> see you all, good night