[00:39] <alexisb> wallyworld, looks like curtis did open a bug: https://bugs.launchpad.net/juju/+bug/1629985
[00:39] <mup> Bug #1629985: TestNewModelConfig can't connect to the local LXD server (lxd 2.3) <ci> <lxd> <regression> <unit-tests> <xenial> <yakkety> <juju:Triaged by wallyworld> <https://launchpad.net/bugs/1629985>
[00:39] <alexisb> I assigned it to you
[00:39] <wallyworld> ok
[00:39] <wallyworld> alexisb: but it should be assigned to tycho
[00:39] <wallyworld> as it is his pr
[00:40] <wallyworld> i assume he will be the one landing the fix
[00:40] <alexisb> tych0, are you around to land your PR after wallyworld reviews?
[00:40] <wallyworld> i am testing it and also reviewing
[00:48] <thumper> https://github.com/juju/juju/pull/6372 anyone?
[00:48] <thumper> I'm off to give the dog a quick walk while the sun is shining
[01:04] <mup> Bug #1629113 opened: Juju deploy wordpress fails with MaaS <juju-core:New> <https://launchpad.net/bugs/1629113>
[02:09] <menn0> thumper: remind me how to see the running workers/goroutines in a controller agent?
[02:09] <menn0> using the developer-mode stuff
[02:10] <thumper> juju-goroutines
[02:11] <menn0> thumper: thanks
[03:29] <wallyworld> axw: howdy, got time for a hangout about credentials?
[03:30] <axw> wallyworld: yup, just give me a minute please
[03:30] <wallyworld> sure
[03:31] <axw> wallyworld: see you in 1:1?
[03:31] <wallyworld> yup
[04:37] <menn0> wallyworld: I *think* I've got a fix for the problem of Juju taking ages to recover when the primary controller node goes away
[04:37] <wallyworld> oh stop it, now you're just teasing
[04:39] <menn0> wallyworld: with the fix in place the cluster recovers (i.e. apiservers back up) in under 30s
[04:39] <menn0> and a good chunk of that time is mongodb arranging a new primary
[04:39] <wallyworld> wow, finally!
[04:39] <wallyworld> menn0: was it a mgo fix?
[04:39] <menn0> the fix is hacky at the moment so I won't be proposing it until tomorrow
[04:39] <menn0> nope
[04:40] <menn0> it's around HackLeadership and the handling of state in the apiserver
[04:40] <wallyworld> joy
[04:40] <menn0> the fix also means that the apiserver no longer needs its own copy of state
[04:40] <menn0> i'm going to help out at home for a bit now and tidy up the fix later on this evening
[04:40] <wallyworld> sgtm
[09:08] <babbageclunk> Anyone else using lxd 2.3?
[09:13] <babbageclunk> I'm getting weirdly slow juju bootstraps with it - not sure whether it's me or them.
[09:49] <babbageclunk> mwhudson: ping?
[09:49] <mwhudson> babbageclunk: mmf
[09:50]  * babbageclunk isn't sure whether that means you're ready to receive or not.
[09:50] <mwhudson> babbageclunk: it means i should be in bed but i'm not so be quick :)
[09:50] <babbageclunk> mwhudson: Ok! :)
[09:51] <babbageclunk> mwhudson: I'm trying to find a memory leak in Juju
[09:51] <babbageclunk> mwhudson: I found something that purports to view heapdumps, but it needs a patch to Go.
[09:52] <babbageclunk> mwhudson: I was looking at the patch before I tried building it myself and saw a TODO with your name on it.
[09:52] <mwhudson> babbageclunk: ok
[09:52] <mwhudson> heh
[09:52] <mwhudson> uh oh
[09:52] <mwhudson> where is it?
[09:53] <babbageclunk> mwhudson: So I thought, "Hey, I wonder whether he knows anything about viewing heap dumps."
[09:53] <babbageclunk> https://github.com/tombergan/goheapdump/blob/master/runtime.heapdump.go.patch
[09:53] <mwhudson> oh
[09:54] <babbageclunk> I've tried a couple of things so far but not managed to get anything to work.
[09:54] <mwhudson> that's boring, basically if you have a go program using shared libraries you only get part of the info
[09:55] <mwhudson> babbageclunk: sorry don't really know anything, i'm afraid
[09:55] <babbageclunk> ok, thanks anyway! Sorry to keep you up!
[09:56] <mwhudson> babbageclunk: i wonder why he hasn't sent that patch upstream
[09:56] <mwhudson> babbageclunk: no worries, sorry i couldn't help
[09:57] <babbageclunk> mwhudson: Well, I think it's something that's still in flux. Found it from here: https://github.com/golang/go/issues/16410
[09:58] <mwhudson> babbageclunk: heh i guess i got all the mails for that issue but just blanked on them :-)
[09:58]  * mwhudson goes to bed
[11:34] <perrito666> morning all
[11:34] <babbageclunk> perrito666: o/
[11:53] <mup> Bug #1630123 opened: OpenStack base 45 not being deployed with Juju GUI <juju-core:New> <juju-gui:New> <https://launchpad.net/bugs/1630123>
[12:27] <perrito666> how do I create a model with a different owner from the cli, does anyone know?
[12:50] <natefinch> cmars, rogpeppe1: I'm having trouble understanding the proper behavior of logout/login got a minute to talk?
[12:52] <cmars> natefinch, not at the moment. i may free at 10:30 central
[12:52] <natefinch> cmars: ok
[13:05] <rogpeppe1> natefinch: what's up?
[13:08] <natefinch> rogpeppe1: I'm confused as to the expected behavior of login/logout
[13:09] <natefinch> rogpeppe1: if I log out of a GCE environment that I created, is juju supposed to prompt for a password when I log back in?
[13:11] <rogpeppe1> natefinch: s/environment/model/ ?
[13:11] <rogpeppe1> natefinch: this is without an identity-url configured, right?
[13:11] <natefinch> rogpeppe1: yeah, I just bootstrapped with the normal defaults to google
[13:12] <rogpeppe1> natefinch: what did the logout command print?
[13:12] <natefinch> rogpeppe1: and yes, I guess it's logging into the model?  or the controller? I'm not sure
[13:13] <natefinch> rogpeppe1: $ juju login
[13:13] <natefinch> username: admin
[13:13] <natefinch> You are now logged in to "gce" as "admin@local".
[13:13] <rogpeppe1> natefinch: you always log into a controller, although you may choose an API connection specific to a model when you log in
[13:14] <rogpeppe1> natefinch: what did the logout command print?
[13:14] <natefinch> rogpeppe1: $ juju logout
[13:14] <natefinch> Logged out. You are still logged into 1 controller.
[13:14] <natefinch> (after changing my password)
[13:15] <natefinch> rogpeppe1: the bug I'm working on says I should be prompted for my password when I log back in, but I'm not sure when that's the case and when it's not, since I know there's some macaroon stuff...
[13:15] <natefinch> rogpeppe1: https://bugs.launchpad.net/bugs/1621375
[13:15] <mup> Bug #1621375: "juju logout" should clear cookies for the controller <juju:In Progress by natefinch> <https://launchpad.net/bugs/1621375>
[13:15] <rogpeppe1> natefinch: after logging out, what do you see in accounts.yaml ?
[13:15] <natefinch> rogpeppe1:   gce:
[13:15] <natefinch>     user: admin@local
[13:15] <natefinch>     last-known-access: superuser
[13:17] <rogpeppe1> natefinch: hmm, that's odd - it should have deleted the entry
[13:17] <rogpeppe1> natefinch: although it doesn't have any password, which is also odd
[13:17] <natefinch> rogpeppe1: where is the code that controls the password etc?  I can't seem to find it
[13:18] <rogpeppe1> natefinch: what does "juju switch" print?
[13:18] <rogpeppe1> natefinch: inside the jujuclient package
[13:18] <natefinch> rogpeppe1: at the time, gce was my default controller... I have since messed with things
[13:19]  * natefinch rebootstraps to do more testing
[13:19] <rogpeppe1> natefinch: so if you do: "juju switch gce; juju logout", so you still have gce entry in your accounts.yaml ?
[13:19] <rogpeppe1> natefinch: you shouldn't need to re-bootstrap
[13:19] <rogpeppe1> natefinch: this is a client-side problem AFAICS
[13:20] <natefinch> rogpeppe1: yes, I just couldn't find the client side code :)
[13:20] <natefinch> rogpeppe1: also, I wasn't clear on when it should prompt and when it should not, based on macaroons and all that jazz
[13:22] <natefinch> rogpeppe1: where's the code that produces the password prompt?  some naive greps under jujuclient isn't finding much
[13:25] <rogpeppe1> natefinch: after it's told you it's logged in, can you actually use the connection (e.g. use juju status, juju list-models, etc?)
[13:26] <natefinch> rogpeppe1: yep
[13:26] <natefinch> rogpeppe1: just tried it
[13:27] <rogpeppe1> natefinch: ISTR there's a "local login" macaroon created somewhere, and it looks as if logout is failing to delete it
[13:27] <rogpeppe1> natefinch: axw is the man to talk to
[13:28] <natefinch> rogpeppe1: k, thanks
[13:32] <rogpeppe1> natefinch: it's almost certainly in the cookie jar
[13:33] <rogpeppe1> natefinch: if you get logout to remove all cookies in the cookie jar associated with the controller, it'll probably work
[13:34] <rogpeppe1> natefinch: FWIW the password prompting is hooked up in modelcmd.NewAPIContext (environschema.v1/form is the thing that actually does the prompting)
[13:36] <rogpeppe1> natefinch: hmm, actually i'm not sure about that
[13:41] <alexisb> babbageclunk, ping
[13:45] <natefinch> rogpeppe1: ok, thanks, I'll look at that....
[13:46] <natefinch> rogpeppe1: totally would not expect a function called NewAPIContext to do login stuff
[13:49] <rogpeppe1> natefinch: actually, the user password stuff is done in api/authentication
[13:50] <rogpeppe1> natefinch: in Visitor.VisitWebPage
[13:51] <natefinch> rogpeppe1: yeah, I was just looking there.  Again.. VisitWebPage is not a function I would expect to prompt for a password via the command line :/
[13:51] <mup> Bug #1630123 changed: OpenStack base 45 not being deployed with Juju GUI <juju-gui:Invalid> <https://launchpad.net/bugs/1630123>
[13:51] <rogpeppe1> natefinch: yeah, history plays a role in that name
[13:52] <rogpeppe1> natefinch: i think "Interact" might be a better name really
[13:56] <rogpeppe1> natefinch: i'm going for lunch now
[13:57] <natefinch> rogpeppe1: kk
[13:57] <natefinch> rogpeppe1: thanks for the help
[13:57] <rogpeppe1> natefinch: np
[13:59] <voidspace> mgz: pingg
[13:59] <mgz> voidspace: yo
[14:00] <voidspace> mgz: I need help with vsphere and the company vpn
[14:00] <voidspace> mgz: ah, it's standup time
[14:00] <mgz> I'm in standup for fast bandwidth options too
[14:00] <voidspace> mgz: cool
[14:00] <mgz> happy to hang on after
[14:01] <perrito666> oh, look at that, you can't add-model for a user other than yourself when you have add-model
[14:01] <perrito666> rick_h_: alexisb-afk is that the intended behavior?
[14:01] <rick_h_> natefinch: ping for standup
[14:01] <rick_h_> perrito666: huh?
[14:01] <perrito666> rick_h_: let's say I grant you add-model
[14:01] <perrito666> you can juju add-model mymodel
[14:01] <perrito666> which makes you happy
[14:02] <perrito666> but you can not juju add-model --owner=perrito hismodel
[14:07] <perrito666> ok, I might get disconnected soon
[14:07] <perrito666> guess who lives where that giant blob of red is going http://www.smn.gov.ar/vmsr/general.php?dir=YVcxaFoyVnVaWE12WVhKblpXNTBhVzVoYzJWamRHOXlhWHBoWkdFdmFXNW1MM05q
[14:09] <alexisb-afk> perrito666, stay safe
[14:09] <perrito666> alexisb-afk: I have a roof :p
[14:09] <perrito666> but I don't have faith in my internet air link
[14:12] <perrito666> rick_h_: ping me after standup so we discuss this plz
[14:13] <rick_h_> perrito666: rgr
[15:06] <perrito666> rick_h_: dont forget me
[15:09] <babbageclunk> perrito666: mind talking through some architecture details with me while you wait for rick_h_? :)
[15:11] <perrito666> babbageclunk: I was thinking of lunching but I can spare a few mins
[15:11] <perrito666> what can I do for you?
[15:12] <rick_h_> perrito666: sorry, otp having fun
[15:12] <rick_h_> perrito666: will ping when free sorry
[15:13] <babbageclunk> perrito666: Thanks! I've tracked down a memory leak to us keeping State instances in the api server statePool even after they're destroyed and never closing them.
[15:13] <babbageclunk> perrito666: I mean, after the corresponding model is destroyed.
[15:14] <babbageclunk> perrito666: And I'm trying to work out how to go about closing them when the model is destroyed.
[15:17] <babbageclunk> perrito666: There doesn't seem to be any way that a state from the pool can be returned to the pool - can state objects be shared between  goroutines/concurrent requests?
[15:17] <perrito666> babbageclunk: doesn't state know best and figure out the model is gone?
[15:19] <perrito666> babbageclunk: point me to the code
[15:19] <babbageclunk> perrito666: Not as far as I can see - at least, the worker stopping done in State.Close() doesn't happen.
[15:19] <babbageclunk> perrito666: finding links...
[15:21] <babbageclunk> perrito666: This is the stuff that doesn't happen: https://github.com/juju/juju/blob/master/state/open.go#L522
[15:21] <babbageclunk> perrito666: state pool: https://github.com/juju/juju/blob/master/state/pool.go
[15:22] <babbageclunk> perrito666: Here's where it gets the state from the pool to serve a request: https://github.com/juju/juju/blob/master/apiserver/apiserver.go#L556
[15:22] <perrito666> so, close is actually never happening?
[15:23] <katco> babbageclunk: this is essentially a connection pool. so the connections to mongo won't be closed until close is called on the pool which (guessing here) will only happen when the process is torn down?
[15:23] <babbageclunk> perrito666: I mean, the server will close the pool if it gets shut down, and that will close the states.
[15:23] <katco> babbageclunk: i.e. it's ok that close isn't being called on the connections -- we want them to stay open
[15:24] <katco> babbageclunk: what is the memory leak you're tracking down?
[15:24] <perrito666> ill let you with katco for a moment as the oven is beeping
[15:24] <babbageclunk> katco: lots of add-models/destroy-models in a row.
[15:24] <babbageclunk> katco: bug 1625774
[15:24] <mup> Bug #1625774: memory leak after repeated model creation/destruction <eda> <oil> <oil-2.0> <uosci> <juju:In Progress by 2-xtian> <https://launchpad.net/bugs/1625774>
[15:24] <babbageclunk> perrito666: Thanks!
[15:25] <katco> babbageclunk: ah i see the logic bug then
[15:25] <babbageclunk> katco: It seems less like a connection pool and more like a cache.
[15:25] <katco> babbageclunk: it's a connection pool (a cache of connections ;p)
[15:26] <katco> babbageclunk: so close can only be called on the pool, not for a model
[15:26] <katco> babbageclunk: so when you do add/destroy model a bunch of times, you get a lot of connections cached, but connections for the model you destroy hang around for the life of the pool
[15:26] <babbageclunk> katco: :) Sure, but in others I've seen there's a concept of taking the connection out of the pool while it's in use and returning it.
[15:26] <babbageclunk> katco: That's right/
[15:27] <katco> babbageclunk: errr... i made a bad assumption. i wonder why we're not using sync.Pool for this
[15:27] <katco> babbageclunk: i suppose bc we want multiple people using 1 connection
[15:27] <babbageclunk> katco: Ok - that's the bit I wasn't sure about.
[15:28] <katco> babbageclunk: you are correct: this is a cache and not a pool
[15:28] <katco> babbageclunk: so basically all you need to do is to make sure that the code-path that destroys the model closes the connection from the cache and removes it
[15:28] <babbageclunk> katco: Ooh, I didn't know about sync.Pool. Neat.
[15:28] <katco> babbageclunk: yep
[15:29] <katco> babbageclunk: maybe not useful here. i suppose the value in the map here could be a sync.pool
[15:29] <babbageclunk> katco: Yeah, that's what I was thinking - the bit I wasn't sure about was making sure that others weren't using the state when I close it.
[15:31] <katco> babbageclunk: i imagine you'd need a refcount for that. i wonder if we could deduce a logically safe way though wherein a call to invalidate the cache is guaranteed to be the only consumer
[15:32] <katco> babbageclunk: i.e. if you destroy the model, you could create a watcher that waits until the model is destroyed, and then invalidate the cache knowing nothing else could be using it?
[15:32] <babbageclunk> katco: Yeah, that would make sense.
[15:32] <katco> babbageclunk: but i don't know the specifics. that's going to require a deep-dive
[15:33] <babbageclunk> katco: Also, maybe this is the source of some other weird bugs we see around destroy-model - old state objects still swimming in the pool after the model's gone away.
[15:33] <katco> babbageclunk: or the refcount with a drain op. dunno which is cleanest/logically most correct
[15:34] <natefinch> cmars: ping when you get the time about macaroons / logout etc
[15:34] <katco> babbageclunk: like what?
[15:35] <babbageclunk> katco: in the course of reproducing this I've gotten into states where I had models that would show up in list-models but couldn't be destroyed.
[15:35] <cmars> natefinch, i'm available now
[15:36] <katco> babbageclunk: =| i'm not sure open connections to state would cause that?
[15:37] <babbageclunk> katco: I'm not sure it would either, just wondering.
[15:37] <katco> babbageclunk: converge the code towards correctness and we'll get there ;)
[15:37] <babbageclunk> katco: :)
[15:38] <babbageclunk> katco: ok, so you think it's deliberate that the pool allows multiple requests to share states?
[15:38] <natefinch> cmars: ok - https://hangouts.google.com/hangouts/_/canonical.com/core?pli=1&authuser=2
[15:38] <katco> babbageclunk: probably; to limit the number of connections open. ironically to prevent what you're trying to solve
[15:39] <katco> babbageclunk: i know we had a go at reducing memory leaks, and this was probably part of it
[15:39] <katco> babbageclunk: if not, we should be using sync.Pool
[15:40] <katco> babbageclunk: looks like this is over a year old though
[15:40] <babbageclunk> katco: ok, doing some code archaeology to understand the history.
[15:44] <babbageclunk> katco: Thanks for the help! I'll probably have more questions soon. :)
[15:45] <katco> babbageclunk: hth! not an expert on this by any means, but looked like i understood what was happening
[15:46] <babbageclunk> katco: might pick Menno's brains about it too - he's all through the blame for it.
[15:46] <katco> babbageclunk: lol yep
[15:48] <rick_h_> voidspace: is https://bugs.launchpad.net/juju/+bug/1616098 related at all to your investigation?
[15:48] <mup> Bug #1616098: Juju 2.0 uses random IP for 'PUBLIC-ADDRESS' with MAAS 2.0 <4010> <cpec> <juju:Opinion by dimitern> <https://launchpad.net/bugs/1616098>
[16:04] <voidspace> rick_h_: looking
[16:06] <voidspace> rick_h_: I don't believe so
[16:06] <voidspace> rick_h_: the vsphere issue - using fe80:: as an address, which is a link-local address - is a fairly specific problem I think
[16:07] <voidspace> rick_h_: so that bug is a different one - partly impacted by my changes as addresses are now sorted (hence the "lower" address noted in Ante's latest comment)
[16:07] <voidspace> rick_h_: there's currently no space awareness in the public address picking logic
[16:08] <voidspace> rick_h_: but as we have to pick *one* address, it's not immediately clear to me how we should resolve that
[16:08] <voidspace> rick_h_: so bug 1616098 is definitely related to the address picking logic I've worked on, the vsphere one is different
[16:08] <mup> Bug #1616098: Juju 2.0 uses random IP for 'PUBLIC-ADDRESS' with MAAS 2.0 <4010> <cpec> <juju:Opinion by dimitern> <https://launchpad.net/bugs/1616098>
[16:09] <rick_h_> voidspace: k, ty
[16:10] <voidspace> rick_h_: the issue there is that the  machine has multiple public IPs, some of which are not routable, and the address picking logic just has to pick one
[16:10] <voidspace> rick_h_: that logic is currently space unaware - it needs to become space aware and pick an address from a space that the controller is in
[16:10] <rick_h_> voidspace: rgr, have to think on it. It's on my list of stuff we don't seem to have a firm rulebook for
[16:11] <voidspace> rick_h_: I think that rule "pick an IP from space the controller machine is in" works
[16:11] <voidspace> rick_h_: then the public IP is routable from the controller, so we can ssh to it
[16:13] <voidspace> *a space
[16:42] <katco> can someone explain the difference of these two to me? https://github.com/juju/names/blob/v2/application.go#L13 https://github.com/juju/charm/blob/v6-unstable/url.go#L49
[16:43] <rick_h_> the one diff is the firm ending $ on the second.
[16:44] <rick_h_> though not sure what effect that would have
[16:44] <rick_h_> in practice
[16:44] <katco> rick_h_: i'm not looking for a comparison of regexes :p
[16:44] <rick_h_> katco: heh, ok then have to be more clear on "explain the diff" :)
[16:44] <katco> rick_h_: aren't the two concepts equivalent? do we have 2 checks in different libs?
[16:46] <katco> rick_h_: i think "name" in the charm lib is the same as "app" in our names lib? i think we should be centralizing checks in our names lib?
[16:48] <rick_h_> katco: there is the work to make sure they're compatible. I'm not sure why names is not a dep in charm to share the logic.
[16:48] <katco> rick_h_: it is actually a dep... not sure why it isn't just using the logic though
[16:49] <katco> rick_h_: that makes it all the more puzzling =/
[16:49] <rick_h_> katco: ok, then even more "don't know"
[16:49] <katco> rick_h_: haha
[16:49] <katco> rick_h_: would you consider "series" something else that should be validated in names?
[16:49] <rick_h_> katco: I'd suggest asking rogpeppe1 ^ for any justification and if none then carry on
[16:49]  * rogpeppe1 looks
[16:49] <rick_h_> katco: hmm, maybe as far as reserved words go, but not sure how easy series are to pull in like that in a light weight way
[16:50] <katco> rogpeppe1: ta
[16:50] <katco> rick_h_: https://github.com/juju/charm/blob/v6-unstable/url.go#L48
[16:50] <katco> rick_h_: i would just look to pull this same validation logic over
[16:50] <rick_h_> katco: hmm, that's interesting. Since series are hard coded a regex to valid them seems interseting
[16:51] <katco> rick_h_: juju proper does more validation against a fixed list. something else we might want to pull over into names
[16:51] <katco> rick_h_: i think we go in circles on this stuff bc there's no clarity across the team at a high-level as to what libs are for. lots and lots of reimplementation of thing
[16:51] <rogpeppe1> katco: they match identical texts
[16:52] <katco> rogpeppe1: sorry, can you restate?
[16:52] <rogpeppe1> katco: the names one has a few non-grouping qualifiers
[16:52] <rogpeppe1> katco: they're the same expression
[16:52] <katco> rogpeppe1: right, so can we nix the one in charm and consolidate to names?
[16:52] <rogpeppe1> katco: but the names one has ?: qualifiers to prevent subgroup matching
[16:52] <katco> rogpeppe1: also, any opposition to moving the series check over?
[16:53] <rogpeppe1> katco: i think it's probably fine for the charm package to use names.ApplicationSnippet instead of validName
[16:54] <rogpeppe1> katco: the series check?
[16:54] <katco> rogpeppe1: see the regex in charm right above the name regex
[16:54] <katco> rogpeppe1: https://github.com/juju/charm/blob/v6-unstable/url.go#L48
[16:55] <rogpeppe1> katco: does names need to know about series?
[16:55] <katco> rogpeppe1: same question for schema. can we centralize all correctness logic to the names lib?
[16:56] <katco> rogpeppe1: this is my fundamental question: is the names lib where we centralize correctness checks?
[16:56] <rogpeppe1> katco: i'm wary of moving all "name" logic to juju/names. names are not all the same.
[16:57] <katco> rogpeppe1: can you state what you feel the purpose of the name lib is?
[16:57] <rogpeppe1> katco: i think that moving all checks for all name-like things to juju/names is a recipe for mixed concerns.
[16:57] <rogpeppe1> katco: it's about names of juju entities in a juju controller
[16:57] <katco> rogpeppe1: not names of charms themselves?
[16:58] <rogpeppe1> katco: i think that charm urls could be considered separate, yes
[16:58] <rogpeppe1> katco: where do we stop? do we put all the charm URL parsing code in names.v2 too?
[16:59] <katco> rogpeppe1: if that's the case, then i think it's correct that the two are separate. how the controller spells charm entities is only coincidentally (and maybe not permanently) the same
[16:59] <rogpeppe1> katco: yeah, actually, that's right
[16:59] <katco> rogpeppe1: i wonder how we can make that justification clear...
[17:00] <rogpeppe1> katco: an application name really is different from the name of a charm in the charm store
[17:00] <rogpeppe1> katco: even though the default application name is taken from the charm name
[17:01] <katco> rogpeppe1: yeah makes sense
[17:01] <katco> rogpeppe1: ta for the clarity
[17:01] <rogpeppe1> katco: tbh i still don't mind the charm package using ApplicationName for the constant
[17:02] <katco> rogpeppe1: they're not logically coupled, so imo, i don't think it makes sense to artificially do so
[17:03] <rogpeppe1> katco: fair enough, i pretty much concur
[17:33]  * rick_h_ grabs lunchables
[17:46]  * perrito666 learns that he must close the huge awning of his garage before the storm
[17:46] <katco> perrito666: stay safe down there
[17:46] <perrito666> I am safe, a bit wet though
[17:47] <perrito666> I have one of these things http://mla-d2-p.mlstatic.com/toldo-vertical-para-balcon-toldos-de-lona-para-balcones-716221-MLA20740247242_052016-F.jpg?square=false
[17:47] <perrito666> but its one big piece, a good 4m width
[17:47] <perrito666> and was half deployed when the wind storm started
[17:48] <perrito666> good news is I closed it before it was ripped apart
[17:48] <perrito666> bad news is I am soaking wet
[17:54] <perrito666> https://pbs.twimg.com/media/Ct8R1BPWAAAsAxb.jpg:large   <-- as you can see, the water level is at the first step of the house :p
[18:00] <natefinch> yikes
[18:00] <perrito666> no worries, I guessed this would happen so I built the house 1m above ground
[18:01] <perrito666> I am a bit surprised I still have internet though
[18:04] <natefinch> I live on a hill like 10m above the nearest low point.  If my house ever floods, water in the basement will be the least of my worries
[18:05] <perrito666> build an ark
[18:06] <natefinch> pretty much
[19:16]  * rick_h_ goes to get boy from school
[19:59] <perrito666> this city cannot stand rain http://www.lavoz.com.ar/ciudadanos/tormenta-en-cordoba-con-granizo-y-calles-anegadas?cx_level=catastrofe
[19:59] <perrito666> that is 2 hrs of rain
[20:02] <natefinch> yikes
[20:03] <natefinch> gah, who decided that SetCookies(url, nil) would be a NOOP instead of just deleting all cookies?
[20:07] <alexisb> perrito666, you around ?
[20:08] <perrito666> alexisb: sorry, here. was trying to keep a few tiles from flying off my roof
[20:08] <alexisb> o boy
[20:08] <alexisb> perrito666, do you need to be left alone?
[20:08] <perrito666> alexisb: no, I decided that it's the roofer's problem, not mine
[20:08] <natefinch> pretty sure there's only one answer to that question ;)
[20:09] <perrito666> I am not climbing to the roof in the middle of a storm
[20:09] <alexisb> hey perrito666 can you help tych0 with a question
[20:09] <perrito666> sure I can
[20:09] <alexisb> tych0, has a failed merge due to dep updates
[20:10] <tych0> see https://github.com/juju/juju/pull/6367
[20:10] <perrito666> tych0: sorry to hear that
[20:10] <tych0> i'm trying to untangle it now
[20:10] <alexisb> perrito666, do you know what he needs to do to move things forward?
[20:10] <tych0> but supposing i can't, what's the policy on adding new deps?
[20:11] <perrito666> tych0: afaik, adding new deps needs to go through the tech board
[20:11] <rick_h_> tych0: they need to be reviewed. what's the new dep?
[20:11]  * perrito666 reads the code
[20:11] <rick_h_> tych0: it needs license review/etc
[20:12] <perrito666> tych0: I dont see you adding a new dep
[20:12] <tych0> rick_h_: https://github.com/inconshreveable/log15
[20:13] <perrito666> tych0: where is that being added?
[20:14] <rick_h_> tych0: why that dep vs the standard logging tools in core?
[20:14] <rick_h_> tych0: that might cause issues
[20:14] <tych0> ok, i'm hearing "no" then :)
[20:14] <perrito666> that will cause issues most likely
[20:14] <perrito666> especially since we are doing the logs to mongo thing
[20:15] <rick_h_> perrito666: tych0 yea, and audit logs/etc. a new logging tool will make things complicated
[20:15] <perrito666> tych0: also, I dont see on your code where is this added, I am confused
[20:15] <tych0> perrito666: it's picked up in the github.com/lxc/lxd hash update
[20:16] <rick_h_> tych0: oic, a dep of a dep?
[20:16] <tych0> i didn't add it to juju explicitly, lxd added a dep mostly accidentally
[20:16] <perrito666> .... that is different
[20:16] <rick_h_> wallyworld: ^
[20:16] <rick_h_> wallyworld: has opinions and is on the board and such
[20:18] <alexisb> rick_h_, ian is hopefully sleeping
[20:18] <perrito666> alexisb: nonsense, ian doesnt sleep
[20:19] <rick_h_> alexisb: ah, was thinking the evening call was closer than it is
[20:19]  * rick_h_ actually looks at clock
[20:20] <alexisb> do we need to add the dep of the dep directly to juju
[20:21] <tych0> (it's also possible that none of this is necessary if i can untangle it)
[20:21] <rick_h_> alexisb: i hope it's not that bad, but it probably still does need license review and such
[20:23] <alexisb> rick_h_, ack
[20:23] <babbageclunk> alexisb: Is Menno around today? His name's on the StatePool which is what's hanging onto States.
[20:23] <alexisb> babbageclunk, he is
[20:23] <babbageclunk> cool cool
[20:24] <alexisb> but it will be another 30 minutes before his start of day
[20:24] <alexisb> babbageclunk, ^^
[20:25] <perrito666> rick_h_: ideally lxc people should have checked that, and if we are license-compatible with them that should be enough; if they did not they might have tainted the code and are in deep s***t
[20:25] <perrito666> tych0: do you have more info about that?
[20:26] <perrito666> babbageclunk: heavy summoning skills
[20:27] <babbageclunk> perrito666: ha ha
[20:27] <tych0> perrito666: about what?
[20:27] <rick_h_> perrito666: it's apache license  should be good
[20:27] <babbageclunk> tych0: I invoked menn0.
[20:27] <perrito666> tych0: what rick_h_ said
[20:27] <babbageclunk> tych0: duh ignore me
[20:27] <menn0> wat?
[20:28] <tych0> perrito666: yes, we check licenses :)
[20:28] <babbageclunk> menn0: morning!
[20:28] <menn0> babbageclunk: howdy :)
[20:28] <thumper> o/ babbageclunk
[20:28] <menn0> thumper: o/
[20:28] <babbageclunk> hey thumper
[20:29] <babbageclunk> menn0: Actually, just need to walk Alice's sister back to her hotel - can I grab you in 15 mins?
[20:29] <menn0> babbageclunk: sure
[20:30] <babbageclunk> menn0: For context, I'm chasing bug 1625774 and it turns out it's state.StatePool hanging onto State instances. So I think it needs to grow a way to release them for models that have gone away.
[20:30] <mup> Bug #1625774: memory leak after repeated model creation/destruction <eda> <oil> <oil-2.0> <uosci> <juju:In Progress by 2-xtian> <https://launchpad.net/bugs/1625774>
[20:31] <babbageclunk> menn0: So I wanted to run my plan past you since you seemed to know about it.
[20:31] <menn0> babbageclunk: I can see how statepool could be a problem in that case. let's talk when you're back.
[20:32] <babbageclunk> menn0: Cool thx
[20:36] <katco> ohai menn0
[20:36] <menn0> katco: howdy
[20:39] <natefinch> quick review anyone? https://github.com/juju/persistent-cookiejar/pull/16
[20:45] <katco> natefinch: i'll review yours if you give me a quick review as well: https://github.com/juju/juju/pull/6378
[20:51] <wallyworld> rick_h_: alexisb: tych0: our strong preference is to drop that other logging dep and migrate lxd to loggo (which now supports colour, which was why it wasn't used originally IIANM)
[20:51] <tych0> and syslog support
[20:51] <tych0> but that's a bigger job than we can really do :)
[20:51] <tych0> anyway, i think i've unwound it
[21:14] <babbageclunk> menn0: back (turns out Alice and Jayne needed to chat for a bit first). Now good?
[21:15] <menn0> babbageclunk: yep, give me a few secs, can you set up a hangout/bluejeans?
[21:15] <babbageclunk> don't know about bluejeans but hangout yes
[21:16] <babbageclunk> menn0: https://hangouts.google.com/hangouts/_/canonical.com/xtian
[21:22] <alexisb> wallyworld, failure!
[21:22]  * alexisb grumbles and adds a tag to all the bugs
[21:27] <wallyworld> alexisb: curtis will know better than me, we can ask him at standup
[21:35] <wallyworld> alexisb: release call?
[21:35] <alexisb> wallyworld, can you invite rick to the HO
[21:35] <wallyworld> yep
[21:36] <alexisb> wallyworld, running late
[21:43] <thumper> wallyworld: https://github.com/juju/juju/pull/6360 and https://github.com/juju/juju/pull/6372
[21:43] <wallyworld> thumper: will look after release call
[21:54] <katco> alexisb: rick_h_: are we still expecting a oct. 7th release?
[22:03] <wallyworld> thumper: we should probably check with marco or someone to ensure the --model-default syntax and behaviour is what they want
[22:03] <thumper> marcoceppi: ping
[22:06] <wallyworld> thumper: partly because andrew mentioned that there could be an argument that what you use with --config could reasonably, in most cases, be a model default. but my argument was that that's fine, except for that one case where it's not
[22:06] <alexisb> katco no, the 13th
[22:06] <marcoceppi> thumper wallyworld pong
[22:07] <thumper> marcoceppi: bug 1628999 and the solution: https://github.com/juju/juju/pull/6360
[22:07] <mup> Bug #1628999: Openstack network selection is not passed from the controller to the models <canonical-is> <juju:In Progress by thumper> <https://launchpad.net/bugs/1628999>
[22:12] <marcoceppi> thumper: so --model-defaults is applied to controller as well?
[22:12] <thumper> marcoceppi: yes
[22:12] <marcoceppi> unless overridden in --config
[22:12] <thumper> yes
[22:12] <marcoceppi> thumper: well that sounds fucking fantastic
[22:12] <marcoceppi> thumper: helps with a problem we had in Best Buy deployment of setting both --config on bootstrap, then model-config later
[22:14] <marcoceppi> thumper: the only thing I think worth considering is changing --config to --model-config to match the rest of the verbiage, but that's a nit (at best)
[22:31] <wallyworld> marcoceppi: you can also set up model defaults in clouds.yaml - this new work brings that capability directly to the CLI. there are use cases for both
[22:31] <wallyworld> marcoceppi: in clouds.yaml, that's the only way to set up region defaults
[22:32] <wallyworld> you still can't do that on the CLI
[22:47] <axw> rogpeppe1 natefinch: there's a bug open about cookies not being deleted on logout
[23:07] <wallyworld> axw: can you pop in a minute early to standup?