[00:03] <redir> perrito666: which bug?
[00:23] <redir> axw: got a minute?
[00:46] <mup> Bug #1611159 opened: model not successfully destroyed, and error on "juju list-models" <oil> <oil-2.0> <juju-core:New> <https://launchpad.net/bugs/1611159>
[00:49] <wallyworld> redir: did you want to catch up?
[00:49] <wallyworld> or you all good?
[00:49] <redir> wallyworld: sure, gimme 1 minute
[00:51] <redir> k
[00:51] <redir> standup?
[00:51] <redir> wallyworld: ^
[00:51] <wallyworld> ok
[01:11] <axw> redir: sorry, was afk
[01:29] <redir> np just off HO with wallyworld, going EoD but will prolly hit you up tomorrow.
[01:33]  * redir goes EoD
[02:05] <natefinch> axw: let me know when/if you'd like to spend a little time discussing the jsonschema stuff... otherwise, I'll get to writing that proposal for it that I was supposed to do earlier.
[02:07] <axw> natefinch: happy to chat whenever, I think having a proposal first would be best though - something concrete we can discuss and mould
[02:08] <natefinch> axw: cool, will whip something up.
[02:28] <natefinch> weird, somehow chrome is linking zoom size to domain... if I have two different chrome windows open - if they're the same domain, they both increase or decrease in size. If they're different urls they change independently.
[02:29] <natefinch> which is super annoying when I want to have godoc open on two different monitors with vastly different pixel densities
[02:31]  * natefinch switches one window to gowalker.org ... oh the indignity
[02:56] <axw> anastasiamac: I have added this change to the PR, as it was failing with mongo 2.4: https://github.com/juju/juju/pull/5877/commits/ac97bffd0261f06df93255bed0a48ce35d1a5eb1
[02:56] <axw> anastasiamac: tested with mongo 2.4 and 2.6 (2.6 works with and without the change)
[02:56] <axw> 2.4 works only with the change
[02:57] <anastasiamac> axw: 2.4 doesn't like "or" :( ..?..
[02:58] <axw> anastasiamac: not really sure what it doesn't like, seemed kosher to me
[02:58] <axw> anastasiamac: but they're both happy with $ne even if the field doesn't exist
[02:59] <anastasiamac> excellent \o/ lgtm
[03:33] <wallyworld> menn0: one thing that i miss when using juju from a snap is tab completion :-)
[03:33] <menn0> wallyworld: yeah true. I guess there might be some kind of workaround where juju completion works for *.juju too but that's a bit hacky
[03:34] <wallyworld> i'm sure we can fix it somehow
[03:56] <mup> Bug #1580391 changed: juju/errors tests do not pass with go 1.6 <tech-debt> <unit-tests> <juju-core:Fix Released by dave-cheney> <https://launchpad.net/bugs/1580391>
[03:59] <axw> anastasiamac: PTAL at http://reviews.vapour.ws/r/5392/ when you have a moment
[03:59] <axw> anastasiamac: i.e. please see my replies
[03:59]  * axw goes for lunch
[04:08] <anastasiamac> wallyworld: could u do a quick ho in a-team call?
[04:08] <wallyworld> ok, give me a minute
[04:09] <anastasiamac> wallyworld: k. ping u when available
[04:13] <natefinch> my proposal for conversion to jsonschema for anyone interested: https://docs.google.com/document/d/137ejUl8qRAU98F4eg9jfO9L90fkkvRZ5aGINmaxKlM0/edit#
[04:14] <wallyworld> anastasiamac: there now
[04:53] <natefinch> axw: I'm a little stuck on the actual implementation of getting the non-jsonschema values into useful places in our code... without terrible hacks.  Any ideas you have there are welcome.  I'm going to go to bed.. will check my email overnight, so when william gets up, you guys figure out a time to talk, and I'll get up and talk.
[04:53] <natefinch> axw: unless you think it would be useful to talk now for a bit, without william
[04:54] <axw> natefinch: I need to think about it some more first I think
[04:54] <axw> ... think think
[04:54] <natefinch> axw: no problem. I've gone around and around on it.  I welcome more brains being applied to the problem.
[06:52] <fwereade> axw, menn0, jam: shall we talk schemas?
[06:52] <axw> fwereade: happy to, but shouldn't we wait for natefinch?
[06:54] <fwereade> axw, perhaps, but I'm not sure when he'll arrive; and several of us look likely to be here; and I got the impression that you were pretty happy with the proposal in the doc
[06:56] <axw> fwereade: okey dokey, I can chat with natefinch about it tomorrow if need be
[07:01] <fwereade> axw, so, well, *I'm* happy with the proposal in general, and have no attachment to juju/gojsonschema if a different library gives us something we need
[07:01] <axw> fwereade: agreed, as long as we end up with one or the other
[07:01] <axw> (and not both)
[07:02] <fwereade> axw, right, yes, we should replace the action bits if we switch
[07:03] <axw> fwereade: do you have an opinion on whether we should retain the EnvVar bits?
[07:03] <axw> or Group?
[07:03] <axw> I'm *fairly* sure Group is redundant now
[07:04] <axw> anything that's "juju defined" shouldn't really be in the model config anyway
[07:05]  * axw just waits for doc comments to roll in
[07:05] <fwereade> axw, I'm very happy to drop group
[07:05] <fwereade> axw, and I am a bit snooty about the env var bits
[07:06] <fwereade> axw, do you think we can cast them as interactive-only? in which add-credential is considered interactive?
[07:06] <axw> fwereade: that's all they would be used for
[07:06] <fwereade> axw, it's the "magically grab them from the env" stuff that I always hated
[07:06] <axw> fwereade: autoload-credentials and add-credential
[07:07] <fwereade> axw, yeah
[07:07] <axw> fwereade: yeah, that should definitely not happen post-add
[07:08] <axw> fwereade: I guess I can live with it. but it feels kinda awkward that there will be some credential detection methods that just won't work, like looking for files
[07:08] <fwereade> axw, (if I were to shoot for the moon I'd prefer a schema that didn't allow defaults, tbh, it's just another way to handwave the difference between ingestion and validation)
[07:08] <fwereade> axw, well, we *could* define file paths as well...
[07:09] <fwereade> axw, not that I really want to
[07:10] <fwereade> axw, but the *important* thing in my mind is not to have the *validation* bits go charging off through the env, filesystem, network, whatever, just to find some value to wedge in and make things look right
[07:10]  * axw nods
[07:11] <axw> fwereade: these defaults should only apply when creating. providers can/should take care of adding defaults as necessary on upgrade
[07:12] <fwereade> axw, yeah, which is why I'd like to keep default-insertion out of validation too, if we can
[07:12] <fwereade> axw, I just don't see a path that doesn't involve us writing our own schema thing
[07:14] <axw> fwereade: I mentioned this in London: https://github.com/juju/juju/blob/master/environs/interface.go#L121
[07:14] <fwereade> axw, we could just tell people not to specify json-schema defaults (ha), but if people *did* eschew them then it is just the same need as the env/file cases: if not specified, get a value <like this>
[07:14] <axw> fwereade: isn't it enough to just implement that in each provider, and stop modifying config in ValidateConfig?
[07:14] <axw> fwereade: ah I see what you mean, JSON-Schema validation will insert the defaults?
[07:14] <fwereade> axw, yeah
[07:16] <fwereade> axw, (regardless, I'm +1 on breaking up the ValidateConfig god-methods any way we can)
[07:16] <axw> fwereade: I would think we would just ignore the defaults post-creation
[07:16] <axw> same for env vars
[07:17] <fwereade> axw, yeah -- that's easy if we can clearly separate them -- but I don't see how we can avoid default-insertion basically every time we validate a config
[07:27] <anastasiamac> axw: wallyworld: menn0: any suggestions? or workaround for this? https://bugs.launchpad.net/juju-core/+bug/1611159
[07:27] <mup> Bug #1611159: model not successfully destroyed, and error on "juju list-models" <oil> <oil-2.0> <juju-core:Triaged> <https://launchpad.net/bugs/1611159>
[07:30] <axw> anastasiamac: nope, sorry
[07:32] <anastasiamac> axw: tyvm!
[07:35] <mup> Bug #1449044 changed: juju add-unit resets AWS security groups <add-unit> <ec2-provider> <security> <juju-core:Invalid> <https://launchpad.net/bugs/1449044>
[07:35] <mup> Bug #1548564 changed: restart failed upgrade <juju-core:Invalid> <juju-core 1.25:Won't Fix> <https://launchpad.net/bugs/1548564>
[08:08] <mup> Bug #1517499 changed: i/o timeout on bundle deployment <juju-core:Triaged> <https://launchpad.net/bugs/1517499>
[08:28] <rogpeppe> axw: hiya
[08:28] <axw> rogpeppe: hey
[08:28] <rogpeppe> axw: i just ran across mongotest.DialTimeout...
[08:28] <rogpeppe> axw: it was causing a test to take 5 minutes
[08:29] <rogpeppe> axw: is 5 minutes really necessary as the dial timeout?
[08:29] <rogpeppe> axw: it seems like quite a long time...
[08:30] <axw> rogpeppe: per the comment, I think so. mostly for the unit test merge jobs, because the machines thrash quite a bit after starting up
[08:30] <axw> rogpeppe: why is the test taking 5 minutes? sounds like an error in the test?
[08:30] <rogpeppe> axw: well, arguably it's a problem with state.State.Close
[08:31] <rogpeppe> axw: or with the whole strategy used to manage mgo.Sessions in juju
[08:31] <rogpeppe> axw: it's an interesting thing though
[08:31] <rogpeppe> axw: the test that takes 5 minutes is apiserver.serverSuite.TestNewServerDoesNotAccessState
[08:32] <rogpeppe> axw: well... 7/10 times it takes 0.08 seconds; the other times it takes 5 minutes
[08:33] <rogpeppe> axw: it depends how far the state workers get before the state is shut down
[08:33] <rogpeppe> axw: the problem is that the session is copied (with Session.Copy) whenever a mongo operation is performed
[08:34] <rogpeppe> axw: and if the mongo is unavailable (as it is in this test - deliberately), the mongo operation will block until a connection can be made (or it times out)
[08:35] <rogpeppe> axw: and while worker is trying to perform the operation, it cannot be stopped
[08:35] <rogpeppe> axw: and the State cannot be closed until all the workers have been stopped
[08:36] <mup> Bug # opened: 1611267, 1611269, 1611271, 1611273, 1611275
[08:36] <axw> rogpeppe: hrm :(  I don't suppose we can tell when the workers are idle?
[08:37] <rogpeppe> axw: how would that help?
[08:37] <axw> rogpeppe: don't close the proxy until nothing should be accessing it
[08:37] <axw> rogpeppe: which is... never? the transaction watcher will always be active won't it
[08:40] <rogpeppe> axw: one mo, in a call
[08:43] <babbageclunk> fwereade: so I was bugging dimitern about api stuff yesterday evening, but it got late.
[08:47] <babbageclunk> fwereade: (ping)
[08:56] <axw> mgz: ping? would now be a good time for me to run some tests against finfolk?
[08:59] <axw> scratch that, will be back later
[09:01] <fwereade> babbageclunk, hey, sorry
[09:02] <fwereade> babbageclunk, I did do a little review this am, not sure if that's relevant?
[09:02] <babbageclunk> fwereade: no worries - just noticed the review, going through it now.
[09:03] <babbageclunk> fwereade: I'm not sure what you mean by "wrap the txn-building in a func that guarantees a sensible error"
[09:03] <babbageclunk> fwereade: You mean something other than onAbort?
[09:03] <fwereade> babbageclunk, ah sorry
[09:04] <fwereade> babbageclunk, there's run, which takes a `func(attempt int) error`
[09:04] <babbageclunk> (I mean, I understand the problem with letting ErrAborted get out, I'll fix that, just wondering if there's a deeper fix than chucking an onAbort around it.)
[09:04] <fwereade> babbageclunk, which will get called again if the txn is aborted
[09:05] <fwereade> babbageclunk, and if you structure the buildTxn func nicely, you have OAOO place where you check state, and can construct the associated assert at the same time
[09:05] <rogpeppe> axw: yes, the transaction watcher (and the pinger) will always be active
[09:06] <fwereade> babbageclunk, so when it fails it just gets called again and you never have to worry about, e.g., having two places to complain about non-dead machines that produce slightly different error messages
[09:06] <babbageclunk> fwereade: Ah, so run can recheck the preconditions each time to work out which one failed and abort with a nicer error?
[09:06] <fwereade> babbageclunk, exactly
[09:07] <fwereade> babbageclunk, https://github.com/juju/juju/wiki/mgo-txn-example
[09:08] <babbageclunk> fwereade: Right, so if I change Remove, MarkForRemoval and CompleteMachineRemovals to use those then places that add preconditions get a chance to return the right kind of error at the same time as they add the assertion.
[09:08] <fwereade> babbageclunk, yes please :)
[09:09] <babbageclunk> fwereade: Ok, thanks, I'll do that.
[09:09] <fwereade> babbageclunk, incidentally, if run failed enough you'll get ErrExcessiveContention out, which has text something like "state changing too fast, try again soon"
[09:10] <fwereade> babbageclunk, but what it almost invariably *actually* means is "some developer messed up the correspondence between state checks and txn asserts"
[09:10] <babbageclunk> fwereade: Ah, right.
[09:11] <fwereade> babbageclunk, but that's generally easier to understand and fix than when an onAbort triggers in response to ops defined somewhere else, because that means you often just get the *wrong* error message
[09:12] <babbageclunk> fwereade: ok
[09:12] <fwereade> babbageclunk, (because someone changed the ops and the client is still assuming there's only one way for the ops to trigger abort)
[09:22] <rogpeppe> axw: i think the real solution is not to Copy the session on every database operation
[09:23] <rogpeppe> axw: but that's a significant change
[11:03] <rogpeppe> a small juju-core code cleanup: https://github.com/juju/juju/pull/5954
[11:04] <rogpeppe> reviews appreciated. fwereade, you'll appreciate this change
[11:16] <fwereade> rogpeppe, very nice, LGTM with a trivial
[11:16] <rogpeppe> fwereade: tvm
[11:32] <rogpeppe> fwereade: that trivial is not *quite* so trivial. AFAICS there is no need for that remoteError value at all. it couldn't work even if the call did return an error. and i don't see why that code is invoking FacadeCall directly rather than using an api package.
[11:32] <rogpeppe> fwereade: ha, that *is* the api package :)
[11:38] <babbageclunk> fwereade: Can you take a look at http://reviews.vapour.ws/r/5365/diff/# again? Everything's using buildTxn/run now, but I'm worried I might be doing too much in completeMachineRemovalsOps.
[11:46] <natefinch> axw, fwereade: morning... anything you need to talk about?
[11:53]  * babbageclunk goes for a run.
[11:57] <wallyworld> looking for a review of a small CLI change for stakeholders to go into beta15 http://reviews.vapour.ws/r/5399/
[11:59] <axw> natefinch: I think you're good to get deeper into it - not sure that there's anything to discuss. there's a few comments on the doc. I'm here for a bit but will be disappearing soon - will be more free to chat tomorrow if you're uncertain about anything
[12:01] <axw> mgz sinzui balloons: any of you about?
[12:03] <natefinch> axw: no problem.  The comments seems clear, and I think we'll be good to go with it.
[12:04] <axw> natefinch: did you look at replacing gojsonschema with the other one already?
[12:04] <axw> for actions
[12:04] <axw> I suspect it'd be trivial, I don't think we do anything particularly complicated there
[12:39] <fwereade> babbageclunk, I think that's all fine
[12:40] <fwereade> babbageclunk, completeMachineRemovalsOps is quite big but it doesn't seriously upset me
[13:06] <babbageclunk> fwereade: great! Thanks.
[13:06] <sinzui> axw: I am now about
[13:23] <sinzui> fwereade: babbageclunk What is the sig we send to jujud to stop and uninstall?
[13:26] <babbageclunk> sinzui: I think it's a file that gets left in a directory somewhere, but I haven't found the bit of code yet.
[13:28] <sinzui> babbageclunk: touch uninstall-agent, then SIGABRT?
[13:31] <mup> Bug #1506225 changed: Failed bootstrap does not clean up failed environment w/o --force and error message is unhelpful <bootstrap> <destroy-environment> <jujuqa> <juju-core:Invalid> <juju-core 1.25:Won't Fix> <https://launchpad.net/bugs/1506225>
[13:31] <mup> Bug #1611379 opened: api/backups: restore code needs improvement <juju-core:New> <https://launchpad.net/bugs/1611379>
[13:31] <babbageclunk> sinzui: not sure about the signal, but that sounds plausible
[13:33] <sinzui> babbageclunk: I am reading notes I left to myself more than a year ago. Plenty of time for juju to change
[13:35] <babbageclunk> sinzui: True that. I'm just trying it out.
[13:58] <wallyworld> natefinch: I think you're ocr? can i get a small review for a stakeholder request http://reviews.vapour.ws/r/5399/
[14:00] <natefinch> wallyworld: will do
[14:00] <wallyworld> ta
[14:03] <rick_h_> fwereade: standup ping
[14:10] <mup> Bug #1516150 changed: LXC containers getting HA VIP addresses after reboot <canonical-bootstack> <juju-reboot> <juju-core:Invalid by dimitern> <juju-core 1.25:Won't Fix> <https://launchpad.net/bugs/1516150>
[14:12] <tvansteenburgh> Hi everyone...in juju1 I could upload a local charm to the apiserver, then immediately deploy it via the Deploy api using the 'local:...' url returned by the upload. That doesn't work with juju2 - I get a 'charm not found' error from the api when I Deploy, and the Charms.List api doesn't show my charm. What am I missing?
[14:14] <natefinch> mgz, sinzui: for https://bugs.launchpad.net/bugs/1604474, can you leave up a windows VM that CI tried to deploy to?  I wasted a lot of time trying to get azure to start a windows machine yesterday
[14:14] <mup> Bug #1604474: Juju 2.0-beta12  userdata execution fails on Windows <azure-provider> <ci> <juju2.0> <oil> <oil-2.0> <regression> <vpil> <windows> <juju-core:In Progress by natefinch> <https://launchpad.net/bugs/1604474>
[14:15] <sinzui> natefinch: sure, we just add --keep-env to the job, or just run the test locally with the same flag
[14:16] <babbageclunk> Hi tvansteenburgh - you can deploy a local charm just by specifying the charm directory in juju 2. Or do you want to upload it specifically?
[14:16] <mgz> there's then a slight wiggle making sure you get the env details to actually inspect that machine
[14:17] <mgz> but it's doable
[14:17] <tvansteenburgh> babbageclunk: yeah but only via the cli right? i'm using the api
[14:18] <babbageclunk> tvansteenburgh: oh, I see. Sorry, not sure.
[14:26] <babbageclunk> tvansteenburgh: Reading the code it sounds like what you're describing should still work. Posting to the /charms endpoint gives you a url back?
[14:26] <tvansteenburgh> babbageclunk: yeah, a local: url
[14:26] <tvansteenburgh> but then when i try to deploy using that url, i get charm not found
[14:27] <tvansteenburgh> and if i call the Charms.List api, my charm isn't in the list
[14:27] <natefinch> axw: you're not actually around are you?
[14:30] <sinzui> natefinch: I run the test scripts locally all the time. I think this will run the windows charms in azure and keep the env about after your send a control-c https://pastebin.canonical.com/162690/
[14:30] <babbageclunk> tvansteenburgh: Sorry, I'm not familiar with that part of the code at all.
[14:30] <rick_h_> tvansteenburgh: katco and wallyworld were just talking about some work needed around that code
[14:31] <rick_h_> tvansteenburgh: I defer to katco on the exact work required/possible bug there but funny that you bring it up right as they're investigating that area of things
[14:31] <mup> Bug #1516150 opened: LXC containers getting HA VIP addresses after reboot <canonical-bootstack> <juju-reboot> <juju-core:In Progress by dimitern> <juju-core 1.25:Won't Fix> <https://launchpad.net/bugs/1516150>
[14:31] <mup> Bug #1611391 opened: model migration fails while removing from original controller <model-migration> <juju-core:Triaged> <https://launchpad.net/bugs/1611391>
[14:32] <natefinch> sinzui: thanks for the detailed example
[14:33] <babbageclunk> fwereade: around?
[14:33] <katco> tvansteenburgh: i'll ping you when i open that bug. from my discussions with wallyworld though, i would think that what you're doing should work (i.e. deploying from the api). there is likely something wrong with how we're caching charm archives
[14:33] <fwereade> babbageclunk, heyhey
[14:34] <babbageclunk> fwereade: migrations!
[14:34] <babbageclunk> fwereade: I think I should migrate machineRemovals records?
[14:35] <fwereade> babbageclunk, I think you should add a note saying they're not migrated because we won't migrate when we're in an unstable state
[14:35] <babbageclunk> fwereade: I *could* treat them like cleanups, but then migration would be blocked by the provider?
[14:36] <fwereade> babbageclunk, and double-check with menn0 that "any entity in any state other than alive" == unstable (from migrations perspective)
[14:36] <babbageclunk> fwereade: Ok, so I should treat them like cleanups then - don't let migration happen while there are machineRemovals.
[14:36] <babbageclunk> fwereade: ok
[14:36] <fwereade> babbageclunk, yeah -- and just sync up with menn0, I am pretty certain he's about to start on that side of it
[14:36] <tvansteenburgh> katco: thanks! really eager to get past this - trying to help a customer who is stuck here
[14:37] <katco> tvansteenburgh: ack. other than asap, what kind of timeframe would be ideal for you?
[14:37] <babbageclunk> fwereade: ok cool
[14:38] <tvansteenburgh> katco: yesterday? :)
[14:38] <tvansteenburgh> katco: i know you guys are swamped, whatever you can do
[14:38] <katco> tvansteenburgh: not a problem. in this timeline you'll have to wait 40y or so while i invent time travel, but the alternate-you will have it yesterday :)
[14:39] <tvansteenburgh> haha!
[14:39] <niedbalski> dimitern, re: 1610037, I just uploaded the cloud init logs for both container/host.
[14:39] <balloons> jam, thanks for opening the conversation again about sharing configs for snappy
[14:54] <natefinch> katco: ship it with one minor question
[14:54] <katco> natefinch: tal, ty
[14:55] <katco> natefinch: ah, yeah the reason i do that is so that the formatting is completely left to the errors package. basically for consistency
[14:56] <katco> natefinch: e.g. if the errors package ever changes its string formatting (or we use different errors), this would still format correctly in the log
[15:01] <mup> Bug #1611404 opened: failed migration leaves model unkillable <model-migration> <juju-core:Triaged> <https://launchpad.net/bugs/1611404>
[15:01] <natefinch> katco: It was a little confusing to me.  My first thought was that you were accidentally not saving the annotated error to a local variable.... took me a second to realize you were probably doing it on purpose to let it do the formatting for you.
[15:17] <balloons> I believe my PR's are still invisible, so does someone mind having a look at https://github.com/juju/juju/pull/5956 today? It's upstreaming the snapcraft.yaml file
[15:29] <dimitern> niedbalski: thanks, I'll have a look
[15:32] <dimitern> niedbalski: /var/log/cloud-init.log from the container is not very useful I'm afraid - I'd like to see /var/log/cloud-init-output.log from the container please, and thanks for adding the machine's /v/l/c-i-o.log
[15:34] <niedbalski> dimitern, ok. cloud-init and cloud-init-output http://pastebin.ubuntu.com/22811404/
[15:36] <dimitern> niedbalski: is there nothing else in the container's /var/log/cloud-init-output.log? That's very strange..
[15:40]  * rick_h_ grabs lunchables
[15:41] <niedbalski> dimitern, yep, it seems not complete. btw; http://pastebin.ubuntu.com/22811951/ this is the hypervisor.
[15:50] <katco> natefinch: hey you're OCR, can i get a quick ship it? +8L: http://reviews.vapour.ws/r/5401/
[16:01] <mup> Bug #1611427 opened: Code which performs retries should consolidate on github.com/juju/retry <tech-debt> <juju-core:New> <https://launchpad.net/bugs/1611427>
[16:07] <rick_h_> katco: I'd feel free to self-review that one and just run with it
[16:07] <katco> rick_h_: k
[16:07] <katco> rick_h_: wondering... in the time it took you to type that message, could you have clicked "Ship It!" ;)
[16:08] <mgz> katco: I'm happy to stamp a comment-only change :)
[16:08] <rick_h_> katco: yes, but then I'd not have said that in the future you should be able to self-review trivial such as that
[16:08] <rick_h_> katco: so figured I'd type that vs both :)
[16:08] <katco> hehe
[16:08] <katco> mgz: ta
[16:08] <babbageclunk> fwereade: still around?
[16:24] <fwereade> babbageclunk, heyhey
[16:24] <babbageclunk> fwereade: hey, so now I'm trying to understand auth in the API.
[16:26] <fwereade> babbageclunk, basically, your facade is passed a facade.Authorizer that tells you who made the (already-authenticated) connection to your facade
[16:27] <fwereade> babbageclunk, often you can just return common.ErrPerm from the facade ctor when you know that the caller isn't allowed -- e.g. a user connecting to a facade intended for a worker
[16:27] <babbageclunk> fwereade: Ok, so there's the AuthModelManager() method, that is only true for something that's running on the controller? I'm using that in the ctor.
[16:28] <babbageclunk> fwereade: You mentioned in the review that AllMachineRemovals should take entities and check that it's the model tag.
[16:29] <fwereade> babbageclunk, ah yes
[16:29] <fwereade> babbageclunk, right, the AuthModelManager is the right one for the ctor
[16:29] <babbageclunk> Do I do that just by checking that the type of the tag is model, and then calling authorizer.AuthOwner(modelTag)?
[16:30] <fwereade> babbageclunk, and... it is sort of ludicrous that the Authorizer *doesn't* have a ModelUUID representing which model it's connected to, but you *can* get that out of state
[16:30] <fwereade> babbageclunk, AuthOwner isn't quite right
[16:30] <babbageclunk> fwereade: Maybe HasPermission?
[16:30] <fwereade> 	// AuthOwner returns true if tag == .GetAuthTag(). Doesn't need
[16:30] <fwereade> 	// to be on this interface, should be a utility fun if anything.
[16:31] <fwereade> GetAuthTag will be the machine that connected
[16:31] <fwereade> babbageclunk, I would (1) reject non-model-tags, and (2) reject model tags that don't match the state's modeluuid
[16:32] <fwereade> babbageclunk, for bonus points, a ConnectedModel or something on Authorizer would be much nicer than having to ask state
[16:32] <babbageclunk> fwereade: Ah, right - not really looking at auth at all.
[16:32] <fwereade> babbageclunk, yeah, but I think conceptually that info *should* be coming from the authorizer
[16:32] <babbageclunk> fwereade: yeah, that does sound nicer.
[16:32] <fwereade> babbageclunk, it's just that it hasn't really evolved much at all over the years
[16:32] <babbageclunk> Ok, I'll try to thread that through.
[16:33] <fwereade> babbageclunk, <3
[16:34] <babbageclunk> And should it accept multiple models? Presumably that would always be denied - the agent's connected to one model specifically, right?
[16:34] <fwereade> babbageclunk, connections to a controller muddy the waters a little bit
[16:35] <fwereade> babbageclunk, I would probably go no further than reporting *what* model the connection is for, and leave the interpretation of that in the facade, at least to begin with
[16:36] <babbageclunk> fwereade: Right, sorry. I meant that check in AllMachineRemovals facade method.
[16:37] <fwereade> babbageclunk, ah, got you, yeah -- the connection is specifically for just one model
[16:37] <fwereade> babbageclunk, so constructing a canAccess AuthFunc and passing that on to the rest of your logic is probably ideal
[16:37] <babbageclunk> fwereade: Cool. I'll do the same check on WatchMachineRemovals.
[16:38] <fwereade> babbageclunk, that's a `func(names.Tag) bool` iirc
[16:38] <fwereade> babbageclunk, perfect
[16:38] <babbageclunk> fwereade: And then the CompleteMachineRemovals and GetNetworkConfig ones already get lists of machines - so I do the checks against them?
[16:39] <fwereade> babbageclunk, I think they're implicitly authorized by the AuthModelManager -- whereas if it were just some machine agent, we'd want to restrict info about any *other* machines
[16:40] <fwereade> babbageclunk, but this is very much situational IYSWIM
[16:40] <fwereade> babbageclunk, deciding what gets access to what is pretty much the facade's primary job
[16:40] <fwereade> babbageclunk, we should generally be expecting to write the logic in terms of an AuthFunc chosen by the facade, or something very similar
[16:41] <babbageclunk> fwereade: Yeah, I think I see that.
[16:41] <fwereade> babbageclunk, so that when we end up wanting to do the same stuff elsewhere, it's easy to pull that logic into a common/whatever type and parameterise it with an AuthFunc on facade construction
[16:42] <fwereade> babbageclunk, e.g. apiserver/lifeflag, where I was very happy to just grab a couple of existing common services and specialise them
[16:42] <babbageclunk> fwereade: Oh, right - I see some things like that in the machiner.
[16:43] <babbageclunk> fwereade: Actually, something related that I was going to ask you about this morning but then got distracted by your review...
[16:43] <fwereade> babbageclunk, yeah -- it's not all super-elegant but at least it separates the responsibilities, or lets us chalk out a path towards doing so
[16:43] <fwereade> babbageclunk, oh yes?
[16:44] <babbageclunk> fwereade: ProvisionerAPI.GetContainerInterfaceInfo is very close to what I want for getting the network config for machines.
[16:44] <fwereade> babbageclunk, then that's a great candidate for just that sort of extraction :)
[16:45] <babbageclunk> fwereade: Right, that was what I was thinking/wondering. It's slightly different because it only uses the first address on the device though.
[16:46] <fwereade> babbageclunk, hmm... I'm not sure I have the context for a firm judgment there
[16:46] <fwereade> babbageclunk, if there's no drawback to sending extra info to the one that doesn't need all of it, might still be sensible
[16:47] <babbageclunk> fwereade: ok, I'll think about that.
[16:47] <fwereade> babbageclunk, there might be something genuinely in common, but there might not ;)
[16:47] <fwereade> (not always easy to tell)
[16:48] <babbageclunk> fwereade: might be better to just do my one and then see what's common, rather than the other way around.
[16:48] <fwereade> babbageclunk, ack
[16:48] <babbageclunk> fwereade: ok, great - thanks! Sorry to bug you late!
[16:48] <fwereade> babbageclunk, http://thecodelesscode.com/case/233
[16:48] <fwereade> babbageclunk, no worries!
[16:50] <babbageclunk> fwereade: cute
[16:51] <fwereade> babbageclunk, I like that site a lot, there's some good stuff there
[16:51] <fwereade> babbageclunk, occasionally self-contradictory in the very best possible way
[16:52] <mup> Bug #1611453 opened: manual-provider: systemd services left behind <ci> <kill-controller> <manual-provider> <regression> <systemd> <xenial> <juju-core:Triaged> <https://launchpad.net/bugs/1611453>
[16:56] <natefinch> katco: sorry, was out.  Looks like you got it though.
[16:59] <rick_h_> perrito666: got a sec?
[16:59] <perrito666> rick_h_: certainly
[17:00] <rick_h_> perrito666: so pulled that pr down and built a juju with it and verified my local show.go has your patch in it
[17:00] <rick_h_> perrito666: but it's not working for me when I run $ djuju show-model controller
[17:00] <rick_h_> perrito666: I get the output for model "default"
[17:01] <perrito666> rick_h_: mm, something might have changed since I PRd that, tx for the test, ill re-review it and fix it and re-propose it
[17:01] <rick_h_> perrito666: rgr ok ty
[17:01] <rick_h_> perrito666: and let me know if I'm doing something wrong, only built things a couple of times so problem could be on my end without a doubt
[17:02] <perrito666> rick_h_: well the way you describe it, the issue is most likely on my end :)
[17:02] <rick_h_> perrito666: k, let me know if I can be of any service
[17:58] <mup> Bug #1611463 opened: GUI: test_upgrade_gui failed KeyError: 'bootstrap-config' <ci> <juju-gui> <regression> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1611463>
[19:03] <katco> sinzui: is there any easy way to check if lp:1596462 is fixed in master?
[19:09] <sinzui> katco: Let's be optimistic and close it. http://reports.vapour.ws/releases/issue/5770f063749a56783c8c36fc shows we saw it once a week, but we haven't seen it in 4 weeks
[19:09] <katco> sinzui: well, i just landed something today that (in theory) should fix it
[19:09] <katco> sinzui: (last comment in the bug)
[19:10] <balloons> rogpeppe, so I started making the changes to migrate everything from launchpad.net/gnuflag to github.com/juju/gnuflag. I fixed everything in juju/juju, but I see there's imports in several other repos. What's the best way to coordinate this change?
[19:11] <sinzui> katco: I suspect this bug was caused by an overloaded server. We restructured tests to avoid cpu starvation
[19:11] <katco> sinzui: ah. well in any case, even under load the bug should be fixed
[19:11] <sinzui> why was it only seen on Fridays?
[19:15] <katco> sinzui: not sure if that was a question for me. i don't know. weekend test run?
[19:17] <sinzui> katco: just a public musing. CI doesn't schedule extra work on late Thursday early Friday. The pattern is odd
[19:17] <mup> Bug #1609494 changed: grant-revoke: reenabled users missing from list-users <ci> <grant> <regression> <user> <juju-ci-tools:Invalid> <juju-core:Fix Released by axwalk> <https://launchpad.net/bugs/1609494>
[20:14] <balloons> natefinch, can I get you to review something for me / make sure I've done things properly?
[20:16] <natefinch> balloons: sure
[20:16] <balloons> natefinch, https://github.com/juju/juju/pull/5956. I'm attempting to land snapcraft.yaml and move 2 dependencies off bzr to git instead. tomb and gnuflag
[20:20] <natefinch> balloons: wow, those were our last two launchpad dependencies, huh?
[20:21]  * perrito666 cries in spanish
[20:23] <natefinch> balloons: for romulus, you updated tomb dependencies.tsv to use tomb.v1... but didn't update any dependencies that use tomb.  that'll probably break something in CI.
[20:23] <balloons> natefinch, yes indeed
[20:23] <natefinch> (I assume gnuflag doesn't use tomb)
[20:24] <balloons> they are the last. I was trying to find all the places it was used, but juju/* is quite big obviously. I also assumed perhaps I need to land the supporting PRs before the main PR in juju/juju
[20:25] <natefinch> yes, you do need to land those first, and then update juju's dependencies to point to the new commits for those repos
[20:25] <natefinch> it's just a giant PITA
[20:27] <balloons> natefinch, so i'm not sure what you mean by the romulus comment
[20:27] <perrito666> brb
[20:28] <natefinch> balloons: you changed what dependencies.tsv says the repo depends on... but you didn't actually change any code that imports launchpad.net/tomb to import the github repo
[20:29] <natefinch> balloons: oh, I think I see what it is.  utils has the dependency on tomb, and romulus uses utils
[20:30] <balloons> natefinch, right. utils has the inverse problem
[20:30] <natefinch> actually... romulus also depends on juju/juju ... which is just a horrible idea
[20:30] <balloons> i updated gnuflag, but it only has tomb underneath
[20:31] <natefinch> wait, gnuflag depends on tomb?
[20:31] <natefinch> oh nevermind
[20:31] <balloons> natefinch, no sorry.. I meant utils only uses tomb, not gnuflag, but depends on romulus that does use gnuflag
[20:31] <natefinch> I misunderstood
[20:31] <balloons> I'm not sure why those sub-repos also have dependencies.tsv..
[20:32] <natefinch> they really shouldn't.  The only thing that should have a dependencies.tsv is a repo that makes a binary
[20:34] <perrito666> natefinch: that is a strong statement
[20:35] <balloons> natefinch, ok, well do we feel I've gotten all the places that need updating?
[20:35] <natefinch> anyway, my point is.... you can't change a dependency path without changing either some code in the repo that uses it, or changing another dependency that had code changed to use it.
[20:35] <natefinch> this change can't be correct: https://github.com/juju/utils/pull/230/files
[20:35] <natefinch> nothing is using the new gnuflag path
[20:36] <natefinch> which means something is likely using the old gnuflag path, which means this is wrong
[20:36] <balloons> natefinch, why is it in the dependencies.tsv to begin with then? nothing under utils uses it
[20:37] <balloons> that was, I assumed, the utils <-> romulus relationship.. I guess
[20:37] <natefinch> relationships are never two way.  always only one way
[20:37] <natefinch> something utils imports is using gnuflag
[20:38] <balloons> I can find no other uses anywhere
[20:38] <balloons> does godeps actually take out dependencies that aren't needed?
[20:38] <balloons> I felt like juju/juju has stuff it might not use for instance; not that I've explored it
[20:38] <natefinch> if you run godeps ./... it'll print out the actual dependencies
[20:39] <natefinch> or godeps -t ./... to include testing dependencies
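The mismatch natefinch is describing — an entry declared in dependencies.tsv that nothing in the repo actually imports — can be checked mechanically. A rough sketch (not godeps itself; it assumes the godeps TSV layout of tab-separated import path / vcs / revision, and uses a deliberately loose regex for quoted import paths):

```python
# Sketch: flag dependencies.tsv entries that no .go file imports.
# Assumes the first tab-separated column of each TSV line is the import path.
import os
import re

# Rough match for quoted domain-style import paths like "github.com/juju/gnuflag".
IMPORT_PAT = re.compile(r'"([a-z0-9.\-]+\.[a-z]+/[^"]+)"')

def declared_deps(tsv_path):
    """Import paths listed in a godeps-style dependencies.tsv."""
    with open(tsv_path) as f:
        return {line.split("\t")[0] for line in f if line.strip()}

def imported_paths(repo_dir):
    """All domain-style import paths quoted anywhere in the repo's .go files."""
    imports = set()
    for root, _, files in os.walk(repo_dir):
        for name in files:
            if name.endswith(".go"):
                with open(os.path.join(root, name)) as f:
                    imports.update(IMPORT_PAT.findall(f.read()))
    return imports

def unused_declared(tsv_path, repo_dir):
    """Declared deps that no import uses, directly or as a path prefix."""
    imports = imported_paths(repo_dir)
    return {d for d in declared_deps(tsv_path)
            if not any(i == d or i.startswith(d + "/") for i in imports)}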
[20:39] <perrito666> natefinch: what is wrong with the link you passed?
[20:39] <natefinch> and that includes gnuflag somehow..
[20:40] <natefinch> perrito666: gnuflag was updated in dependencies.tsv, but no imports that actually referenced the old one were updated to use the new one (and no other dependencies changed)
[20:40] <perrito666> ah I missed that
[20:41] <natefinch> ahh, I see what it is
[20:41] <natefinch> juju/cmd still references the old one.  You need to check in that change, then update utils to reference the new one
[20:42] <balloons> that makes sense to me
[20:42] <natefinch> (to reference the new juju/cmd)
[20:42] <perrito666> in github.com/juju there are 237 imports of gnuflag
[20:42] <natefinch> this is why it's a PITA
[20:42] <natefinch> perrito666: almost all of those are in juju/juju
[20:42] <perrito666> sorry 153
[20:43] <natefinch> so it's a rolling update - juju/cmd is the leaf, check that in, update juju/utils to reference the new juju/cmd, check that in, update juju/romulus to reference the new juju/utils. check that in
[20:43] <natefinch> although romulus depends on juju/juju, so you'll need to do that before romulus
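The rolling update natefinch lays out is just a topological sort of the repo dependency graph, leaves first. A sketch using the stdlib `graphlib` (Python 3.9+), with the edges as described in the discussion (utils references cmd, juju/juju uses utils and cmd, romulus uses utils and juju/juju — assumed from the chat, not verified against the repos):

```python
# Sketch: compute the landing order for a chain of dependent repos.
# Leaves land first; each repo lands only after everything it imports.
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# node -> set of repos it depends on (edges assumed from the discussion)
DEPS = {
    "juju/cmd": set(),
    "juju/utils": {"juju/cmd"},
    "juju/juju": {"juju/utils", "juju/cmd"},
    "juju/romulus": {"juju/utils", "juju/juju"},
}

def land_order(graph):
    """Order in which to land PRs so each repo sees its deps already updated."""
    return list(TopologicalSorter(graph).static_order())
```

For this graph the order is unique: cmd, then utils, then juju/juju, then romulus — exactly the sequence described above, with dependencies.tsv bumped to the new commit at each step.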
[20:43] <perrito666> yup, ironically utils only depends on gnuflag but indeed has no dependency as you said, then there is terms-client, romulus and cmd
[20:44] <natefinch> and this is why only the top level should use dependencies.tsv.... because it's only the top level one that actually matters anyway.  It's not like juju runs the dependencies.tsv in juju/utils
[20:44] <perrito666> natefinch: nah, that is bogus
[20:44] <perrito666> if you create a library, it makes perfect sense that the library has dependencies.tsv
[20:45] <perrito666> godeps is just an imperfect method, it should know how to handle cascades
[20:45] <natefinch> I guess you may need dependencies.tsv to build juju/utils to be even able to check stuff in
[20:45] <perrito666> natefinch: a library has the same issue as a binary
[20:45] <perrito666> we need vendoring :p
[20:46] <natefinch> vendoring is good, but it doesn't fix this problem.  utils would still need to vendor its own dependencies if master of the things it depends on won't build together
[20:47] <natefinch> then you're just updating the vendor directories for each... not much better
[20:47] <balloons> natefinch, ok, I'll start with $$merge$$ on juju/cmd
[20:52] <natefinch> balloons: there, I gave you provisional LGTMs.  Just roll in the updates to dependencies.tsv as things get committed.
[20:52] <natefinch> balloons: and then feel free to land
[20:52] <balloons> on it
[20:54] <natefinch> balloons: why are you pushing these dependency updates, btw?  I mean, it's a good idea, but why you why now?
[20:54] <balloons> natefinch, ahh. I need it to make building a snap in launchpad possible
[20:54] <balloons> natefinch, there's a bug using bzr to pull depends inside the builder. This 'fixes' that bug
[20:55] <natefinch> balloons: I thought it might be that.  It's kind of nice, actually... the go tool complains when you tell it to download stuff from launchpad, since it's over http
[20:56] <balloons> right.. this cleans up things for everyone, and gets rid of the bzr dependency as well
[20:56] <balloons> so why not :-)
[20:56]  * natefinch tries not to cheer too loudly.
[20:57]  * katco quietly mentions that all our ci stuff is hosted in bzr
[20:58]  * balloons notes it should not only be in git, but in the same repo(s) as juju itself
[20:58]  * natefinch cries in the corner.
[20:58] <katco> balloons: go get 'em :)
[20:59] <perrito666> well it's also written in python, so if you are going for consistency you have a long road ahead
[20:59] <natefinch> I was going to mention that...  sigh.
[20:59] <katco> i don't know that it being written in python is a huge deal
[20:59] <niedbalski> dimitern, https://pastebin.canonical.com/162703/ latest run
[21:00] <katco> and this is coming from someone who doesn't know python
[21:00] <balloons> yea actually I think you should be glad they aren't written in go
[21:00] <natefinch> katco: it's a barrier to entry for our go developers
[21:00] <balloons> natefinch, looks like loggo depends is out of date; I'm going to update it too. build failed
[21:00] <natefinch> balloons: cool
[21:01] <natefinch> I'm outta here for a while.  back in 3-4 hours.
[21:01] <balloons> thanks natefinch-afk for the review
[21:02] <katco> natefinch: not a really high one imo. i'd sooner complain about the custom infrastructure built up for interacting with juju
[21:02] <balloons> mmm.. indeed katco
[21:08] <mup> Bug #1611514 opened: "local" charm schema should allow deploying previously deployed local charms <feature> <juju-core:New for cox-katherine-e> <https://launchpad.net/bugs/1611514>
[21:11] <alexisb> thumper, ping
[21:11] <thumper> yus?
[21:11] <alexisb> http://reviews.vapour.ws/r/5386/
[21:11] <alexisb> can you please review this for perrito666 ^^
[21:11] <thumper> k
[21:17] <thumper> perrito666: conflicts against master
[21:17] <thumper> perrito666: also, bigger than 500 lines
[21:17] <thumper> :)
[21:18] <perrito666> aghh I am never again changing code that is called all over the place
[21:21] <perrito666> thumper: fixing the merge, could you make an exception for the lines?
[21:22]  * thumper is reviewing
[21:22] <thumper> ssh
[21:22] <perrito666> I am prepared to pay in alcoholic beverages
[21:29] <perrito666> rick_h_: take a look at the branch, I fixed the issue
[21:29] <rick_h_> perrito666: ty, will have to wait until the morning
[21:29] <rick_h_> perrito666: but will do
[21:58] <alexisb> menn0-exercise, liking dev snap, the setup script is mighty handy
[22:18] <menn0> alexisb: yeah, it took a little while to figure out how to make that work (it's easy once you know how, however)
[22:57] <perrito666> we should rename ship-it to amen so I can ask "can I get an amen" when asked for a shipit
[23:02] <redir> perrito666: halleluja!
[23:07] <alexisb> perrito666, did thumper give you a ship it?
[23:07] <alexisb> if so I will second redir's halleluja
[23:08] <thumper> almost
[23:09] <thumper> have now with one potential change
[23:09] <alexisb> halleluja!
[23:11] <perrito666> alexisb: implementing the change after standup and then it's ok to go
[23:11] <perrito666> and we can use $$halleluja$$ for the merge
[23:11] <alexisb> :)
[23:34] <redir> axw: got a minute?
[23:34] <menn0> alexisb, thumper: tech board agenda now has your items for today's meeting
[23:34] <alexisb> thank you menn0
[23:34] <axw> redir: did you want to talk to me?
[23:35] <redir> yeah just a couple questions
[23:35] <redir> axw: ^
[23:35] <axw> redir: yup, hangout or here?
[23:36] <redir> either but HO is prolly faster
[23:37] <redir> unless you remember where we put region config in globalSettingsC and have an idea where I would get the region and cloud from in modelConfigSources
[23:37] <redir> axw: ^
[23:38] <perrito666> If the standup had been its usual length I could have finished my recipe :p
[23:39] <perrito666> Standup + cookingshow