[00:36] <axw> thumper babbageclunk: so, I'm a bit nervous about having gaps in tagging that would be introduced by Unset/Set. did you discuss my suggestion of just sending the version of the controller to UpdateController?
[00:36] <axw> thumper babbageclunk: only disadvantage I see to that is that the provider needs to remember all the old methods of tagging. but I think that's preferable to possible resource leakage
[00:39] <axw> thumper babbageclunk: finally (the clincher for me), some resources are tracked with things other than tags. e.g. OpenStack, security groups are tracked by formatting their names in a specific way. AFAIK that's immutable. so UpdateController would need to recreate the security group
[00:42] <babbageclunk> axw, thumper: so we'd need to create a new security group, move everything over to it and then get rid of the old one?
[00:42] <axw> babbageclunk: yep
[00:42] <axw> (unless you can rename things in openstack?)
[00:42] <babbageclunk> axw, thumper: yeah, that does seem like a pretty big problem for the unset/set approach
[00:43] <menn0> thumper: can you have another quick look at this one pls: https://github.com/juju/juju/pull/6839
[00:43] <axw> babbageclunk: ah you can rename instances, so maybe security groups too. but I'm a bit wary of expecting we can do that in all cases for every provider
[00:45] <axw> thumper: ready to chat whenever
[00:45] <babbageclunk> axw: Well really anything where we tag by naming doesn't work in the unset/set case anyway.
[00:49] <babbageclunk> axw: thumper made the point that it's possible we'd change how resources were tagged with the model as well - that makes update with a version harder too, doesn't it? Environ.AllInstances might not find the migrated machines because it would be looking for the model tag by the wrong key (or whatever).
[00:50] <babbageclunk> I need to go for a run to think about this, then I'll definitely need to pick people's brains some more.
[00:50] <babbageclunk> :(
[00:54] <axw> babbageclunk: I don't see how that's a problem. since you can only migrate to a controller >= version (right?), then as long as we keep a history in the provider of how tagging was done, we can determine how to identify the instances based on the source controller's version
[00:56] <babbageclunk> axw: Yeah, I think you're right - and I guess we only need to introduce that handling inside the provider once  we actually need to make that change in a provider.
[00:56] <axw> babbageclunk: right
[00:56] <babbageclunk> axw: ok, sold. Thanks! thumper I'm doing this ^
[00:57] <axw> babbageclunk: cool, no worries
[01:21] <thumper> axw, babbageclunk: re openstack security groups, are they created in the account, or controller?
[01:21] <thumper> if it is the account, then nothing changes
[01:21] <thumper> axw: still good to chat?
[01:22] <axw> thumper: not sure what you mean by "in the account or the controller". they're created by the controller for a model, and should be destroyed when the model or controller is destroyed
[01:22] <axw> thumper: yep
[01:22] <axw> thumper: https://hangouts.google.com/hangouts/_/canonical.com/andrew-tim?authuser=1
[01:22] <thumper> axw: the model's machines aren't moving, so any security group created for a model and the model's credentials are still equally valid on the target controller
[01:23] <axw> thumper: yeah, it's about ownership tho. what to do if you kill-controller on the source/target controller
[01:34] <thumper> menn0: got a few minutes?
[01:34] <menn0> thumper: yep
[01:34] <thumper> 1:1
[01:56] <redir> fu... our docs say you have to create the pflash drive, but it looks like xenial's version of libvirt-bin now supports creating one by default :|
[01:57] <thumper> redir: bugger... :-|
[01:57] <redir> I guess that would be good, less (code) is more
[01:57] <redir> not a huge deal just painful to live test
[02:38] <menn0> babbageclunk: could you take a quick look at this one please? https://github.com/juju/juju/pull/6839
[02:39] <babbageclunk> menn0: Sure
[02:39] <menn0> babbageclunk: thanks
[02:46] <rick_h> wallyworld: ping around?
[02:46] <wallyworld> hey
[02:46] <rick_h> wallyworld: hey, have a sec to give me a hand?
[02:47] <wallyworld> sure
[02:47] <wallyworld> a hand what?
[02:47] <rick_h> wallyworld: https://pastebin.canonical.com/176845/ trying to redo my maas setup here and bootstrap is stuck here for 20+min
[02:47] <rick_h> wallyworld: can't ssh to the machine with ubuntu@IP, but log seems to say it did work?
[02:48] <rick_h> wallyworld: so can't get at the log and I'm confused on wtf
[02:49] <wallyworld> rick_h: looks like the controller can't see streams.canonical.com
[02:50] <rick_h> hmmm, k the GUI fetch didn't error so assumed it worked
[02:51]  * rick_h figures out how to test that
[02:51] <wallyworld> i'm just guessing on the "fetching agent" bit
[02:51] <wallyworld> maybe that did succeed
[02:51] <wallyworld> but after that it also needs to connect to the archives
[02:51] <wallyworld> to get mongo
[02:51] <wallyworld> IIANM
[02:52] <rick_h> IIANM?
[02:52] <wallyworld> if i am not mistaken
[02:52] <wallyworld> maybe turning on debug would help. the info logging is a bit terse
[02:52] <rick_h> yea, that's with --debug on bootstrap
[02:52] <wallyworld> doh, of course
[02:53] <wallyworld> i *think* the fetching juju agent messages correlate with getting the tools binaries and getting mongo
[02:54] <wallyworld> the last step in getting jujud running
[02:54]  * rick_h tries to deploy a maas node w/o Juju and see if I can ssh to it and check the connectivity
[02:55] <wallyworld> that's a reasonable next step
[02:55] <wallyworld> each time there's been similar issues, from memory, it's been network/routing related
[02:55] <rick_h> k, thanks
[03:01] <rick_h> wallyworld: ty, looks like I needed an iptables rule to enable nat on the maas machine.
[03:01] <rick_h> wallyworld: at least ping now works so hopefully it'll bootstrap successfully now
[03:01] <wallyworld> ah, right
[03:01] <wallyworld> the symptoms seemed to fit not having outbound traffic
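For reference, the sort of NAT rule rick_h likely needed looks like the following sketch; the interface name and subnet are assumptions, adjust them for the actual MAAS host.

```shell
# Enable IP forwarding and masquerade traffic from the MAAS-managed
# subnet (assumed 192.168.100.0/24) out the external interface
# (assumed eth0) so deployed nodes can reach streams.canonical.com
# and the archives.
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o eth0 -j MASQUERADE

# Rules added this way are lost on reboot (the "settings didn't keep"
# problem below); persisting them, e.g. via iptables-persistent,
# avoids re-hitting this after a restart.
```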
[03:01] <rick_h> yea
[03:02] <rick_h> at least it works now on a manually started maas node, so checking to see if the juju node works now
[03:02]  * wallyworld crosses fingers
[03:03]  * rick_h wishes there was a clear success/fail message on these things (like fetching the gui, agents, etc)
[03:04] <wallyworld> maybe it hadn't timed out yet, not sure
[03:04] <wallyworld> but it does need to surface the connectivity issue better
[03:05] <babbageclunk> menn0: LGTM
[03:05] <menn0> babbageclunk: tyvm
[03:05] <rick_h> wallyworld: "Installing Juju machine agent"
[03:06] <rick_h> seems like we're in business woot
[03:07] <wallyworld> yay
[03:08] <rick_h> ty wallyworld
[03:08] <wallyworld> didn't do much :-)
[03:08] <rick_h> well you poked me to double check that egress. I thought I had it working but I had rebooted and I guess the settings didn't keep
[03:17] <redir> this one should be an easy review... https://github.com/juju/juju/pull/6860/files
[03:17]  * redir eods
[03:17] <redir> see you tomorrow juju-dev
[03:49] <menn0> axw: could you pls take a look at this one? https://github.com/juju/juju/pull/6859
[03:49] <axw> menn0: looking
[03:50] <menn0> axw: thanks
[03:53] <axw> menn0: done
[03:53] <menn0> axw: thanks
[06:58] <jam> balloons: it looks like !!build!! is blocked again
[08:04] <mup> Bug #1658549 opened: Security issue: jujud is not owned by a user on the system <juju-core:New> <https://launchpad.net/bugs/1658549>
[09:56] <perrito666> Morning
[10:37] <mup> Bug #1658549 opened: Security issue: jujud is not owned by a user on the system <juju-core:New> <https://launchpad.net/bugs/1658549>
[11:07] <rogpeppe1> if I want environment isolation and a fake home, which testing suite should I be using?
[11:08] <rogpeppe1> perrito666: any idea?
[11:27] <perrito666> rogpeppe1: sorry I dont know which one has a home
[11:28] <rogpeppe1> perrito666: i think i worked out the answer: use IsolationSuite and call MakeFakeHome
[11:28] <rogpeppe1> perrito666: FakeHomeSuite doesn't provide env isolation unfortunately
[11:29]  * rogpeppe1 wishes that all the fixture suites were orthogonal to one another
[11:46] <rogpeppe1> here's a feature-branch PR to add support for preferred authentication domains to the juju client:  https://github.com/juju/juju/pull/6862
[12:56] <rogpeppe1> jam: i've replied to your comments on https://github.com/juju/juju/pull/6862
[12:57] <jam> rogpeppe1: is there a discussion response or just a code response?
[12:57] <rogpeppe1> jam: both
[13:29] <hoenir> Morning.
[13:29] <perrito666> hoenir: morning
[15:27] <ahasenack> hi juju devs, do you have commit rights on go-yaml?
[15:27] <ahasenack> we have an issue with it, and apparently there is a PR for it too
[15:28] <ahasenack> specifically, https://github.com/go-yaml/yaml/pull/223
[15:28] <ahasenack> which affects juju
[15:28] <ahasenack> issue is https://github.com/go-yaml/yaml/issues/157
[15:29] <rick_h> ahasenack: no, I think we have to work to get the PR looked at and landed
[15:33] <ahasenack> rick_h: you don't have commit rights, you mean? Your team
[15:34] <rick_h> ahasenack: I don't think anyone in juju land does other than gustavo
[15:34] <ahasenack> ok, thanks
[15:34] <rick_h> ahasenack: so to push this you'll need to reach out to him
[15:53] <niemeyer> ahasenack, rick_h: go-yaml is long due some maintenance
[15:57] <niemeyer> ahasenack: Not sure if that fix is the right one..
[15:58] <ahasenack> niemeyer: tbh I haven't looked at the PR itself
[15:58] <ahasenack> I was going to point out just the issue and someone whispered to me there was a PR for it already :)
[16:36] <redir> perrito666: https://github.com/juju/juju/pull/6860 PTAL if you have a minute
[17:04] <alexisb> redir, ping
[17:10] <perrito666> redir: sure
[17:10] <redir> alexisb: pong
[17:10] <redir> perrito666: tx
[17:10] <alexisb> perrito666, redir I need someone to pick up the OSX regression
[17:11] <redir> issue?
[17:11] <alexisb> https://bugs.launchpad.net/juju/+bug/1659016
[17:11] <alexisb> ^^^
[17:11] <alexisb> do either of you have an OSX workstation?
[17:11] <perrito666> alexisb: nope
[17:12] <redir> alexisb: https://bugs.launchpad.net/juju/+bug/1659016 this one?
[17:12] <redir> alexisb: I have an ancient macbook pro
[17:12] <alexisb> redir, yes that one
[17:12] <redir> might take a while to bring it into this decade
[17:30] <redir> brb
[17:36] <redir> back
[17:42] <natefinch> heh
[17:42] <natefinch> oops wrong window
[17:42] <redir> heh
[17:44] <perrito666> a bless on develop \o/
[17:44] <alexisb> \o/
[17:45] <perrito666> followed by a curse, this parallel testing thing doesn't give much room for happiness
[17:45] <alexisb> perrito666, I know you played a big part in that bless, so thank you
[17:45] <perrito666> alexisb: I haven't submitted code in the past week or so, so I don't think so :p
[17:45] <perrito666> I have a couple of PRs dancing around develop
[17:46] <alexisb> perrito666, this was way more than a weeks worth of work
[17:46] <alexisb> dont sell yourself short
[17:47] <perrito666> alexisb: I never do, I do rent myself short :p
[17:47]  * perrito666 is a leasing person
[17:47] <rick_h> lol, yea way more than a week
[17:52] <perrito666> I wish that bless would have come at an hour where I wouldnt feel guilty about opening a beer
[17:52] <perrito666> :p
[17:52] <rick_h> perrito666: lol
[17:52] <rick_h> perrito666: there is no guilt
[17:53] <perrito666> rick_h: its 3pm and I have to be up and running until standup at 20:45 so there is a little guilt involved
[17:53] <rick_h> perrito666: 3pm for you?
[17:53] <perrito666> yup
[17:53] <rick_h> I don't know why but I thought you were EST or Central
[17:54] <perrito666> rick_h: GMT-3, whatever that means in non standard TZs :p
[17:54] <alexisb> perrito666, I expect to see a celebratory beer at standup tonight
[17:54] <perrito666> alexisb: well if you force my hand...
[17:54] <rick_h> heh, I forget how far over mexico kicks east
[17:55] <alexisb> perrito666, :)
[17:55]  * rick_h thinks of things more in a straight line which is false brokenness
[17:55] <perrito666> rick_h: yes, you have mercator to blame for that
[17:55] <rick_h> redir: ping
[17:55] <perrito666> also my country is in the wrong timezone for dumb political reasons :p
[17:56] <redir> rick_h: pong
[17:56] <rick_h> redir: what's your LP username?
[17:56] <redir> reedobrien
[17:56] <redir> i think
[17:56] <rick_h> redir: k, that worked ty
[17:56] <redir> yeah
[17:56] <rick_h> redir: heads up that guimaas is rebuilt on newer maas, a few nodes aren't running atm
[17:57] <redir> I used to also have redir but had it merged
[17:57] <rick_h> redir: but have it working with 5 of the 8 nodes with juju 2.1beta4
[17:57] <rick_h> redir: so if you ever need to tinker again it should work like before, same machine/url/etc
[17:57]  * redir does peanutbutter and jelly banana dance
[17:57] <redir> thanks rick_h
[17:58] <rick_h> redir: I've got a new nuc3 coming tomorrow and a video cable to get 7 & 8 back going so by end of tomorrow should have all 8 working fine with modern tool stack
[17:59] <redir> awesome
[17:59] <redir> rick_h: in the mean time I've set up vmaas on my workstation and on an old box, which works great except it has only one nic, so I can only test it from itself.
[17:59] <redir> well without a lot of twiddling
[18:02] <rick_h> redir: yea, always good to have around, but also if we need real hardware there's some sitting free-ish
[18:02] <redir> rick_h: yes, tyvm for letting me know it is back.
[18:02] <redir> it is faster than my vm
[18:02] <redir> :)
[19:28] <redir> lunch
[19:28] <perrito666> redir: k
[20:15] <thumper> morning
[20:24] <redir> thumper: o/
[20:27] <wallyworld> thumper: quick! merge develop to staging, we have a bless!
[20:27] <thumper> wallyworld: that is done automatically
[20:28] <wallyworld> well, i should check then to see if it has happened
[20:29] <wallyworld> looks like it has
[20:48] <redir> thumper: wallyworld easy review https://github.com/juju/juju/pull/6863
[20:49] <thumper> redir: shipit
[20:49] <wallyworld> lgtm
[20:49] <redir> tx
[20:50] <redir> someone needs to enable their pre-push hook
[20:58] <redir> staging fails pre-push too
[20:58] <redir> but for unreachable code
[21:10] <redir> looks like a fine merge problem
[21:11] <natefinch> Anyone understand this merge error?  http://juju-ci.vapour.ws:8080/job/github-merge-juju/10041/artifact/artifacts/windows-err.log
[21:11] <natefinch> sinzui, abentley ^
[21:14] <sinzui> natefinch: remember to ignore the -err., use the -out http://juju-ci.vapour.ws:8080/job/github-merge-juju/10041/artifact/artifacts/windows-out.log
[21:15] <natefinch> sinzui: sorry, the console output said to see the -err :)
[21:15] <natefinch> sinzui: so, looks like mongo just bit the dust.  retry I suppose?
[21:15] <sinzui> natefinch: yeah, that will be fixed one day
[21:15] <sinzui> natefinch: yes, I agree to retry.
[21:17] <abentley> redir: It failed: http://juju-ci.vapour.ws/job/github-merge-juju/10100/console
[21:20] <redir> abentley: ?
[21:22] <abentley> redir: Your fix/prepush-failures
[21:24] <redir> oh, yes, but because mongo not because the prepush hook failed
[21:25] <redir> I understand that if someone fails to setup their pre-push hook or uses the --no-verify flag that things can make it into a PR,
[21:25] <redir> but I would expect that the CI pipeline would run the verify script before bothering to run tests...
[21:26] <abentley> redir: Well, you know what happens when you assume...
[21:27] <redir> abentley: I was expecting, not assuming :)
[21:27] <redir> poor expectations on my part
[21:27] <abentley> redir: Maybe the verify script can become part of make test?
[21:30] <redir> abentley: seems reasonable that there should be some target that runs verify and builds, all together
[21:33] <redir> abentley: looks like sinzui found waldo
[21:37] <redir> this is the staging thing I see when updating my fork https://github.com/juju/juju/compare/staging...reedobrien:fix/prepush-on-staging?expand=1
[21:42] <sinzui> redir: the fix is in place
[21:56] <redir> sinzui: thanks!
[22:44] <babbageclunk> thumper: the azure provider tags at least 8 different kinds of things with the controller id.
[22:44] <thumper> hazaah
[22:44] <babbageclunk> thumper: the destroy controller handles it by just destroying the resource group to kill everything in one fell swoop.
[22:45] <thumper> sigh
[22:45] <babbageclunk> thumper: do you know whether there's something similar to update lots of things?
[22:45] <babbageclunk> thumper: all at once?
[22:45] <thumper> no, but axw might
[22:45] <babbageclunk> axw: ^^
[22:47] <babbageclunk> oh, probably a bit early for him yet
[22:47] <perrito666> babbageclunk: sounds about the time he gets up, our previous standup used to be in 15 mins from now
[23:14] <axw> babbageclunk: no, I don't think there is
[23:16] <axw> babbageclunk: the tags are also for informational purposes, FYI (i.e. so people navigating through the portal can tie things back to juju land)
[23:17] <babbageclunk> axw: stink, I didn't think so. What I'm doing at the moment is listing all of the things in the resource group and updating the tag for each, which requires dispatching on the GenericResource.Type
[23:18] <babbageclunk> axw: That way at least we'll be able to log an error if a new resource type gets tagged but the code to update it hasn't been added.
[23:19] <axw> babbageclunk: you can use https://github.com/Azure/azure-sdk-for-go/blob/master/arm/resources/resources/resources.go#L191 to update arbitrary resources
[23:19] <axw> babbageclunk: is that what you're doing?
[23:20] <babbageclunk> axw: Ooh, yes, that's what I was hoping to find - thanks! Removes the need to dispatch by the type.
[23:20] <axw> babbageclunk: I'm not sure how it handles the non-common resource properties
[23:21] <axw> babbageclunk: the azure API supports PATCH, but I don't think it's exposed by the SDK at all
[23:22] <babbageclunk> axw: It looks like you pass in a generic resource - I hope it'd only update the things that are on there and leave everything else alone? Otherwise how could you use it?
[23:22] <axw> babbageclunk: I hope so too. dunno otherwise
[23:23] <axw> babbageclunk: I can imagine how it'd work when making the HTTP request: you would just add the properties free-form
[23:23] <axw> babbageclunk: but not using the Go SDK
[23:23] <axw> babbageclunk: oh! never mind, GenericResource has a "properties" attribute
[23:23] <axw> babbageclunk: so if you just preserve that, you're good
[23:24] <babbageclunk> axw, awesome. Oh man, that documentation is terrible. "CreateOrUpdate creates a resource." Is that all it does?
[23:24] <axw> babbageclunk: and now that I think of it, you *could* do it all in one shot using a template deployment. I think you should see how it goes without that first though
[23:25] <babbageclunk> axw
[23:25] <babbageclunk> oops
[23:25] <axw> babbageclunk: heh :/  it's auto-generated from swagger
[23:25] <babbageclunk> axw: ok, will do - I don't think it'll be too bad like this.