[00:03] wallyworld: tyvm for the review. You're right about the logging/error handling. I'll improve it.
[00:03] menn0: bp, i got distracted and didn't let you know i had done it, sorry
[00:03] *np
[00:04] wallyworld: np, I was busy with the next thing anyway
[01:19] wallyworld: are you working on any of those bugs? just don't want to double up
[01:20] axw: i had good intentions but haven't started yet
[01:20] are they legit issues?
[01:20] wallyworld: nps. yes, I think so.
[01:20] ok
[01:28] wallyworld: actually, create-model would not work on master for manual either
[01:28] wallyworld: but meh, I can fix it on admin-controller-model with one line change
[01:28] axw: that was the point i tried to make and they were not wanting to ack that
[01:28] ok that would be good
[01:32] axw: there is also a bug for create-model on azure bug 1558769
[01:32] Bug #1558769: unable to create-model in azure
[01:33] axw: is it related and will be fixed by the one line change above? :D
[01:33] anastasiamac: I saw, thank you. I think wallyworld already fixed it, but I'll double check
[01:33] \o/
[01:33] anastasiamac: nah, the manual one is specific to manual
[01:33] axw: thnx :D
[01:43] wallyworld: sorry, actually, the problem is worse on admin-controller-model. you can't even bootstrap, because we try to create a hosted model automatically
[01:44] axw: understood. but root cause comes from master
[01:44] the test would have seem it if they had existed
[01:44] seen
[01:44] wallyworld: you can't create-model on master, you can't even bootstrap on this branch. yes, we would have caught it earlier if we tested create-model in all substrates
[01:46] axw: indeed. but i want to ensure the bug is correctly targeted - i am tired of pushback to fix issues in feature branches which blocks work where the root cause comes from master
[01:47] oh yeah i found https://tracker.debian.org/pkg/golang-golang-x-tools yesterday i think
[01:48] whoops
[02:38] wallyworld: just a few more small things on the PR
[02:38] ok
[02:39] axw: the maas agent-name thing is because we grab the controller model and end up calling PrepareForCreateEnvironment() which doesn't like the agent name being there
[02:39] i need to rework it a bit
[02:39] wallyworld: okey dokey
[02:40] wallyworld: I thought we didn't copy across attrs if they were restricted?
[02:40] or ... non restricted
[02:41] axw: that could be - but it seems we do now
[02:41] likely a bug from me supporting new create model
[02:44] wallyworld: possibly also in the code I added to agent/agentbootstrap
[02:44] yeah, it won't take much to fix i don't think
[03:13] axw: http://reviews.vapour.ws/r/4222/
[03:14] wallyworld: looking
[03:14] wallyworld: I'm looking at the other bug btw
[03:14] ta
[03:15] wallyworld: I think your change means the credentials won't be copied across
[03:15] wallyworld: and things from --config
[03:16] isn't the stuff in --config in HostedModelConfig arg?
[03:16] I don't think so, but I'll check
[03:16] i may need to do credentials though
[03:16] wallyworld: nope, it's not
[03:16] damn, what is in that arg?
[03:17] wallyworld: model name, uuid
[03:17] wallyworld: it's set in cmd/juju/commands/bootstrap.go
[03:17] hmmm, ok. may need to add the --config to that arg then maybe
[03:17] wallyworld: and credentials...
[03:17] yup
[03:18] wallyworld: problem is how to separate credentials from things like maas-agent-name
[03:18] on bootstrap we don't need to worry right?
[03:19] and it's taken care of in create model that is used by create-model command
[03:20] maas-agent-name is added in PrepareForCreateEnvironment. on bootstrap, it won't be there in passed in config, or will error if it is. in create-model we create a skeleton config
[03:20] wallyworld: how are you going to get the credential attrs?
[03:21] need to look into that
[03:21] wallyworld: rhetorical question. the answer is (currently) to call EnvironProvider.BootstrapConfig
[03:22] wallyworld: that adds the credential attrs to the config given a cloud.Credential
[03:22] ok
[03:22] wallyworld: ... but it also adds maas-agent-name
[03:22] ah damn ok
[03:23] wallyworld: I think what we could do is this: call RestrictedConfig, make sure we include those, and delete anything that's not passed in via --config
[03:23] except for credentials maybe
[03:24] wallyworld: I was thinking they're restricted... but yeah, they're not
[03:24] le sigh
[03:24] sigh indeed, i found out the same thing
[03:36] axw: it's all good. all i need to do is include --config in the hosted model config arg. (c ModelConfigCreator) NewModelConfig does the right thing with credentials
[03:38] wallyworld: ah, I see
[03:39] wallyworld: uhm, although, it's kinda wrong :/
[03:39] wallyworld: those credential attributes are not supposed to map 1:1
[03:40] they sort of do atm in common usage
[03:40] but they don't have to that's true
[03:40] wallyworld: they do atm, but it's an abstraction breakage, and means we're buggered if we change something
[03:40] yep
[03:41] wallyworld: I think we may need something on EnvironProvider to identify which things to carry across
[03:41] was a quick win, we need to fix the tech debt
[03:41] solves the common case for admins creating new models
[03:41] wallyworld: or we update PrepareForCreateEnvironment to take a Credential too
[03:41] that might be better
[03:42] wallyworld: or ... maybe separate BootstrapConfig even further to convert Credential to attrs
[03:42] or that
[03:43] wallyworld: main problem is that some cloud.Credentials only make sense at the client (e.g. ones that refer to files)
[03:43] indeed
[03:43] tho we should convert them
[03:43] we need to think it through
[03:44] wallyworld: https://bugs.launchpad.net/juju-core/+bug/1558803/comments/1
[03:44] Bug #1558803: Manual deploy on ppc64el wants wrong package and agents
[03:44] the file values are parsed server side, need to do that client side
[03:45] did you find an issue?
[03:45] wallyworld: a pretty minor one - see comment
[03:50] wallyworld: I've retargeted and lowered importance
[03:50] axw: sorry, got pinged, still to look
[03:51] axw: it's not even relevant to admin controller branch is it
[03:51] wallyworld: no, I've removed that
[03:51] ah cool
[03:51] wallyworld: and set to medium, and removed regression tag
[03:51] * wallyworld hits refresh
[03:52] axw or wallyworld: http://reviews.vapour.ws/r/4223/ please
[03:52] menn0: i can look in a bit, just wip atm
[03:53] wallyworld: thanks
[03:53] in that case, I'll go get lunch
[03:54] wallyworld: quick ping. Do you remember how to tell a test what tools to use? It looks like EC2 tests were relying on side-effects from some other test to set up tools
[03:58] jam: that has recently changed, we were relying on cloud (provider storage) which is gone. there are helpers to upload tools to state
[03:58] i can look up the methods
[03:59] wallyworld: so with my recent change to fix SetUp stuff, provider/ec2 is not finding tools.
It looks like the FakeVersionNumber is 1.99.0 but it only sees 2.0-beta3 tools in the faked out location.
[03:59] looks like something is faking out tools before we fake out the version number
[03:59] trying to sort out where that may be
[04:00] jam: i'll take a look, i am not deeply familiar as i didn't see the final code
[04:00] environs/jujutest/livetests.go SetUpTest looks like it is doing the right thing (calls FakeVersionNumber before it calls UploadFakeTools)
[04:00] wallyworld: k, I have the feeling this is old semi-rotten code
[04:00] axw: pushed new changes to that maas fix
[04:00] ah, maybe it's me
[04:00] I moved one of the PatchValue calls
[04:01] ah
[04:01] because it was happening before SetUp
[04:01] joy
[04:01] which is unsafe, because we haven't called IsolationSuite.SetUp yet
[04:01] a lot of the tests rely on patching version
[04:01] Bug #1558901 opened: TestAddLocalCharmSuccess read has been closed
[04:01] so we haven't told it what test we're about to run. I think I can just patch for the lifetime of the Suite and we'll be ok
[04:02] i think that sounds ok
[04:02] nope... :(
[04:03] wallyworld: we had a lot of tests that were doing "SetUpSuite() { base.SetUpTest() }"
[04:03] those were interesting.
[04:03] wot?
[04:03] wow
[04:03] i hope i didn't write any
[04:04] wallyworld: FakeXDGHomeDir was one of them
[04:04] apparently broken since 2014 according to blame.
[04:04] huh. that's actually old code renamed
[04:04] wallyworld: yeah, a bunch of our base infrastructure was actually wrong.
[04:04] at yeast it will be right now :-)
[04:05] wallyworld: BaseSuite.SetUpSuite() was calling PatchValue(utils.OutsideAccess)
[04:05] lol yeast. i can' type
[04:05] which would be reset on the first test that ran
[04:05] except we leaked it somewhere early
[04:05] so it was always false
[04:05] oh dear
[04:05] wallyworld: axw: before I can bootstrap master tip on my openstack, I need to add-cloud and i guess add-credential
[04:05] so it was Correct, but for the wrong reasons.
[04:05] anyway, provider/ec2 is the last bastion of failing tests, so I'm close.
[04:06] where do I put auth-url? in my-cloud.yaml or my-credential.yaml?
[04:06] anastasiamac: that's a credential attribute
[04:06] k
[04:06] anastasiamac: if you have a novarc and master, it will auto detect
[04:06] wallyworld: auth-url - isn't that the URL to hit, which is a cloud attribute?
[04:06] (sounds like a per-region attribute)
[04:07] jam: we have endpoint url which i think is different but i haven't looked specifically at openstack for a bit so could be misremembering
[04:08] jam: but i think you may be right
[04:08] wallyworld: I have novarc but do I need to have it somewhere in a special place?
[04:09] ~/.novarc
[04:09] :D
[04:09] so in this case, I do not need to add-credentials?
[04:10] Can I get a review? https://github.com/juju/juju/pull/4783
[04:10] (this is for the restore failure on master)
[04:11] jam: yes, i looked at the code and confirmed you are right
[04:11] cherylj: looking
[04:12] thanks, wallyworld. I had to shuffle things around to be able to mock out stuff for testing
[04:12] turns out that there really aren't tests for restore :(
[04:15] wallyworld: ...
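A minimal sketch of the per-test patching pattern being discussed above (the suite and variable names are illustrative, not the actual juju test code): PatchValue registers its restore with the current test, so it belongs in SetUpTest after the base suite's SetUpTest has run; calling it from SetUpSuite means the patch is silently undone when the first test is torn down.

```go
package example_test

import (
	stdtesting "testing"

	jujutesting "github.com/juju/testing"
	gc "gopkg.in/check.v1"
)

func Test(t *stdtesting.T) { gc.TestingT(t) }

// agentVersion stands in for the package-level value the tests fake out.
var agentVersion = "2.0-beta3"

type toolsSuite struct {
	jujutesting.IsolationSuite
}

var _ = gc.Suite(&toolsSuite{})

func (s *toolsSuite) SetUpTest(c *gc.C) {
	// Base SetUpTest first, so the cleanup that PatchValue registers is
	// attached to this test; patching in SetUpSuite would be reverted as
	// soon as the first test's teardown ran.
	s.IsolationSuite.SetUpTest(c)
	s.PatchValue(&agentVersion, "1.99.0")
}

func (s *toolsSuite) TestSeesFakedVersion(c *gc.C) {
	c.Assert(agentVersion, gc.Equals, "1.99.0")
}
```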
provider/ec2/ LocalLiveTests embeds LiveTests but doesn't call LiveTests.SetUpTest()
[04:15] win
[04:16] anastasiamac: if you have ~/.novarc, or if you just source your novarc in your shell, you can do "juju bootstrap openstack"
[04:16] cherylj: only ci tests it seems
[04:16] yeah
[04:16] wallyworld: eating lunch, will look again soon
[04:17] np, thanks
[04:17] cherylj: why is there a "fakeEnviron" in live code?
[04:18] jam: for mocking out the environ in the test. We only set it in tests
[04:18] wallyworld: actually I can review that while eating :)
[04:18] LGTM
[04:18] axw: i have both ~/.novarc and sourced it in the shell, but when I "juju bootstrap openstack" I get "ERROR missing auth-url not valid
[04:18] "
[04:19] anastasiamac: that would suggest you're missing OS_AUTH_URL from the file
[04:19] anastasiamac: where did you get the novarc file from?
[04:19] cherylj: have you done manual testing of an environment that really hasn't ever existed?
[04:19] axw: it would... but I don't
[04:19] it's in the file
[04:20] that was supplied
[04:20] oh, that's odd
[04:20] anastasiamac: does it have "export OS_AUTH_URL" in it per chance?
[04:20] anastasiamac: wallyworld fixed a bug yesterday to do with that
[04:20] :q
[04:20] oops
[04:21] axw: yes, everything in the file has an export... and m running from master tip as of 1hr ago :D
[04:22] cherylj: i've made a request to alter things slightly to better set up for testing
[04:22] anastasiamac: ok then, I dunno. sounds like a bug
[04:23] axw: anastasiamac: it's a bug that's been there for a while
[04:23] anastasiamac: try moving the file away from ~/.novarc and sourcing it, and see if that makes a difference
[04:23] DetectRegions() only calls CredentialsFromEnv()
[04:23] cherylj: reviewed.
[04:23] not the new DetectCredentials() code
[04:23] wallyworld: anastasiamac said she sourced it as well tho
[04:24] don't believe it :-)
[04:24] since that should have worked
[04:24] wallyworld: right....
[04:24] unless there's a 3rd problem
[04:25] i think openstack has been used with beta2 which is where sourcing the novarc would have been required
[04:25] wallyworld: m so calling u for that!
[04:35] jam: any idea why SetInstanceStatus is so slow? it's all local, so should be super fast :(
[04:35] axw: I don't know specifically, but IIRC the spec wanted to rate limit charms calling it
[04:35] given how accurate it is to 1.0s
[04:36] I think it's just a time.Sleep somewhere.
[04:37] jam: ok, I'll dig. we should probably use the ratelimit package with a token bucket
[04:38] well, it could be that, too. I just get the feeling it is explicitly limiting itself, which interacts poorly with fast downloads and a set number of events
[04:54] wallyworld: and now the test suite "passes" but a test is taking 60s, investigation looks like it is accessing my EC2 credentials and launching a real instance...
[04:55] (ec2.localLiveSuite) not so local
[04:55] jam: localLive tests, from memory, are run both local and live
[04:55] that distinction goes waaaaaay back
[04:56] i wish we dropped live tests, we don't need them now
[04:56] i guess they were added for juju 0.1 before we had CI
[05:06] wallyworld: Can you take another look? http://reviews.vapour.ws/r/4224/
[05:07] wallyworld: Also, I think that the backup / restore commands should be controller commands, not model commands
[05:07] but that's outside the scope of this PR
[05:09] cherylj: looking
[05:13] cherylj: thanks for that fix. ideally we'd not have the func as an arg to NewRestoreCommand.
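A rough sketch of the token-bucket idea floated above for SetInstanceStatus (this is not the actual juju code): github.com/juju/ratelimit lets a burst of calls through immediately and only throttles sustained callers, unlike a fixed one-second sleep.

```go
package main

import (
	"fmt"
	"time"

	"github.com/juju/ratelimit"
)

func main() {
	// Allow one status update per second on average, with a burst of 5.
	bucket := ratelimit.NewBucket(time.Second, 5)

	for i := 0; i < 8; i++ {
		bucket.Wait(1) // blocks only once the burst allowance is used up
		fmt.Printf("status update %d at %s\n", i, time.Now().Format("15:04:05.000"))
	}
}
```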
There'd be a NewRestoreCommandForTest() in export_test. see detectcredentials.go in juju/cmd/juju/cloud and also the NewDetectCredentialsCommandForTest in export_test.go
[05:15] wallyworld: ok, I see. Give me a few and I'll update it
[05:22] wallyworld: okay, maybe 3rd time's the charm
[05:22] :-)
[05:22] looking
[05:26] cherylj: very small issue, lgtm
[05:26] ha, sorry it's taking so long. I'm so very tired
[05:29] wallyworld: hm, looks like it isn't talking to EC2, just that we have a 60s timeout waiting for a security group that is being used to be released
[05:30] maybe a setup was supposed to be changing that timeout
[05:30] (just ran in offline mode and had the same result)
[05:30] could be
[05:30] we have so many tests with clocks to fix
[05:30] cherylj: indeed, you should not be at work
[05:31] and with that... bedtime
[05:31] see ya
[05:32] axw: also, i did end up needing to remove all the CompatSalt stuff yesterday, as bootstrap still relied on it and it needed to be killed
[05:32] wallyworld: cool
[05:32] wallyworld: does it need re-reviewing?
[05:32] axw: nah, was 99% cut
[05:32] i tested live
[05:33] wallyworld: sounds good
[05:33] i tested i could log into mongo console from machine 0 as well as a deployment
[05:54] axw: awesome, the interactive add credential branch doesn't appear to compile on go 1.2
[05:55] wallyworld: ? :/
[05:55] compiles and runs locally
[05:55] http://juju-ci.vapour.ws:8080/job/github-merge-juju/6912/console
[05:55] i'll have to get go 1.2 to repro
[05:56] unless i'm being dumb
[05:57] wallyworld: nothing obvious to me
[05:57] sigh
[05:59] wallyworld: k. there is definitely a problem with 60s to destroy, but it exists in Master as well, so my code is better than previous
[06:00] jam: i've found and fixed a few bad tests lately too. our code has many :-(
[06:08] Bug #1558924 opened: provider/ec2 localLiveSuite.TestGlobalPorts localLiveSuite.TestStartStop takes 60s
[07:25] axw: am merging master into admin-controller-model... soooo many conflicts \o/
[07:25] will have to finish after soccer
[07:25] wallyworld: thank you
[07:26] wallyworld: my current status is refactoring modelcmd/juju code in between bouts of sneezing
[07:26] need to refactor so I can pass in alternative credentials
[07:26] ok
[07:27] i still need to install go 1.2 and resolve that other issue also
[08:07] axw: is this ur juju allergies making u sneeze? :P
[08:08] anastasiamac: naw, got a sleep-deprivation induced cold I think
[08:10] axw: :( sounds awful
[09:22] frobware: ping
[09:22] voidspace: pong
[09:22] voidspace: I pushed your branch
[09:22] frobware: yeah, I see it
[09:23] frobware: no CI run yet though
[09:23] voidspace: as upstream/drop-maas-1.8-support-from-juju2
[09:23] voidspace: nope...
[09:23] frobware: I'm writing up a high-level sketch of the maas 2 work
[09:23] voidspace: but then again our multi-nic branch only started running this morning
[09:24] voidspace: thanks! \o/
[09:24] frobware: the gomaasapi test server needs not far short of a full rewrite, so that's not a small chunk of work
[09:24] :-/
[09:24] frobware: I think a new one, with some shared code, will be cleaner than a single implementation or an interface based approach
[09:26] voidspace: given the time remaining is that at all feasible? Or even desirable?
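For reference, a hedged sketch of the export_test.go pattern pointed at above (all names here are made up, not the real restore command): the production constructor keeps a narrow signature, and a *ForTest constructor in export_test.go, compiled only with the package's own tests, is where fakes get injected.

```go
// restore.go (production code; illustrative only)
package backups

// Environ is a stand-in for whatever the command really talks to.
type Environ interface {
	Name() string
}

type environGetter func(name string) (Environ, error)

type restoreCommand struct {
	getEnviron environGetter
}

// NewRestoreCommand keeps its public signature free of test-only hooks.
func NewRestoreCommand() *restoreCommand {
	return &restoreCommand{getEnviron: liveEnviron}
}

func liveEnviron(name string) (Environ, error) {
	// ... real lookup elided in this sketch ...
	return nil, nil
}
```

```go
// export_test.go (only built for this package's tests)
package backups

// NewRestoreCommandForTest lets tests swap in a fake environ getter without
// widening the production constructor.
func NewRestoreCommandForTest(getEnviron environGetter) *restoreCommand {
	return &restoreCommand{getEnviron: getEnviron}
}
```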
[09:28] frobware: I think it's the only reasonable approach
[09:28] frobware: the TestServer has about thirty attributes, most of which won't be used for 2.0
[09:29] frobware: so a monster with 60 fields and branches in every method will be impossible to work on
[09:29] frobware: that's what I think
[09:29] frobware: a lot of the code can be shared (all the endpoint stuff)
[09:29] frobware: but maybe there's a middle ground somewhere
[09:30] wallyworld: https://github.com/juju/juju/pull/4761 but for some reason rb didn't pick it up, perhaps the conflict?
[09:30] voidspace: realistically I think we should consider that middle ground, otherwise the potential of adding new bugs is a concern
[09:31] frobware: I don't think hugely branching code is less likely to introduce bugs
[09:31] that's my worry
[09:32] anyway
[09:32] morning
[09:32] TheMue: o/
[09:42] voidspace, frobware, dimitern, dooferlad: would one of you please check the AddressAllocation test changes in jujud/agent, in http://reviews.vapour.ws/r/4110/diff/5/?page=2 ?
[09:43] ashipika, by the way, I am deeply suspicious of what github thinks about that branch -- I am planning to just apply the correct diff to MADE-model-workers in a new branch -- will that inconvenience you horribly?
[09:44] fwereade: nah, not really.. just give me the new branch and i'll rebase :)
[09:45] ashipika, cool, just a heads up :)
[09:45] fwereade: thanks!
[09:45] fwereade: when do you expect this to land?
[09:46] ashipika, I have a ship it, so hopefully today
[09:46] fwereade: \o/
[09:47] ashipika, although that's just onto MADE-model-workers, which needs to be updated with latest trunk and then make CI happy
[09:48] fwereade, looking
[10:03] frobware: stdup?
[10:03] voidspace: oops, sidetracked. omw
[10:10] dimitern, for reference: I dropped the skipped tests, and added one test that enables the feature flag and checks there's an address-cleaner worker running alongside all the usual ones
[10:10] dimitern, I called it address-cleaner because that seemed to be its main job, let me know if I misunderstood anything
[10:33] fwereade, that sounds good to me
[10:34] dimitern, ok, cool, will be submitting it as RB4225 in a few seconds then :)
[10:37] fwereade, go for it :)
[10:37] frobware, voidspace, there's the fix for those panics: http://reviews.vapour.ws/r/4226/
[10:58] dimitern: will stash the lxd changes and try your PR now
[10:58] frobware, cheers
[11:00] dimitern: do you know why in the container the devices are not ifup'd?
[11:01] dimitern: I now have 3 nics, but without an ifup they have no addrs
[11:01] frobware, hmm no idea - it works like that for lxc
[11:01] frobware, maybe something is blocking the networking job from finishing bringing them up? ntpdate lock contention?
[11:02] dimitern: will take a look after trying your changes
[11:02] frobware, ok
[11:03] frobware, btw I've realized we're lagging behind 2 major feature branch merges into master
[11:03] dimitern: I think we need to decide whether it makes sense to create one singular network profile, or one profile per container
[11:04] dimitern: this is an entropy judgement call
[11:04] frobware, one per container seems to be the simplest option, considering the hwaddr needs to match
[11:04] dimitern: in my mind we should get the multi-nic branch back to the state of maas-spaces2, CI-wise.
[11:04] frobware, I'm looking into how bad the conflicts are
[11:05] dimitern: then look at merging master
[11:06] frobware, that's a good point
[11:06] frobware, but if we lag too much behind it will only get harder
[11:07] dimitern: I know. but equally adding the unknown is ...
[11:07] dimitern: I think there's benefit in giving the OS folks a known state today
[11:08] dimitern: yes, the hwaddr forces that decision.
[11:09] frobware, I wouldn't rush to do a merge today exactly for that matter
[11:09] http://paste.ubuntu.com/15413949/
[11:11] dimitern: so entropy level at lines 1..3624. :)
[11:12] frobware, that's BS though as I did reset --hard FETCH_HEAD before, redoing it properly now
[11:15] frobware, it's entirely not that bad: http://paste.ubuntu.com/15413972/
[11:17] dimitern: well, that's a lot better
[11:17] frobware, it even builds, running make check now and a live test with the bundle
[11:17] dimitern: I think we need to decide how to spend the rest of the day; polish the branch and give it to OS folks, or merge and polish
[11:20] frobware, I think those two are not necessarily mutually exclusive
[11:21] frobware, OS folks can test the maas-spaces-multi-nic-containers + my proposed fix and the workaround bouncing the MA of machine-0
[11:21] dimitern: ah, you still need to bounce the MA
[11:21] frobware, yeah
[11:22] frobware, if you're deploying containers on machine 0
[11:22] dimitern: right
[11:23] dimitern: we can merge master; we can also give the OS folks a specific commit for testing. that would allow us to move on. thoughts?
[11:23] frobware, we can give them a binary tarball to try, while we sync up with master (and assuming that does not cause a regression for multi-nic stuff)
[11:24] dimitern: thedac seemed happy with checkout a commit id last time; it's also easier if we want them to move to a different commit
[11:24] frobware, fair enough
[11:24] frobware, but otherwise I think that sounds like the plan for today?
[11:25] dimitern: great
[11:25] dimitern: ah... one other thing...
[11:25] dimitern: if we can first resolve the missing mac addr that would allow lxd end-to-end
[11:26] frobware, right!
[11:26] dimitern: can you look at that next?
[11:26] frobware, I'll look into that when I'm back ~1h
[11:26] dimitern: thanks!
[11:26] ashipika, MADE-model-workers is up to date with the model-agent-integration work
[11:26] and I'll leave the live test and make check going in the mean time
[11:27] dimitern: I have to say this is why I don't want to merge master: http://reports.vapour.ws/releases/3776
[11:27] dimitern: multi-nic came from the tip of maas-spaces2 and that had only 2 failures
[11:27] dimitern: we should really get back to that state first
[11:29] frobware, well, we touched a lot of places in the multi-nic branch
[11:30] frobware, voidspace, btw a review on http://reviews.vapour.ws/r/4226/ will be appreciated :)
[11:30] * dimitern is out for ~1h
[11:30] fwereade: that's a feature branch i presume?
[11:32] dimitern: no issues with what we've changed; just for our own sanity bringing new stuff may make it harder to differentiate root cause analysis
[11:34] ashipika, yeah
[11:36] dimitern: getting closer with lxd: http://pastebin.ubuntu.com/15414021/
[11:38] dimitern: cannot login using juju ssh but can ordinarily. error fetching address for machine "0/lxd/0": private no address
[11:41] voidspace: CI run 3777 is the drop-1.8 support.
[12:01] frobware: ah, cool
[12:02] frobware: lots blocked and a couple of build-binary failures
[12:02] frobware: I'm looking at the failures
[12:04] frobware: problems with lxc-start in the build binary
[12:04] voidspace: log? link?
[12:05] frobware: http://data.vapour.ws/juju-ci/products/version-3777/build-binary-wily-amd64/build-884/consoleText
[12:05] frobware: same problem on build-binary-wily-arm64
[12:05] doesn't *look* like a problem with the code
[12:05] the logs are "terse" though
[12:06] voidspace: check status of master too
[12:06] looking
[12:06] frobware: they succeeded on master
[12:07] frobware: my guess is that other tests will be gated on the binary build being successful
[12:07] mgz: ping
[12:07] voidspace: so it may depend on when you branched or did your checkout
[12:08] frobware: was that a problem on master before?
[12:08] voidspace: you would have to take a look back through the previous builds
[12:11] frobware: failing to build the binary seems likely (to me) to be an infrastructure problem
[12:13] voidspace: maybe mgz knows more
[12:22] hey guys, i have a fresh xenial with a fresh juju 1.25.3. `ps -aef |grep juju` returns nothing. how do i get juju running, or figure out why it won't start?
[12:35] never mind, got it going again with the juju-clean plugin, sorry for the noise
[12:47] Bug #1559062 opened: ensure-availability not safe from adding 7+ nodes, can't remove stale ones
[12:53] frobware, voidspace, back again
[12:54] dimitern: welcome
[12:54] frobware: that test run was useless, all the interesting tests were blocked by the build binary failure
[12:54] frobware: waiting for comment on that from mgz
[12:54] yep
[12:54] voidspace, frobware, so after merging master into multi-nic, make check passes, and the live test with the bundle still works
[12:56] dimitern: if it was blessed I'm more inclined to say go with it. but it's not.
[12:57] frobware, I'll propose it, but we don't have to merge right away..
[13:05] hi all - ? re: availability zones. if we have maas machines set in different zones, should that zone be observable to juju via JUJU_AVAILABILITY_ZONE?
[13:07] dimitern: do you have time to HO/sync? Can show you lxd stuff, some question re: mac addr and 'inet manual'
[13:08] frobware, sure, just give me 10m
[13:08] ok
[13:09] dimitern: let's make it 30 past the hour. I'll grab a very quick lunch.
[13:09] frobware, sgtm
[13:10] pseudo morning all
[13:32] odd .. I couldn't open google calendar for a long time..
[13:32] dimitern: sounds like a productivity win
[13:32] :D
[13:33] frobware, I'm in today's standup HO
[13:33] omw
[13:47] Bug #1559099 opened: JUJU_AVAILABILITY_ZONE not set in MAAS
[13:51] cherylj: are you around?
[13:51] perrito666: yeah, what's up?
[13:52] cherylj: silly question, actually two, 1) is master blocked on the restore bug and 2) where is the release note doc? I need to add a few bits
[13:53] perrito666: 1 - Yes, but I committed the fix last night. Don't think that there's been a run on master since then. Let me think about unblocking...
[13:53] perrito666: 2 - https://docs.google.com/document/d/1ID-r22-UIjl00UY_URXQo_vJNdRPqmSNv7vP8HI_E5U/edit
[13:54] cherylj: tx x 2
[13:56] np :)
[14:08] perrito666: did the keystone 3 support land? (I think I saw that it did?)
[14:08] I didn't, lemme see if Ian landed it
[14:08] ah yes, it got jfdone
[14:08] :p
[14:09] k, will update the blueprint
[14:09] ill update the release notes for all that landed these days
[14:09] perrito666: awesome!
[14:18] Bug #1559131 opened: unable to add-credential for maas
[14:22] ericsnow: natefinch: good morning... final day!
[14:22] natefinch: i see you got your PRs landed... grats! can you follow up with marcoceppi to see when they'll hit the deb?
[14:23] katco natefinch I'm planning on building those today in about 3 hours
[14:23] marcoceppi: rock!
[14:23] awesome
[14:23] marcoceppi: note that most of the commands will currently return errors b/c the charmstore endpoints don't actually do anything yet
[14:24] katco: yeah, I'm going to coordinate with uros
[14:24] katco: hopefully the store will be deployed today :\
[14:24] marcoceppi: no, i mean even after uros deploys they still do nothing. the code hasn't been written
[14:24] katco: \o/ cool good to know I'll make sure when I announce that there is still backend code being worked on
[14:25] marcoceppi: we are wrapping up some bug fixes in core, and then will be swinging onto implementing the charmstore stuff
[14:25] katco: is it just anything with --resources ?
[14:25] marcoceppi: yeah i think that's a fair representation
[14:25] katco: awesome, thanks for the heads up
[14:25] hey katco, I've updated the release table on the feature tracker page to address some of your feedback. Can you take a look and let me know if it's a bit clearer now? https://private-fileshare.canonical.com/~cherylj/juju-features-20.html
[14:25] marcoceppi: anything here prepended by "charmer" https://github.com/CanonicalLtd/juju-specs/tree/master/resources/features
[14:26] cherylj: wow, i didn't even think to ask! ty that's great!
[14:26] cherylj: very clear
[14:27] Bug #1559131 changed: unable to add-credential for maas
[14:27] excellent, thank you katco
[14:27] cherylj: no, really, ty
[14:31] are we actually writing markdown on a google doc?
[14:32] perrito666: # katco's answer
[14:32] perrito666: * Yes of course we are.
[14:32] perrito666: * Why wouldn't we be?
[14:32] perrito666: * It's md! Everyone knows markdown!
[14:33] perrito666: * bullet point 4
[14:33] it's a google doc, it supports 20th century formatting, but besides that is less than ideal as a medium for md
[14:34] perrito666: i know, i'm joking ;) i would think our release notes would be checked into our repo actually
[14:34] well apparently there are more people that know md than git :p
[14:36] Bug #1559131 opened: unable to add-credential for maas
[14:43] md is a _nice_ simple format
[14:43] perrito666: putting the release notes in git as MD would be amazing... PRs to add new sections, PRs for edits, you could actually see who added each piece etc
[14:43] natefinch: +1
[14:43] perrito666: I think it's in docs for collaborative editing purposes and low barrier of entry
[14:43] perrito666: however, one would assume that people at a software company who use git all day would not be averse to using it a tiny bit more.
[14:44] also... being able to preview your markdown to know it's doing what you expect would be nice.
[14:44] nothing says low barrier like a nice google doc full of markdown...
[14:44] lol
[14:45] natefinch: hey, don't think i even need to ask, but: you'll have your current card wrapped up today?
[14:45] google doc containing html as plain text with a pre section of markdown
[14:46] katco: should be, year. It's touching more spots than I originally expected... there's a lot of layers of abstraction to add arguments to, but it's still pretty trivial
[14:47] s/year/yeah
[14:47] natefinch: cool...
please also coordinate with marcoceppi to let him know eta for next build of charm deb
[14:48] natefinch: i'sok... i read it in a pirate voice as "yar!"
[14:48] katco: haha
[14:50] natefinch: but seriously, work closely with marcoceppi on eta/progress, cool?
[14:50] katco: yes
[14:50] * marcoceppi elmira hugs natefinch
[14:51] katco: I think I misunderstood the card... I thought I was adding channels to juju charm list-resources
[14:51] katco: there isn't a charm list-resources command AFAIK
[14:52] natefinch: gahhh you are correct and i am friday head
[14:52] katco: oh good, I thought I was doing the wrong thing
[14:52] natefinch: the card is incorrect... i'll put a juju out in front
[14:52] katco: sounds good
[14:52] natefinch: so the charm work you did already included channel support?
[14:53] katco: yes, the UI team did the channel work there
[14:53] katco: we just tacked resources on top of that
[14:53] natefinch: ok
[14:54] marcoceppi: really i just like pinging you. false alarm, we are truly done with the charm command now.
[14:54] katco: I like feeling like I'm needed
[14:54] :)
[14:54] marcoceppi: and I like hugs :)
[14:55] marcoceppi: also i had to look up elmira... wow there's some neurons that hadn't fired in like 15 years
[14:55] katco: haha, yeah it's been a while but she def left an impression on me when I was younger
[14:56] marcoceppi: aww, I misread it as elvira hug
[14:56] * katco spits out coffee
[14:56] natefinch: haha
[14:56] that would be quite different
[14:56] indeed :)
[14:56] elmyra* is apparently the spelling you should be searching with
[14:57] marcoceppi: on a complete tangent: the shirts from the charmer's summit are great. material is very soft and design is awesome
[14:57] katco: thanks, we decided to do them in house. we want people to actually like wearing them ;)
[14:57] hehe
[14:58] if i could get a zip-up juju hoodie i would be so happy
[14:58] ditto
[14:58] which means getting nice fabric
[14:58] i think wwitzel3 made his own
[14:58] that's good to know, we'll keep that in mind for pasadena
[15:00] katco, natefinch: PTAL http://reviews.vapour.ws/r/4219/
[15:00] ericsnow: reviewing right now actually!
[15:00] :)
[15:00] ericsnow: looking
[15:00] ta
[15:01] ericsnow: natefinch: actually, since we're down to the wire here... just this once, can i ask that natefinch focus on getting his code written?
[15:01] ericsnow: natefinch: i'd like to be able to merge master in by EOD
[15:01] katco: np
[15:02] ok
[15:02] natefinch: ta
[15:16] katco, marcoceppi: yeah, I took the hoodie to an alterations shop and they put a zipper on it for me
[15:16] cost me $11 + the zipper (amazon for $4 iirc)
[15:16] my wife said she'd do that for me, but I haven't gotten around to actually asking her to do it
[15:17] mine sat in a closet unused until I put a zipper on it
[15:17] what kind of uncivilized person uses a pull over
[15:18] wwitzel3: lol, right?
[15:18] well put wwitzel3
[15:18] natefinch: your wife can do any sewing job for you. always watching her posts, she's so good.
[15:19] TheMue: Thank you :) She's pretty great, yeah :)
[15:20] natefinch: absolutely
[15:36] katco, do you know why resources/resourceadapters.WorkerFactory is creating a worker from a *State?
[15:37] katco, (hi, by the way, sorry abrupt!)
[15:37] fwereade: let me take a peek. ericsnow might have a quicker answer
[15:37] fwereade: np at all lol, hi! :D
[15:37] fwereade: I'll take a look :(
[15:39] fwereade: gah, I'll fix that
[15:39] fwereade: well there ya go.
[15:40] katco, ericsnow, before you get too deep into that, I hit this because I'm doing pretty drastic things to the machine agent, and I'm not entirely sure how I should go about integrating that functionality
[15:41] fwereade: ericsnow: is the problem that it's taking in a State pointer?
[15:41] fwereade: ericsnow: and not an interface?
[15:41] katco, that's several problems
[15:41] katco, an interface would be nicer
[15:41] katco, the upsetting thing is that it's a worker that's not going via the api layer
[15:42] fwereade: right, the API part is what I need to fix
[15:42] * ericsnow feels dumb for forgetting that mandate
[15:42] ericsnow, would you take a look at the MADE-model-workers branch please?
[15:43] fwereade: will do now-ish
[15:43] ericsnow, in particular, cmd/jujud/agent.MachineAgent.startModelWorkers; and the cmd/jujud/agent/model package
[15:44] fwereade: and sorry for not getting you that review yesterday (model agent integration)
[15:44] fwereade: I was actually just reading through it
[15:45] ericsnow, because I am reluctant to make the tests for those packages dependent upon what other packages might have been imported, and it feels like the approaches are at an impasse
[15:45] ericsnow, no worries, I am keen to hear your thoughts on it but you weren't blocking me
[15:45] fwereade: k
[15:45] fwereade: is this the whole "imports have a side-effect" conversation?
[15:46] fwereade: via init
[15:46] katco, yeah
[15:46] fwereade: to my knowledge we are not doing that, if that helps.
[15:47] katco, well, then it's the "logic dependent upon mutable global state" one
[15:47] fwereade: ah, that we are currently doing :)
[15:47] fwereade: although i'd characterize it as "package level state"
[15:48] katco, IMO that makes it marginally more tractable, but little less evil
[15:49] fwereade: but i don't understand how the registration pattern makes writing tests harder? or what that has to do with imports?
[15:49] fwereade: oh, you better take a look at our feature branch https://github.com/juju/juju/blob/feature-resources/resource/resourceadapters/workers.go
[15:49] fwereade: I've since merged that worker (in master) with the charmrevisionupdater worker
[15:50] fwereade: (Ian's idea)
[15:50] katco, well, I have this Manifolds() func, which returns a representation of the workers we need to run per-model in a controller; I would like to be able to test it, and have confidence that it will do the same thing whatever else may or may not have been called or imported
[15:50] fwereade: the integration between the two takes place in the API server so no workers are involved
[15:51] fwereade: and yeah, I really don't like the registries and would love to talk with you about what I think is the sensible alternative
[15:51] ericsnow, well... that worker is sitting right there on master afaics? https://github.com/juju/juju/blob/master/resource/resourceadapters/workers.go
[15:52] fwereade: sorry for the 2 convos at once. but can't you? your tests have the global view of manifolds in the context of your tests. it should be testing that it does the right thing when you register workers
[15:52] fwereade: right, we haven't merged that part of our feature branch back into master yet
[15:53] ericsnow: actually i am confused as well... the worker is present in the feature branch. you're saying we merged that, but haven't yet deleted it?
[15:53] ericsnow, katco: so, parenthetically, it is really not a good thing that that code ever landed on a feature branch, let alone made it into master :(
[15:53] ericsnow, katco: but I am much more interested in talking about the tests
[15:53] katco: ??? https://github.com/juju/juju/blob/feature-resources/resource/resourceadapters/workers.go
[15:54] ericsnow: i am missing something. i'm looking at the same worker which takes in state
[15:54] katco, if I need to pay attention to ensuring that the context of my tests matches the context at runtime, I will almost certainly screw it up at some point
[15:55] katco: there's no worker there; compare with master: https://github.com/juju/juju/blob/master/resource/resourceadapters/workers.go
[15:55] katco, if I write my SUT such that all its dependencies are supplied explicitly, I can have much greater confidence that the tests will remain useful in the long term
[15:55] ericsnow: is this not instantiating a worker? https://github.com/juju/juju/blob/feature-resources/resource/resourceadapters/workers.go#L21
[15:56] fwereade: you don't have to make sure it aligns with the context of runtime, just that the path is tested
[15:56] katco: correct
[15:56] ericsnow: so, in our feature branch, we have a function that takes in state, and returns a worker looking at state?
[15:57] katco: it returns a "latest charm handler", not a worker
[15:57] fwereade: i.e. register a "foo" worker, does Manifolds() return it? PASS: we successfully know about registered workers
[15:58] katco, ericsnow: how can you possibly write such a registration func without magical dependencies across all the components in play?
[15:59] fwereade: well wait, let's resolve the testing line of questioning
[15:59] katco, ericsnow: I don't think you can usefully register a manifold without secret knowledge of the names of the workers defined in agent/model
[15:59] katco, ok
[15:59] ericsnow: that is perhaps misleading then. it sits in a "workers" package
[16:00] katco: agreed
[16:00] katco: it's left over from when it was a worker
[16:00] fwereade, is it possible for a relation hook to fire before both services are installed? even if one of those services is a subordinate?
[16:00] katco, ericsnow: (either way, the only things that should be using a *State should live in the apiserver)
[16:00] ericsnow: ahhh ok this makes sense
[16:00] fwereade: agreed
[16:00] cmars, it should not be
[16:00] katco, ok, back to the tests
[16:00] fwereade, thought so. might be an old juju, i'll find out
[16:00] thanks
[16:00] * katco listens
[16:01] katco, I am not interested in testing a registration mechanism: I am interested in testing what workers will be started by a model agent
[16:01] katco, the mutable global state makes that unknowable
[16:01] fwereade: ah, i see your plight now
[16:02] dimitern: meeting
[16:02] voidspace: ^^
[16:02] fwereade: if we were aligned, this would be a test that would live somewhere in component/... because that's where the list of workers would get passed in
[16:03] fwereade: coding the list in the dependency engine is like hard-coding the state i think
[16:04] katco, I am not sure that testing that behaviour in component/ is notably better than testing it under agent/model/
[16:05] katco, a model agent has a number of responsibilities, and those responsibilities have non-trivial interactions
[16:06] fwereade: well, i think it's because in the manifold, you test that the registration works.
then where the registration happens, you test that your list is complete
[16:07] katco, restate please?
[16:08] fwereade: sure, let me try and think of a different way
[16:08] fwereade: so, starting from these axioms:
[16:08] katco, fwereade: I think that testing it where we are is correct; but we should be testing which manifolds will be run (not which workers)
[16:08] fwereade: 1. unit tests should test 1 thing at a time
[16:09] fwereade: 2. the combination of all unit tests trends towards correctness
[16:10] fwereade: if your goal is to test that the workers we expect to run, are
[16:10] katco, (1) concur (2) strongly disagree, quantity of tests does not imply quality of tests
[16:10] fwereade: that's not the point of 2... another way is to say: the more correctly written unit tests you have, the greater confidence you have the system as a whole works
[16:11] fwereade: in other words it's not about the correctness of the test, it's about the combination of correct tests stating something about the system as a whole
[16:11] katco, and now we tangle on "correctly written", I fear
[16:11] fwereade: no, i don't think so. it doesn't matter. for any value of correct.
[16:12] fwereade: if it correctly tests the 1 thing the test is supposed to test
[16:12] katco, still unconvinced, I think some tests have negative value
[16:13] fwereade: agreed, in overhead. not in emergent correctness of the system
[16:13] katco, on the contrary
[16:13] katco, a little while ago I found tests for the machine agent that checked it called some api method on some facade
[16:13] katco, which (1) it shouldn't have done in the first place
[16:14] frobware: oops, sorry
[16:14] katco, and (2) it actually didn't anyway, because someone had tweaked the test setup to make that call at the right time, so the test merely tested that the test setup had run
[16:15] fwereade: ok, so that fails both clauses right? it did not correctly test the thing it was supposed to test, and it wasn't supposed to test it
[16:15] fwereade: strawman
[16:16] mgz: ping
[16:16] voidspace: yo
[16:17] mgz: the drop-maas-1.8 CI run failed on build binary, which meant everything failed
[16:17] fwereade: if your goal is to test that the workers we expect to run, are. first thing you should do is test that when workers are registered, the manifold knows about them
[16:17] fwereade: the second thing is that you register the right workers
[16:17] mgz: the logs are extremely terse - it just says that lxc-start failed
[16:17] voidspace: that's fixed
[16:17] mgz: so it was a problem with CI, not with the branch?
[16:18] lxc update broke wily
[16:18] fwereade: through emergence, you have now tested that the workers we expect to run, are
[16:18] katco, you seem to be arguing from a position that assumes that we should depend on mutable global state in the first place
[16:18] mgz: ah, damn
[16:18] sinzui had to roll back the package
[16:18] but the testing is happening for real
[16:18] fwereade: yes, i prepended all of this by saying "if we were aligned"
[16:18] mgz: so we'll get a run in due course
[16:18] mgz: thanks
[16:19] frobware: ^^^
[16:19] katco, ok, I did not think that "globals are bad mmmkay" was a controversial position
[16:19] fwereade: channeling mr. mackey there :)
[16:19] katco, absolutely :)
[16:20] fwereade: but we could just pass the list in...
doesn't have to be global
[16:21] katco, right, and if we did, that would certainly be more testable and less upsetting
[16:21] dimitern: so with your patch, some fiddling with the lxd code and adding an 'ifup -a' as a new run command in cloud-init the lxd containers' interfaces are all up!
[16:21] fwereade: note that in our feature branch that worker registry is gone
[16:21] fwereade: under the tests i have laid out, it would be no easier to test. but yes, less upsetting
[16:22] katco, but we're still talking about a set of interdependent components
[16:22] dimitern: first container without `ifup -a', second with. http://pastebin.ubuntu.com/15415766/
[16:23] katco, the agent/model package is about defining the dependencies and interactions between them
[16:23] katco, and many parts of that are internal details
[16:24] katco, the api caller might be called "api-caller", but if you depend on that from half a codebase away Bad Things will happen
[16:24] fwereade: and we arrive at the real point of contention: IoC vs. not
[16:25] and as if to signal something, my cat just puked at my feet
[16:25] brb
[16:25] fwereade: ideally I'd like to eliminate all the global registries (and component/all) and accomplish the same thing in the correct places (under cmd/juju, cmd/jujud, etc.)
[16:25] dimitern: we should sync with stefan (and raise some bugs to track) for some of the timing issues and for the cases where the interfaces don't come up.
[16:25] katco, I think I am in favour of IoC... do I seem not to be?
[16:26] brb also, talk when you're back
[16:26] frobware, yeah
[16:26] fwereade: the challenge is refactoring code so that we pass the necessary "registries" around
[16:29] ericsnow, this is absolutely true
=== natefinch is now known as natefinch-lunch
[16:30] ericsnow, and I think it's more or less what I've been working towards with all the dependency.Engine work
[16:30] fwereade: exactly
[16:30] fwereade: that's what helped the concept click for me :)
[16:31] ericsnow, cool :D
[16:37] katco, is it possible to select a lxd profile when deploying, by using constraints=... ?
[16:38] cmars: i don't believe so. jam and tych0 have been doing the latest work on this though
[16:38] cmars: not likely (unless jam or tych0 have added that)
[16:38] katco, ericsnow ok thanks
[16:39] not that i know of :(
[16:40] would such a thing fit into constraints? dang it'd be useful to be able to do that -- adding bind mounts stuff, or making containers privileged
[16:40] i'd consider hacking away on this... i know y'all are busy
[16:40] cmars, that feels like a placement directive? can be passed through to the provider
[16:41] cmars: you can still kind of get this behavior by changing what image ubuntu- points to
[16:41] frobware,
[16:41] frobware, sorry wrong window :)
[16:41] cmars: provider/ec2/environ.go:370 for an example
[16:42] haven't seen the placement stuff yet, yeah, that makes sense
[16:42] cmars, constraints want to be generic, placement is for provider-specific trapdoors
[16:43] cmars, generally accessed via --to
[16:43] katco, so, IoC?
[16:44] fwereade, that'd put all the provisioned machines in the same profile.. which would work, but what I really want is to deploy one service into a privileged container while leaving the others unprivileged
[16:44] fwereade: oh, right... sorry
[16:44] cmars, deploy myservice --to lxd:profile=myfancyprofile?
[16:44] katco, np
[16:45] oh, works on deploy as well as bootstrap. ok, nice!
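A hedged sketch of the kind of test fwereade describes for a Manifolds() func (the config type and worker names below are invented for illustration, not the real cmd/jujud/agent/model API): because the function takes explicit config and returns a fixed map, a local test can pin the exact set of per-model workers without depending on anything registered elsewhere.

```go
package model_test

import (
	"reflect"
	"sort"
	"testing"
)

// ManifoldsConfig and Manifolds stand in for the package under test.
type ManifoldsConfig struct{ ModelUUID string }
type Manifold struct{}

func Manifolds(config ManifoldsConfig) map[string]Manifold {
	return map[string]Manifold{
		"api-caller":       {},
		"charm-revisioner": {},
		"address-cleaner":  {},
	}
}

// TestManifoldNames pins the full set of per-model workers: nothing another
// package registers (or forgets to register) can change the outcome here.
func TestManifoldNames(t *testing.T) {
	manifolds := Manifolds(ManifoldsConfig{ModelUUID: "deadbeef"})

	var names []string
	for name := range manifolds {
		names = append(names, name)
	}
	sort.Strings(names)

	want := []string{"address-cleaner", "api-caller", "charm-revisioner"}
	if !reflect.DeepEqual(names, want) {
		t.Fatalf("unexpected manifolds: got %v, want %v", names, want)
	}
}
```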
[16:45] katco, I *think* we're both in favour of ioc?
[16:45] fwereade: no you don't seem to be against IoC. but hard coding things does not seem to be IoC. maybe i've missed the point being that you pass around manifolds?
[16:47] katco, the way I see it, a Manifolds func is explicitly responsible for accepting some sort of config that applies to the responsibility it holds, and returning some set of workers that will meet those responsibilities
[16:48] katco, if the set of workers were independent, I would agree that a dynamic registry would probably be ok
[16:49] fwereade: so the crux of your argument is that there is a central spot to codify that dependency graph, and you can't test that if outsiders can register things
[16:49] katco, basically, yes
[16:50] katco, and it makes me sad that code changes many packages away could, e.g. induce a cycle and render the whole lot useless
[16:52] fwereade: tbh i would have to read through the code again to comment on that point
[16:54] katco, I think it's a broadly applicable argument? I want local changes to cause local test failures, and the bigger/more-integrationy the test the more distance-risk I take on in exchange for greater certainty about macro behaviour
[16:54] ...er, if that makes any sense at all outside my own head
[16:54] frobware, I solved the case of the missing mac address
[16:54] great
[16:55] fwereade: FWIW, I agree that "configuration" of the application, which is how I see this, is ideally all in one place
[16:55] fwereade: which is what I think you are arguing for
[16:55] frobware, it turned out much to my surprise that `x := y` is not the same as `x := make([]T, len(y)); copy(x,y)`
[16:55] katco, at a high level, yess -- I think the problem I am solving with dependency.Engine is "nobody understands what workers are running at any given time"
[16:56] katco, I have been working towards making that more explicit
[16:57] fwereade: yeah i agree with that
[16:58] katco, so, from that perspective, you can see why I'm having trouble with components
[16:59] fwereade: oh yes, certainly! i think i'm arguing from some kind of ideal future state that doesn't actually exist yet
[17:00] fwereade: i don't feel the problem is intractable
[17:00] katco, would you consider it a slur if I accused the component approach of being reminiscent of aspect-oriented programming?
[17:00] fwereade: but i acknowledge and respect that you are much closer to the problem than i am, so might have a more valid opinion :)
[17:01] fwereade: no not a slur... i don't understand the comparison though
[17:01] katco, because it feels to me like it has similar strengths (clear separation of unrelated concerns) and weaknesses (pain when those concerns turn out to intersect in subtle ways)
[17:02] katco, just a thought, may be nonsense, probably not a useful comparison if it doesn't immediately speak to you
[17:02] fwereade: mm... i think it's different in that we're trying to advocate codifying those interactions instead of letting them happen naturally
[17:03] fwereade: i.e. if i decorate a method with a bunch of attributes, i may not know how it will act, but if i write a function that says X interact with Y, i can write a test against that
[17:04] fwereade: i suppose i can write a test for how attributes are combined as well
[17:05] fwereade: hey, i love talking about this stuff (and it's important to continue doing so) but i have stuff i need to take care of for the release
[17:05] dimitern: so it was lost in the pointer copy from before?
just impl/patch didn't actually fix it?
[17:05] katco, likewise -- and I'm approaching EoD -- but I do need to figure out how to weld these two parts together if I want to merge MADE-model-workers
[17:06] fwereade: when do you need that merged?
[17:06] katco, heh, now it's done, as soon as I can, but I know there's no shortage of competition
[17:07] frobware, I'm not sure what was wrong with the patch, but using make + copy vs := worked
[17:07] fwereade: so you're aiming to get it in for 2.0?
[17:07] katco, last I saw it was on the list
[17:07] frobware: when will you have confirmation of the induction sprint dates?
[17:08] fwereade: well if it's just 1 test left, you could land the branch as is and fix the test before the ~8th
[17:08] voidspace: alexis raised the req for april 18->22 - I haven't seen or heard anything to suggest that it will not be those dates
[17:09] frobware: ok, cool - thanks
[17:09] katco, well, it's more than that -- it's that it's a replacement of the per-model workers
[17:09] voidspace: it's mine and dooferlad's birthday that week, so it's a rave!
[17:10] fwereade: oh, so functionality is blocked as well?
[17:10] katco, and, well, there's this *State-dependent thing that's just turned up and is partly there because the magic registration mechanism let the implementer avoid seeing the WTF DO NOT DO THIS comment in startEnvWorkers
[17:10] katco, yes, my branch will not currently run that worker
[17:11] ericsnow, katco: would love some feedback on this if you have bandwidth: https://github.com/juju/juju/pull/4789/commits/417c8fd67b40eac734a815707104bfc3c8657af5
[17:11] ericsnow, katco: I haven't looked at the lxdclient before, but was trying to make multi-nic containers work with maas/spaces
[17:11] frobware: hehe, cool
[17:11] katco, and I would like it to, but adding package-level component registration to agent/model is very much at odds with the purpose of the package
[17:12] fwereade: ok, well then we need to discuss syncing up this weekend to meet monday's deadline
[17:12] fwereade: don't forget that once feature-resources is merged back in, the whole model-worker-registry thing is gone
[17:12] frobware: will try and tal in a bit
[17:13] ericsnow, so it's just charmrevisionupdater doing both jobs, and using the api and everything? that seems sane
[17:13] fwereade: right
[17:13] katco: appreciated
[17:13] ericsnow, ok, then that is great
[17:13] fwereade: it was Ian's idea :)
[17:13] fwereade: unblocked?
[17:13] ericsnow, yeah, they're pretty much the same job
[17:13] frobware: we could make it literally a rave... http://www.fabriclondon.com/club/listing/1238
[17:13] katco, not sure
[17:13] dimitern: http://www.fabriclondon.com/club/listing/1238
[17:14] fwereade: yep :)
[17:14] katco, don't think I can merge without a functioning worker in the short term
[17:14] katco: we may land that on maas-spaces-multi-nic-containers anyway to drive through any integration issues.
[17:14] fwereade: it may be worth cherry-picking the particular merge commit from the feature-resources branch?
[17:15] frobware: seems reasonable, but appreciate the heads-up. in that same light, heads up tych0 and jam ^^^
[17:15] voidspace, that's a great idea actually
[17:15] dimitern: :-)
[17:16] ericsnow, it very probably would, I will have a look for that
[17:16] voidspace, haven't been to a party like that in london, so count me in :)
[17:16] ericsnow, katco: so, yes, unblocked, thank you :).
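A small demonstration of the Go point dimitern tripped over above: `x := y` copies only the slice header, so both names share the same backing array, whereas make plus copy produces an independent slice.

```go
package main

import "fmt"

func main() {
	y := []string{"eth0", "eth1"}

	alias := y // shares y's backing array
	independent := make([]string, len(y))
	copy(independent, y) // separate backing array

	y[0] = "br-eth0"

	fmt.Println(alias[0])       // "br-eth0" - the alias saw the change
	fmt.Println(independent[0]) // "eth0"    - the copy did not
}
```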
[17:17] ericsnow, katco: but I think this is a near-miss and we'll have worse interactions before too long, so let's talk more about this right after the release madness
[17:17] fwereade: I believe it is 8758089b35c3120b52b10da6d93514ac0d992f6f (PR #4539)
[17:17] fwereade: yes, totally agree!
[17:18] fwereade: +1
[17:18] fwereade: we will be doing a retro on the component oriented approach at our next sprint
[17:18] ericsnow, katco: cool
[17:18] fwereade: and i'm hoping we can get everyone to close their laptops and participate :)
[17:18] katco, whole-team sprint? excellent, +1
[17:18] fwereade: a juju core sprint
=== natefinch-lunch is now known as natefinch
[17:39] arg... I really really really wish that channel was part of a charm.Url
[17:39] natefinch: +1
[17:41] so... uh... do we store the channel of a charm that a service is using?
[17:42] natefinch: I doubt it
[17:43] ericsnow: so, how do we know what channel to look in for updates?
[17:43] natefinch: though I expect we should have it for the charmrevisionupdater worker
[17:43] natefinch: yep
[17:43] ericsnow: that's what I'm looking at right now
[17:43] ericsnow: since LatestCharmInfo now requires a channel
[17:47] fwereade, rick_h__: either of you know if juju-core respects channels at all, and if so, how? Seems like we need to remember the channel from which we deployed a service... but I don't see us actually storing that anywhere
[17:49] natefinch: ericsnow: i would have expected that to be part of the work of landing --channel into the deploy command
[17:49] katco: +1
[17:49] katco: there's no mention of channel in deploy.go: https://github.com/juju/juju/blob/master/cmd/juju/service/deploy.go
[17:50] natefinch: ericsnow: when we discussed yesterday, it sounded like the patch hadn't been landed yet
[17:51] katco: I guess I'll just hardcode stable and we can fix it when that other stuff lands
[17:52] katco: correct, still not in master
[17:52] natefinch: that's consistent
[17:54] natefinch: no, it's on the list but it's not been done yet.
[17:54] natefinch: agree with the assertion that we need to remember what channel the service is following
[17:54] voidspace: latest drop-1.8 looks better.
[17:55] rick_h__: sounds like rogpeppe2 is doing it. can we expect his patch to store that somewhere and update the existing resources code to take advantage of it?
[17:55] frobware: ah, cool
[17:55] katco: I'd hope so yes. However, I've not seen the code/etc to be 100% on it.
[17:55] voidspace: all have existing bugs registered against the failure and one was a timeout
[17:56] frobware: there's a xenial unit test failure based on ordering
[17:56] frobware: I bet that's a dictionary iteration order change
[17:57] ah, existing bug
[17:57] frobware: yeah, that's much better
[17:57] frobware: when is the plan to land this?
[18:01] rick_h__: do you know why we didn't just add a channel field to the charm.Url struct? it's hugely disruptive to the codebase to change basically everywhere we take a charm.Url to now take a charm.Url + channel
[18:01] natefinch: I'm guessing because a charm can have more than one? e.g. a single charm revision can be development, then made stable, etc
[18:01] natefinch: but I'm only guessing, I'm not in the implementation calls right now and EU folks would know better than I
[18:02] rick_h__: ok, no problem. Just wondered if you were there for the discussion.
[18:03] natefinch: there should be a channel flag on upgrade-charm too, correct?
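A purely illustrative sketch of the point being settled above (the type and field names are hypothetical, not juju's actual model): since the charm URL carries no channel, whatever tracks a deployed charm has to remember the channel alongside the URL, falling back to "stable" until --channel is actually stored at deploy time.

```go
package main

import "fmt"

// deployedCharm pairs a charm reference with the channel the service tracks.
type deployedCharm struct {
	URL     string // e.g. "cs:trusty/wordpress-5"
	Channel string // "stable", "development", ...; empty if never recorded
}

// channelFor returns the channel to query for updates, defaulting to stable.
func channelFor(d deployedCharm) string {
	if d.Channel == "" {
		return "stable"
	}
	return d.Channel
}

func main() {
	fmt.Println(channelFor(deployedCharm{URL: "cs:trusty/wordpress-5"}))
}
```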
[18:03] natefinch: there should be a channel flag on upgrade-charm too, correct? [18:04] katco: natefinch yes, but only with --switch to be clear it's changing what it's tracking [18:04] ^ that [18:04] k [18:05] in general the channel shouldn't change, and therefore only needed at deploy time [18:05] (and even then I presume we'll default to stable, so usually people won't ever need to think about channels) [18:05] natefinch: correct [18:06] natefinch: channels are a pro-user-only tool, hidden from normal users [18:06] Bug #1559233 opened: juju run gets 'Permission denied (publickey)' in models other than the controller model [18:11] voidspace: whenever we can get a clean CI - death to long-lived feature branches [18:15] frobware, here it is: https://github.com/dimitern/juju/tree/multi-nic-master-fixes [18:16] dimitern: looking [18:16] frobware: cool [18:17] dimitern: the bridge script tests fail in multi-nic-containers atm [18:19] frobware, really? [18:19] dimitern: ok, going to push your branch as maas-spaces-multi-nic-containers-with-master. Viva la tab completion. [18:19] dimitern: was a bit surprised too [18:20] frobware, great! thanks [18:20] dimitern: ah, no. sorry it was: FAIL: container_userdata_test.go:120: UserDataSuite.TestNewCloudInitConfigWithNetworks [18:20] frobware, ah, yeah - that's fixed in the branch above [18:20] dimitern: bleh. too much going on. [18:21] :) [18:23] dimitern: confirming tip of your branch is 65322f6d483ab0ea6e19fc466af9b60096f5174e [18:23] frobware, just a sec [18:23] frobware, I'm waiting for the final make check to pass first [18:24] frobware, and it did find one issue - I'll fix it and push an update [18:24] map ordering related, not in our code, but still - http://paste.ubuntu.com/15416689/ [18:26] anyone recognize this: provider/dummy/environs.go:316: "github.com/juju/testing".MgoServer.Reset() used as value [18:27] dimitern: I've seen a bug for that [18:28] dimitern: tip is now 0cd8eb5b02a9430e31430b65acff595e7dfe0f71? [18:28] frobware, and I have included it, but perhaps the last cherry-pick reverted it? dunno [18:29] ? [18:29] frobware, yep, I was just about to paste the commit [18:29] dimitern: what did you revert? [18:29] nvm, looks like a dependencies problem... because of course it is [18:29] frobware, I mean https://github.com/juju/juju/pull/4787 might have caused the test failure [18:30] which I cherry-picked from master [18:30] right [18:30] ok, pushing now. [18:30] +1 go for it [18:31] dimitern: see natefinch's comment about deps. didn't you see something there today? [18:32] frobware, https://github.com/juju/juju/pull/4759 fixed the map ordering issue in 2 out of 3 providers that have it - i.e. ec2 still has it after that PR, and my last commit fixed ec2 as well [18:33] no, I was having issues with dependencies.tsv after I foolishly tried to rebase after merging master [18:34] dimitern: * [new branch] maas-spaces-multi-nic-containers-with-master -> maas-spaces-multi-nic-containers-with-master [18:34] I just had to update my dependencies.tsv to point to the head of juju/testing... not sure how I got the code change in core and not the deps change... might have been a simple merge problem [18:36] merging dependencies.tsv starts to get so painful it needs tooling around it [18:36] and I'm not just talking about godeps -N [18:36] frobware, awesome! [18:37] dimitern: calling it a week, my lxd patch/branch is out there if you want to take a look, perhaps merge into one of our branches.
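On the map-ordering test failures dimitern and frobware are chasing above: Go deliberately randomises map iteration order, so any test that builds expected output by ranging over a map directly is flaky. The usual fix is to sort the keys first; a small sketch with made-up names, not the code from the PRs mentioned:

    package main

    import (
        "fmt"
        "sort"
    )

    // sortedPairs renders a map as "key=value" strings in a deterministic order.
    // Ranging over the map directly would produce a different order on each run,
    // which is exactly the kind of intermittent test failure described above.
    func sortedPairs(m map[string]string) []string {
        keys := make([]string, 0, len(m))
        for k := range m {
            keys = append(keys, k)
        }
        sort.Strings(keys)

        out := make([]string, 0, len(m))
        for _, k := range keys {
            out = append(out, fmt.Sprintf("%s=%s", k, m[k]))
        }
        return out
    }

    func main() {
        fmt.Println(sortedPairs(map[string]string{
            "eth1": "10.0.1.0/24",
            "eth0": "10.0.0.0/24",
        }))
        // Always prints [eth0=10.0.0.0/24 eth1=10.0.1.0/24].
    }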
:) [18:39] frobware, I'll take care of it [18:39] frobware, :) enjoy the holiday [18:39] dimitern: I'll sort out some stuff for Christian tomorrow morning, but I will be incognito thereafter. \o/ [18:41] ok, beer o'clock [18:41] dimitern: hm, me too [18:42] :) cheers mgz [19:11] ericsnow, I'm confused about the various Connect methods around charmcmd, and what if anything should be using a *cmd.Context [19:11] ericsnow, is that something that came after that commit? [19:12] brb rebooting [19:14] fwereade: in cmd/juju/charmcmd/store.go? [19:14] ericsnow, yeah, and extending into resource/ I think? [19:15] fwereade: that was cmars' team trying to stop needing to launch a browser to sign in i think [19:15] fwereade: that [19:17] ericsnow, missed some of the scrollback there, just came back from a reboot. *cmd.Context was needed in a few places for terminal interaction, to log in on the command line [19:17] fwereade: ^^^ [19:18] natefinch: https://github.com/CanonicalLtd/juju-specs/blob/master/resources/features/charmer-query-charm-metadata.feature#L32-L41 [19:20] ericsnow, katco, cmars, thanks, I'll see what needs to go where :) [19:20] natefinch: i think we missed this. list-resources is both on the charm command and the juju charm command [19:20] katco: damn [19:21] katco: I mean, of course that makes sense [19:21] natefinch: finish up the juju side of things and we'll talk [19:21] katco: it's pretty easy to code up, at least [19:21] katco: kk [19:22] marcoceppi: i just can't stop pinging you [19:23] marcoceppi: we messed up. we actually do have 1 more thing to land into the charm command [19:23] to the tune of "I can't stop loving you"? [19:23] pretty sure that's a song [19:23] lol [19:23] lol [19:23] * natefinch hi-5's mgz [19:23] to the tune of the CSI theme? [19:26] katco: https://www.youtube.com/watch?v=mzE8cyu9Vf8&t=1m4s [19:29] * perrito666 is overninetied by the video [19:29] https://www.youtube.com/watch?v=QkM-r4ZdX1o&list=PL0Yz4dINw_VIJ0bkHJe3ZV5ktsFsYxL2K [19:31] pea-sized hail falling here [19:31] katco: good choice for coding brand new crap on the day of the deadline [19:32] natefinch: you know it. rage against that machine baby [19:36] Bug #1559277 opened: cannot set initial hosted model constraints: i/o timeout [19:49] can I force the deletion of an old lxd controller? [19:49] perrito666: with lxc delete --force [19:52] katco: since you are in it, what was the new juju destroy --force? [19:52] in it == answering my stupid questions [19:53] perrito666: oh thx for clarification :) uh i think destroy-controller although i don't see a --force [19:54] I recall sinzui mentioning something else the other day [19:56] perrito666: juju kill-controller will call lxd if the controller won't shut down [19:56] tx [19:56] I insist, destroy sounds much harder than kill [19:58] dunno, destroy sounds like kill then burn the body [19:59] ERROR cannot obtain bootstrap information: Get https://10.0.3.1:8443/1.0/profiles: x509: cannot validate certificate for 10.0.3.1 because it doesn't contain any IP SANs [20:02] natefinch: where is the new location for the charm command patches? [20:02] katco: github.com/juju/charmstore-client [20:02] sinzui: ever had that fun error ^^ [20:02] perrito666: I have that on one of mine [20:03] natefinch: ? [20:03] perrito666: I think it's the old lxd not playing well with new lxd [20:03] natefinch: any clue on how to solve it? [20:03] perrito666: I haven't had time to poke at it yet [20:03] perrito666: not with 2.0. I have seen it with 1.18 and 1.20. I recall x509 errors were associated with clock skew.
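On the x509 error perrito666 pasted above: when a client dials a server by IP address, the server certificate has to carry that IP in its subject alternative names, which in Go's crypto/x509 means populating the IPAddresses field of the certificate template (the NotBefore/NotAfter validity window is also where clock skew turns into x509 failures). A minimal self-signed-certificate sketch to show where the IP SAN comes from - illustrative only, not how lxd or Juju actually generate their certificates:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }

        // IPAddresses becomes the certificate's IP SANs. Without an entry for
        // 10.0.3.1, a client dialing https://10.0.3.1:8443 fails with
        // "cannot validate certificate ... doesn't contain any IP SANs".
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "demo-server"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            IPAddresses:  []net.IP{net.ParseIP("10.0.3.1")},
        }

        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        fmt.Printf("generated a %d-byte self-signed cert with an IP SAN\n", len(der))
    }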
[20:05] I clearly have the wrong lxd version [20:06] which one should I have [20:08] perrito666: well... you may have the right lxd version with an old lxd environment still running from the old lxd [20:08] perrito666: I think that's the case for me [20:08] natefinch: I cannot create a new one so I think I have the wrong version [20:08] ERROR invalid config: can't connect to the local LXD server: json: cannot unmarshal number into Go value of type string [20:09] perrito666: ahh, yeah that is definitely the "you have the wrong lxd" error [20:09] perrito666: I think that means you have the one from the ppa... remove the ppa and remove lxd then install the standard one [20:09] hence my question, what is the right version :) [20:10] apt-get install that is [20:13] Bug #1559280 opened: creating hosted model config: opening model: endpoint: expected string, got nothing [20:13] Bug #1559285 opened: creating hosted model config: opening model: storage-endpoint: expected string, got nothing [20:16] ericsnow, do you recall why charmcmd.CharmstoreClient became *charmstore.Client? [20:18] fwereade: not precisely but I expect it was due to use elsewhere -> put in a common place [20:24] natefinch: ericsnow: fyi i'm working on the list-resources subcommand of charm [20:24] katco: k [20:24] katco: cool [20:24] katco: too bad we can't just use the same implementation in both commands, since in theory they should be identical [20:25] natefinch: i was thinking it would be ideal if we could somehow share the formatting [20:29] natefinch: how did channel end up coming along for the ride in the charm command? looking at attach as the example [20:29] katco: channel is set on the csclient.Client [20:30] natefinch: but how does it get passed to that through the command? [20:30] natefinch: e.g. charm attach --channel unpublished foo [20:31] natefinch: i think i see... when the csclient.Client gets instantiated it takes in a *cmd.Context [20:33] katco: attach doesn't use channels, publish does, though [20:33] natefinch: ah ty [20:34] Bug #1559293 opened: show-controller fails [20:46] katco: I can't release anyways, charm store is still not deployed [20:47] marcoceppi: kk we will keep you posted... coding remaining bit up right now, but then has to be reviewed, etc. we can still land on monday, yeah? [20:47] marcoceppi: do you happen to know a charm that has status-history implemented? [20:47] perrito666: isn't status-history a CLI command? [20:47] I meant update-status [20:47] katco: yeah, but I don't like the closeness [20:48] perrito666: sure, a lot of layers have it, but none are like "traditional" charms [20:48] marcoceppi: me either [20:48] * perrito666 is really having problems with this "don't drink coffee" stuff [20:48] marcoceppi: don't care, I need to check on the excessive verbosity issue and need a charm with it installed [20:49] perrito666: sure, any of the big-data ones use it. let me get you one [20:49] tx [21:02] ericsnow: standup time [21:04] Bug #1559299 opened: cannot obtain provisioning script [21:04] Bug #1559305 opened: Process exited with: 1. Reason was: () [21:04] Bug #1559310 opened: bootstrap fails: "model is not bootstrapped" [21:16] Bug #1559313 opened: admin-secret should never be written to the state
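On the "cannot unmarshal number into Go value of type string" error above: that is encoding/json refusing to decode a numeric JSON field into a string-typed struct field, which is what happens when a client's structs no longer match what a newer (or older) lxd returns. A stand-alone sketch of the failure mode - the field name here is made up; the log does not say which lxd field was involved:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // serverInfo mimics an out-of-date client-side struct: the server now sends
    // a number for this field, but the client still declares it as a string.
    type serverInfo struct {
        APICompat string `json:"api_compat"`
    }

    func main() {
        var info serverInfo
        err := json.Unmarshal([]byte(`{"api_compat": 1}`), &info)
        fmt.Println(err)
        // json: cannot unmarshal number into Go ... of type string
        // (exact wording varies by Go version)
    }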
[21:30] trivial review anyone? http://reviews.vapour.ws/r/4233/ [22:19] Bug #1559329 opened: Relation information is duplicated in status tabular format [22:31] Bug #1559329 changed: Relation information is duplicated in status tabular format [22:34] cmars: you're not still here, are you? [22:41] ouch, I fixed a duplicate bug [22:42] cherylj: nice catch [22:43] Bug #1559329 opened: Relation information is duplicated in status tabular format [22:43] there is some irony there, a bug about duplicate status is a duplicate [22:43] heh [22:44] perrito666, did i open a duplicate bug about duplicates? [22:44] you did [22:44] maybe i should call it a week :\ [22:44] and I fixed it, so if you want to check the 3-line fix before leaving I'll fix it [22:44] merge it* [22:44] oh, awesome, got a link? [22:44] http://reviews.vapour.ws/r/4235/ [22:45] I was literally playing with vim syntax checkers with that file :p [22:46] perrito666, looks good, but probably worth a test [22:47] perrito666, for a follow-up? [22:48] definitely, tests in status take a couple of hours to write [22:49] status tests are like a coding speedbump [22:49] Bug #1559329 changed: Relation information is duplicated in status tabular format [23:54] cherylj: sinzui: either of you still around? easy-peasy question/comment
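perrito666's actual three-line fix for the duplicated-relations bug is in the review linked above; as a generic illustration of the usual shape of such a fix, here is order-preserving deduplication with a set, using made-up names rather than Juju's status code:

    package main

    import "fmt"

    // dedupe drops repeated entries while keeping first-seen order - the common
    // pattern behind "the same relation is printed twice" fixes. This is only a
    // sketch, not the change that landed in Juju.
    func dedupe(relations []string) []string {
        seen := make(map[string]bool)
        out := make([]string, 0, len(relations))
        for _, r := range relations {
            if seen[r] {
                continue
            }
            seen[r] = true
            out = append(out, r)
        }
        return out
    }

    func main() {
        rels := []string{"wordpress:db mysql:db", "wordpress:db mysql:db", "mysql:cluster"}
        fmt.Println(dedupe(rels)) // [wordpress:db mysql:db mysql:cluster]
    }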