[00:04] <alexisb> axw, ping
[00:31] <thumper> fwereade_: you working?
[00:34] <alexisb> thumper, I just sent him that same question over mail
[00:35] <thumper> heh
[00:35] <thumper> I saw the push
[00:42]  * thumper stares evilly in wallyworld's direction
[00:42]  * thumper mutters something about renames
[00:43] <thumper> wallyworld: does the client facade not exist any more?
[00:43] <wallyworld> thumper: it does, but has gone on a diet
[00:43] <wallyworld> it also is version 1
[00:43] <thumper> where are the broken out facades?
[00:43] <wallyworld> all of the service methods are moved off to the service facade. before only half the methods had been moved across
[00:44] <wallyworld> not all facades are broken out
[00:44] <wallyworld> just the existing service facade had the remaining methods moved
[00:44] <wallyworld> not enough time to do everything else
[00:44] <thumper> hmm...
[00:44] <wallyworld> i can ask superman to make the earth spin slower
[00:45] <wallyworld> so we get 30 hours in a day
[00:45] <wallyworld> i thought it best to complete the half done work for service
[00:45] <wallyworld> remaining stuff will just have to wait
[00:45] <thumper> wallyworld: can we have a call?
[00:46] <wallyworld> sure
[00:46] <thumper> I think talking through what I need will be heaps faster
[00:46] <thumper> wallyworld: lets jump in our 1:1
[01:06] <axw> alexisb: pong, sorry, was doing school stuff
[01:07] <alexisb> axw, nws no rush, I am off for the day I will catch you next week
[01:07] <axw> alexisb: okey dokey, have a nice long weekend. ttyl
[01:07] <anastasiamac_> alexisb: have fun :D
[01:08] <anastasiamac_> axw: did u have a chance to see my msgs?
[02:09] <anastasiamac_> axw: wallyworld: can't bootstrap on FB at the moment - have to specify both controller name and cloud..
[02:10] <axw> anastasiamac_: in 1:1, I'll help you later
[02:10] <anastasiamac_> k
[02:19] <thumper> wallyworld: \o/
[02:19] <thumper> wallyworld: the test to check all the read only calls found two that were wrong
[02:20] <mup> Bug #1542127 opened: CPC sjson triggers failed to parse public key: openpgp: invalid argument: no armored data found <bootstrap> <ci> <simplestreams> <juju-core:Triaged> <https://launchpad.net/bugs/1542127>
[02:20] <wallyworld> thumper: awesome, yay, i look forward to it landing so i can adjust those API names
[02:23] <natefinch> cherylj: re: min juju version... uh... in theory all it needs is to merge master into the feature branch and run through CI (and fix whatever falls out of that). However, we're also busting our butts to get resources in, so I don't know if I'll have the time.  I'll certainly try.
[02:23] <mup> Bug #1542127 changed: CPC sjson triggers failed to parse public key: openpgp: invalid argument: no armored data found <bootstrap> <ci> <simplestreams> <juju-core:Triaged> <https://launchpad.net/bugs/1542127>
[02:29] <axw> anastasiamac_: "juju bootstrap <controller-name> lxd --upload-tools" should just work
[02:30] <anastasiamac_> axw: tyvm... i'll look in a sec
[02:32] <mup> Bug #1542127 opened: CPC sjson triggers failed to parse public key: openpgp: invalid argument: no armored data found <bootstrap> <ci> <simplestreams> <juju-core:Triaged> <https://launchpad.net/bugs/1542127>
[02:36] <thumper> axw: do we now create two models when bootstrapping?
[02:36] <thumper> axw: is that bootstrap in master, or a feature branch?
[02:36] <axw> thumper: not just yet
[02:36] <axw> thumper: feature branch
[02:36] <thumper> k
[02:36] <thumper> looking forward to it though
[02:36] <thumper> looking forward to it though
[02:36] <thumper> ugh
[02:36] <axw> thumper: I'll let you know when it's ready, if you want to play
[02:36] <thumper> cheers
[02:50] <mup> Bug #1542131 opened: Juju ignores index2.sjson in favour of index.json <bootstrap> <ci> <simplestreams> <juju-core:Triaged> <https://launchpad.net/bugs/1542131>
[03:06] <cherylj> natefinch: it's no problem to move min juju version to beta 1.  Just wanted to check on its status
[03:08] <natefinch> cherylj: I don't know when beta 1 is, but I know feature freeze is the 16th and that's what I was talking about as being questionable.
[03:08] <cherylj> natefinch: oic
[03:08] <natefinch> cherylj: 12 days from now comes up pretty quick :)  But like I said... I really want it in, so I'll do my best.
[03:08] <cherylj> thanks, natefinch
[03:12]  * thumper afk for a bit
[03:30]  * thumper back
[03:30] <thumper> I'm merging master into the model-migration
[03:30]  * thumper sighs
[03:30] <cherylj> that bad?
[03:30] <thumper> only minor conflicts so far
[03:30] <thumper> but I know some other changes are needed in my code
[03:30] <thumper> so off to rename all my bits
[03:33]  * menn0 imagines thumper renaming his body parts
[03:33] <cherylj> I didn't want to go there...
[03:34] <menn0> so is InitiateModelMigrationResults too unwieldy as a type name? :)
[03:36] <menn0> thumper: ^^
[03:37] <thumper> menn0: nah
[03:44] <menn0> thumper: good. review for http://reviews.vapour.ws/r/3745/diff/ then please
[03:44] <thumper> not a shitty review?
[03:47] <natefinch> menn0: IMO, the model part there is probably redundant.  I presume there's no other migrations going on in that section of code.. I'd probably call it InitMigrationResults.
[03:48] <natefinch> menn0: oh, I guess if it's in the API, maybe you do need the model part.
[03:48] <menn0> natefinch: yeah, it's based on the API name that returns that result
[03:48] <menn0> the API is InitiateModelMigration
[03:49] <natefinch> menn0: yeah, figured that out after I typed it.  I wish our API types were partitioned into a package per facade, so we didn't need java style naming.
[03:50] <menn0> natefinch: yep that would be much nicer. instead of having the one params package with all of them thrown together.
[04:02] <menn0> thumper: and another one http://reviews.vapour.ws/r/3746/
[04:23] <davecheney> https://github.com/juju/juju/pull/4305
[04:24] <natefinch> cherylj: is there a task for someone to rename our various -unstable repos to no longer be called unstable for 2.0?
[04:25] <davecheney> that's a very unwise question to be asking
[04:25] <cherylj> natefinch: not that I'm aware of
[04:25] <natefinch> cherylj: I figured
[04:26] <natefinch> davecheney: heh, well, I'd be happy to fix it if I had time.  I never liked using -unstable anyway.
[04:27] <wallyworld> thumper: so your branch landed, i can do that rename?
[04:27] <wallyworld> i see one landing, was that the one?
[04:32] <mup> Bug #1287718 changed: jujud on machine 0 stops listening to port 17070/tcp WSS api <cts-cloud-review> <mongodb> <state-server> <sts> <juju-core:Expired> <https://launchpad.net/bugs/1287718>
[04:32] <mup> Bug #1469193 changed: juju selects wrong address for API <kvm> <local-provider> <lxc> <network> <sts> <juju-core:Expired> <https://launchpad.net/bugs/1469193>
[04:43] <davecheney> cherylj: 1287718, that's what I am seeing trying to replicate an issue with the manual provider
[05:02] <davecheney> why does juju use websockets when neither the server nor the client is a browser?
[05:06] <natefinch> davecheney: just in case?
[05:06] <natefinch> davecheney: to be fair, one of the clients is a browser
[05:08] <davecheney> so we have our own encoding scheme over net/rpc over websockets over https
[05:08] <davecheney> not simple
[05:09] <natefinch> Juju: It's Not Simple™
[05:10] <davecheney> what a catchcry
[05:26] <wallyworld> axw: a one pager! fixes the recently migrated Service facade method names http://reviews.vapour.ws/r/3749
[05:27] <axw> wallyworld: looking
[05:27] <wallyworld> am looking at yours too
[05:28] <wallyworld> gawd, that sounds suss
[05:28] <axw> mine's bigger?
[05:28] <wallyworld> no mine is! more files
[05:33] <axw> wallyworld: LGTM, just one changed method name in a comment needs to be capitalised
[05:33] <wallyworld> ty
[05:42] <wallyworld> axw: looks great
[05:50] <axw> wallyworld: thanks
[05:50] <wallyworld> np
[05:51] <axw> wallyworld: what are you looking for in a feature test? I think the TestRegister checks everything we can at the moment
[05:52] <wallyworld> axw: actually, yeah, i was thinking add-user, then register, then login but we're not there yet
[05:52] <wallyworld> ignore me
[05:52] <axw> wallyworld: yep. I'll add one when I've finished the register branch
[05:53] <wallyworld> ta, sorry premature test request :-)
[05:54] <thumper> wallyworld: yes, that was the one
[05:55] <wallyworld> thumper: ta, i'm about to land my update, got to update romulus dependency first
[09:28] <voidspace> dimitern: hey, did you land my branch on controller-space-config?
[09:28] <voidspace> dimitern: functional-jes was still failing last night
[09:29] <dimitern> voidspace, yes, your last change is in maas-spaces-controller-config
[09:30] <voidspace> hmmm
[09:30] <dimitern> voidspace, and it seems the spaces discovery still blocked during create-model, but not during bootstrap
[09:30] <voidspace> dimitern: ok, needs more looking at then
[09:33] <voidspace> dimitern: where are you seeing that, it doesn't seem to be in the logs for machine-0/1/2
[09:37] <dimitern> voidspace, I'm looking at http://reports.vapour.ws/releases/3574/job/functional-jes/attempt/626
[09:37] <voidspace> dimitern: me too
[09:37] <dimitern> voidspace, don't you see the artifacts section with all the logs in the beginning?
[09:38] <voidspace> dimitern: yes and I looked through the logs of machine-0, machine-1 and machine-2
[09:38] <voidspace> dimitern: couldn't see mention of space discovery
[09:38] <voidspace> I may have just missed it
[09:38] <voidspace> also not in the console output
[09:39] <dimitern> voidspace, true.. let me see where I found that..
[09:39] <voidspace> nor logsink
[09:41] <dimitern> voidspace, there http://data.vapour.ws/juju-ci/products/version-3574/functional-jes/build-625/consoleText
[09:41] <dimitern> voidspace, and in the other later attempt: http://data.vapour.ws/juju-ci/products/version-3574/functional-jes/build-625/consoleText
[09:41] <dimitern> voidspace, but not in the last attempt 626
[09:47] <voidspace> the first two links are the same
[09:47] <voidspace> dimitern: so it *isn't* a problem in the most recent build
[09:47] <voidspace> which is what I would hope
[09:48] <dimitern> voidspace, sorry - 625 and 624 are the jobs I meant
[09:49] <voidspace> hmmm... 625 was today
[09:49] <dimitern> voidspace, all 3 runs on functional-jes though - 624, 625, and 626, were all with the most recent build with your fix
[09:49] <voidspace> oh no it wasn't
[09:49] <voidspace> (necessarily)
[09:49] <voidspace> dimitern: really, odd
[09:53] <voidspace> dimitern: I'm seeing run 3574 on Friday which has build 626 (not discovery)
[09:53] <voidspace> dimitern: run 3569, on Thursday and before my fix, build 620
[09:56] <voidspace> dimitern: ah, now I see them
[09:56] <voidspace> dimitern: so if a test fails with a build it does multiple attempts?
[09:56] <dimitern> voidspace, yeah, I guess it depends on how the job is set up
[09:57] <voidspace> dimitern: ok, I'm looking into it - will see if I can reproduce it
[10:00] <dooferlad> dimitern: hangout?
[10:01] <dimitern> dooferlad, omw
[10:35] <rick_h__> jam: ping
[10:37] <voidspace> rick_h__: jam isn't usually around on Friday
[10:38] <rick_h__> voidspace: ah, gotcha
[10:38] <rick_h__> thanks
[11:12] <voidspace> dimitern: frobware: dooferlad: my connection died
[11:51] <frobware> dimitern, if we're changing how bootstrap space discovery works, we won't need this (http://pastebin.ubuntu.com/14886630/) anymore. Preserved should we need it.
[11:54] <dimitern> frobware, yeah, but that's one heuristic less to deal with :)
[12:20] <perrito666> morning
[12:22] <anastasiamac_> perrito666: \o/
[12:38] <voidspace> dimitern: frobware: so, on a naive test I can bootstrap to joyent and create a new model
[12:38] <voidspace> dimitern: frobware: digging into what this test does that might be different
[12:40] <voidspace> ah, it does a bunch of deployments
[12:41] <voidspace> and it's the deploy that fails with the error
[12:41] <voidspace> that obviously involves starting machine agents
[12:42] <voidspace> it shouldn't involve starting a discovery worker though!
[12:42] <dimitern> voidspace, the discovery worker is in the MA
[12:42] <voidspace> dimitern: yes, but only with job ManageEnviron
[12:43] <dimitern> voidspace, yeah, which is true for the bootstrap model, but also for the created model later
[12:43] <voidspace> dimitern: but the failure is not during create model, but during deployment afterwards
[12:43] <dimitern> voidspace, not sure if we run jujud per model or 1 instance that handles both
[12:43] <voidspace> dimitern: the space discovery error is only a warning
[12:43] <voidspace> dimitern: the error  is:
[12:43] <voidspace> ERROR cmd supercommand.go:448 model "functional-jes-env2" not found
[12:43] <dimitern> voidspace, yeah, that's troubling
[12:44] <voidspace> ERROR Exception while environment "functional-jes-env2" active
[12:44] <voidspace> investigating
[12:44] <dimitern> voidspace, it might be that the model is not yet created when we try to see if it supports networking?
[12:44] <dimitern> unlikely, but..
[12:45] <voidspace> with the fix in place an error there should still terminate space discovery
[12:45] <voidspace> but that *could* set a new channel on the agent, if it keeps getting bounced
[12:45] <voidspace> I can add a check not to set a new channel if there's one there
[12:45] <dimitern> voidspace, I was thinking along these lines as well
[12:45] <dimitern> but anyway..
[12:45] <voidspace> so, if we're bounced we don't keep waiting
[12:45]  * dimitern steps out for ~1h btw
[12:46] <voidspace> I think I can test that too
[12:46] <voidspace> basically test for race conditions like that
[12:46] <voidspace> although then we should protect the channel with a mutex
[13:06] <perrito666> I wonder why PSUs don't come with sensors I can monitor from the PC, it would be very useful in a country such as this that has high temperatures
[14:32] <cherylj> tych0: you around?
[14:34] <mup> Bug #1542336 opened: cannot find package "github.com/gosexy/gettext" <blocker> <ci> <packaging> <regression> <test-failure> <unit-tests> <wily> <xenial> <juju-core:Triaged> <https://launchpad.net/bugs/1542336>
[14:37] <mup> Bug #1542336 changed: cannot find package "github.com/gosexy/gettext" <blocker> <ci> <packaging> <regression> <test-failure> <unit-tests> <wily> <xenial> <juju-core:Triaged by tycho-s> <https://launchpad.net/bugs/1542336>
[14:40] <mup> Bug #1542336 opened: cannot find package "github.com/gosexy/gettext" <blocker> <ci> <packaging> <regression> <test-failure> <unit-tests> <wily> <xenial> <juju-core:Triaged> <https://launchpad.net/bugs/1542336>
[14:43]  * dimitern is back
[14:46] <mup> Bug #1542336 changed: cannot find package "github.com/gosexy/gettext" <blocker> <ci> <packaging> <regression> <test-failure> <unit-tests> <wily> <xenial> <juju-core:Triaged by tycho-s> <https://launchpad.net/bugs/1542336>
[14:49] <mup> Bug #1542336 opened: cannot find package "github.com/gosexy/gettext" <blocker> <ci> <packaging> <regression> <test-failure> <unit-tests> <wily> <xenial> <juju-core:Triaged> <https://launchpad.net/bugs/1542336>
[14:52] <cherylj> can someone review my patch to revert the change that broke master?  http://reviews.vapour.ws/r/3752/
[14:53] <voidspace> cherylj: LGTM
[14:53] <cherylj> thanks, voidspace !
[14:53] <dimitern> cherylj, +1 from me as well
[14:54] <cherylj> tyvm, dimitern!
[14:55] <voidspace> dimitern: so, I have a fix and a test that proves it
[14:55] <voidspace> dimitern: just factoring out code common to the two agent tests that test discovery
[14:55] <dimitern> voidspace, great! what was the crux of the issue?
[14:55] <voidspace> dimitern: restarting the worker no longer replaces the discovery channel
[14:56] <voidspace> dimitern: well, that's a fix for what I *think* the error must be
[14:56] <voidspace> I didn't reproduce as such
[14:56] <voidspace> ...
[14:56] <voidspace> but given the new code it *must* be the problem
[14:56] <voidspace> if discovery is indeed the cause of the error
[14:56] <voidspace> dimitern: when this is up for review I will put more time back into getting the deploy to work locally so i can attempt to repro and see if this fixes it
[14:57] <voidspace> it's a good fix *anyway*
[14:57] <voidspace> as the test proves
[14:57] <dimitern> voidspace, right, so was it due to jingling and bouncing workers?
[14:57] <voidspace> dimitern: if you bounce the worker and restart, the closed channel was replaced with a new one
[14:57] <voidspace> dimitern: so repeatedly bouncing the worker could cause the api to be permanently blocked
[14:58] <dimitern> voidspace, nasty :/
[14:58] <voidspace> dimitern: I protected access to the channel with a mutex
[14:58] <voidspace> probably not strictly necessary as it's only set in one place - but at least it's goroutine safe now
[14:58] <dimitern> voidspace, I doubt that'll help though
[14:59] <dimitern> voidspace, what does the mutex aim to prevent?
[14:59] <voidspace> dimitern: *plus* I don't set it if it's already set
[14:59] <voidspace> that's the key thing
[14:59] <voidspace> the mutex makes *that* goroutine safe
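[editor's note: a minimal Go sketch of the pattern voidspace describes above — create the discovery channel at most once, with a mutex making the check-then-set goroutine safe, so a restarted (bounced) worker can never replace a channel a caller is already waiting on. Names here are illustrative, not juju's actual code.]

```go
package main

import (
	"fmt"
	"sync"
)

// worker holds a discovery channel that must survive worker restarts.
type worker struct {
	mu        sync.Mutex
	discovery chan struct{}
}

// discoveryChannel returns the existing channel if one is already set,
// and only creates a new one on first use. Without the "already set"
// check, every restart would hand out a fresh channel, leaving earlier
// callers blocked forever on the old, abandoned one.
func (w *worker) discoveryChannel() chan struct{} {
	w.mu.Lock()
	defer w.mu.Unlock()
	if w.discovery == nil {
		w.discovery = make(chan struct{})
	}
	return w.discovery
}

func main() {
	w := &worker{}
	c1 := w.discoveryChannel()
	c2 := w.discoveryChannel() // simulates the worker being bounced and restarted
	fmt.Println("same channel after restart:", c1 == c2)
}
```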
[14:59] <dimitern> voidspace, ah! that sounds like closer to it
[14:59] <voidspace> let me show you
[14:59] <voidspace> the test makes it clear what I'm protecting against
[15:00] <dimitern> ok, I'll have a look when you're readt
[15:00] <dimitern> ready even
[15:13] <voidspace> dimitern: http://reviews.vapour.ws/r/3753/
[15:13] <voidspace> I have to go pick up my daughter from school
[15:14] <frobware> dimitern, voidspace, dooferlad: did you see the latest maas-spaces CI run?
[15:14] <voidspace> dimitern: I have checked - the new test fails with the old code, the API remains blocked
[15:14] <voidspace> frobware: good?
[15:14] <frobware> dimitern, voidspace, dooferlad: forget it -- it's not final.
[15:14] <voidspace> hah
[15:15] <voidspace> frobware: http://reviews.vapour.ws/r/3753/
[15:15] <voidspace> back soon
[15:20] <tych0> cherylj: hello
[15:21] <cherylj> hey tych0, were you working on lxd to not pull in github.com/gosexy/gettext?
[15:21] <cherylj> it seems jam merged your PR into master last night
[15:21] <tych0> cherylj: yeah, it's done, just didn't get merged until this morning
[15:21] <tych0> i'll send a juju patch shortly
[15:21] <cherylj> tych0: cool, thanks. Remember to make it against your feature branch.
[15:22] <tych0> even though the other stuff landed in master?
[15:22] <tych0> or did that get reverted?
[15:22] <cherylj> I reverted the merge of 4131 from master
[15:22] <tych0> ok
[15:23] <cherylj> tych0: you may also want to merge master into your feature branch as a lot of stuff landed yesterday.
[15:23] <tych0> ah, yes
[15:23] <tych0> lots of conflicts.
[15:24] <tych0> sweet
[15:24] <cherylj> :(
[15:31] <tych0> cherylj: https://github.com/juju/juju/pull/4313
[15:41] <ericsnow> natefinch: oh, and were you going to help tych0?
[15:42] <natefinch> ericsnow: yes, but it sounds like everything is pretty much under control
[15:42] <ericsnow> tych0: ^^^ ?
[15:43] <tych0> ericsnow: i'm not sure, cherylj is probably the person who knows best. i don't fully understand the juju process here :)
[15:43] <ericsnow> tych0: no problem :)
[15:43] <natefinch> cherylj, tych0: let me know if you need any help
[15:44] <tych0> natefinch: cool, thanks
[15:45] <ericsnow> tych0: for that PR (#4313), it looks like you included the merge from master...the actual change isn't that big or invasive, is it?
[15:45] <tych0> ericsnow: yes, cherylj just told me to merge from master above
[15:45] <tych0> the actual change is two lines
[15:45] <perrito666> does anyone here know a lot about multiwatcher?
[15:45] <ericsnow> tych0: if so, it may make sense to merge master into your feature branch in a separate PR, land it, and then rebase your first PR
[15:46] <ericsnow> tych0: otherwise your change gets lost in the noise on reviewboard
[15:46] <tych0> lol, ok.
[15:46] <ericsnow> tych0: cool, thanks :)
[15:46] <tych0> well
[15:47] <tych0> unfortunately there's not a good way to keep this conflict resolution :(
[15:48] <ericsnow> tych0: you could use git rebase -i to move your specific changes out of the way to after the merge
[15:48] <ericsnow> tych0: maybe?
[15:48] <tych0> yeah, that doesn't work so hot
[15:48] <ericsnow> tych0: :(
[15:48] <tych0> i'll just redo the merge conflicts
[15:49] <ericsnow> tych0: ugh, I've been in that situation before too :(
[15:50] <tych0> ericsnow: https://github.com/juju/juju/pull/4314
[15:54] <tych0> let me know when you have that one merged
[15:54] <tych0> i have the one that goes on top of it as well. i think github is usually smart enough to resolve that in its UI, no idea about reviewboard, so i'll wait to make the PR until it's done so as not to confuse things further
[16:37] <dimitern> voidspace, hey, sorry - I was in a call, but I've looked at your PR and it LGTM
[16:42] <cherylj> dimitern, frobware_ have you guys taken a look at bug 1542206?
[16:42] <mup> Bug #1542206: space discovery still in progress <ci> <juju-core:Invalid> <juju-core maas-spaces:New> <https://launchpad.net/bugs/1542206>
[16:42] <cherylj> Seems to be weird behavior when creating hosted models
[16:42] <cherylj> with space discovery
[16:43] <dimitern> cherylj, yeah, and voidspace has a fix for it, which I've just reviewed - it should fix it
[16:43] <cherylj> dimitern: awesome, thanks!
[16:49] <mup> Bug #1287718 opened: jujud on machine 0 stops listening to port 17070/tcp WSS api <cts-cloud-review> <mongodb> <state-server> <sts> <juju-core:Confirmed> <https://launchpad.net/bugs/1287718>
[17:14] <marcoceppi_> I need help, questions about storage
[17:14] <marcoceppi_> is filesystem supported?
[17:17] <marcoceppi_> perrito666 cherylj natefinch ^?
[17:22] <cherylj> marcoceppi: you mean like adding storage as a new filesystem on a host?
[17:23] <cherylj> I'm not sure what you're asking (I'm so not familiar with the storage piece)
[17:42] <marcoceppi> cherylj: I'm going to mail the list
[17:48] <frobware> dimitern, voidspace: ping
[17:48] <frobware> dimitern, voidspace, dooferlad: IRC has been busted since around 2pm. :(  anything to be aware of?
[17:55] <dimitern> frobware, pong
[17:56] <dimitern> frobware, I had a chat with fwereade_ about what we discussed and I'll compile a list of use cases updated with some nice suggestions from him to share and discuss the approach on the ML
[17:57] <frobware> dimitern, ack
[17:57] <voidspace> frobware: pong
[17:57] <voidspace> dimitern: thanks
[17:57] <frobware> voidspace, can we / should we cherry-pick your change into dimiter's branch so we get another test of -jes tonight?
[17:57] <voidspace> frobware: yes
[17:58] <dimitern> frobware, and I decided to do something useful as today is not going as smoothly as I hoped :/, so I'm preparing a merge of master into maas-spaces (sans my other branch) to get at least closer to upstream, and see what is the simplest thing to do to fix the remaining issues
[17:58] <frobware> dimitern, ahhh. we should co-ordinate. I was doing the same.
[17:59] <frobware> dimitern, any objections if I cherry-pick voidspace's change into your branch and push to upstream?
[17:59] <dimitern> frobware, ah :) ok, well - I'm about 4/5 done, we can compare later?
[17:59] <frobware> dimitern, there's stuff landing in master. depends when you and I started.
[17:59] <dimitern> frobware, is it worth pursuing that branch anymore?
[18:00] <frobware> dimitern, nope. can we merge that back into maas-spaces before tonights CI run?
[18:00] <dimitern> frobware, started when tip was at 1cd7ac8c and I'm about 4/5 done with it
[18:00] <frobware> dimitern, though the timing of the CI run is based on any new changes in that branch
[18:00] <dimitern> frobware, I doubt we need most of it - at least the controller-space handling stuff
[18:01] <frobware> dimitern, I really wanted to be back on maas-spaces -- reduces confusion.
[18:01] <dimitern> frobware, well, that's also true for maas-spaces itself, so why not get up-to-date with master, and then see what to cherry pick from my other branch?
[18:02] <frobware> dimitern, there's the manual deployer tests which are only fixed in your branch. correct?
[18:02] <dimitern> frobware, well, in master as well
[18:02] <frobware> dimitern, getting confused. what would you like to do? which branch should be tested tonight (over the w/end)?
[18:02] <dimitern> frobware, I'm not saying all of that other branch of mine needs to go, some things are still useful, but the approach around controller spaces will most likely change
[18:03] <dimitern> frobware, sorry, let me restate: I'd prefer to get maas-spaces up-to-date with latest master and get a CI run of maas-spaces with it
[18:04] <frobware> dimitern, voidspace: I'm thinking the simplest thing for today is to cherry-pick the additional -jes fix
[18:04] <frobware> dimitern, voidspace: without additional churn we can validate whether we're better/worse
[18:04] <voidspace> yep
[18:04] <voidspace> dimitern: is there any way your work can be parallelised?
[18:05] <dimitern> voidspace, it is, I just need to write down a rough set of steps
[18:05] <dimitern> voidspace, considering the discussions etc.
[18:06] <voidspace> ok
[18:06] <voidspace> thanks
[18:06] <frobware> dimitern, voidspace: so let's take the -jes change into maas-spaces-controller-space-config and we can also move forward with master -> maas-spaces
[18:06] <dimitern> frobware, voidspace, but I believe we can get maas-spaces (with master merged in it) to a bless with a handful of changes only
[18:06] <voidspace> dimitern: great
[18:06] <voidspace> dimitern: that should be priority #1
[18:07] <dimitern> voidspace, frobware, agreed - those changes will bring back the compatible behavior (no controller-space to care about, no default space in bindings to cause issues)
[18:09] <dimitern> so we can then do those (rather invasive changes I must say) in a simpler step-wise manner
[18:09] <dimitern> frobware, ok, so as soon as I finish with merge conflicts will c-p the jes fix into the -config branch and push
[19:07] <voidspace> right, EOW
[19:07] <voidspace> have a good weekend everyone
[19:33] <tych0> cherylj: ericsnow: i'm not sure i understand the failure here: http://juju-ci.vapour.ws:8080/job/github-merge-juju/6249/console
[19:34] <tych0> it builds fine on my machine, but since i added that directory in the previous feature branch, perhaps i need to add it to some other listing somewhere so that whatever is building the tarball picks it up?
[19:36] <natefinch> tych0: the landing bot runs go 1.2, which means we have to mark all LXD code as // +build go1.3  so it only builds with 1.3 or higher
[19:37] <ericsnow> tych0: perhaps "container/lxd/instance.go" imports "github.com/juju/juju/tools/lxdclient" and the lxdclient package isn't in the repo?
[19:37] <natefinch> tych0: looks like you need to add that builds tag to container/lxd/instance.go
[19:37] <ericsnow> tych0, natefinch: or that :)
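[editor's note: a sketch of the build constraint natefinch mentions — the `// +build go1.3` line, followed by a blank line before the package clause, tells a Go 1.2 toolchain (like the landing bot's) to skip the file entirely. The file contents below are illustrative, not juju's actual container/lxd code.]

```go
// The constraint line below must sit before the package clause and be
// followed by a blank line; older toolchains that don't satisfy the
// constraint never compile this file, which is why unconstrained files
// in the same package break the go 1.2 bot.
// +build go1.3

package main

import "fmt"

// buildMessage exists only so the constraint has something to guard.
func buildMessage() string {
	return "compiled with go1.3 or later"
}

func main() {
	fmt.Println(buildMessage())
}
```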
[19:37] <tych0> oh, hm
[19:37] <tych0> cool, thanks guys
[19:37] <ericsnow> natefinch: nice catch
[19:38] <natefinch> ericsnow: went through all this rigamarole before, so I remember getting that dumb error
[19:38] <tych0> :)
[19:38] <natefinch> I wish the error were just a little bit smarter and would tell you "all go code in the directory is excluded by build tags"
[19:39] <natefinch> but that probably breaks layers
[19:51] <ericsnow> natefinch: I have my "drop ModelResource" patch up: http://reviews.vapour.ws/r/3759/
[19:51] <natefinch> ericsnow: cool
[20:00] <perrito666> omg that merge was painful
[20:18] <tych0> cherylj: bah. it looks like i forgot one. one more time please? sorry about that :(
[20:18]  * tych0 can't computer today
[20:20] <cherylj> tych0: np :)  You can also use $$fixes-1542336$$
[20:21] <cherylj> but I am also going to remove the blocker tag on that bug
[20:21] <cherylj> so thank you for reminding me :)
[20:21] <tych0> cherylj: oh, can i merge my own stuff?
[20:22] <cherylj> tych0: yeah, that one is going into your feature branch, so merge away (once you get a ship it, generally)
[20:22] <cherylj> it was blocking because of that bug, which was a critical blocker
[20:26] <natefinch> ericsnow: added the upload code - http://reviews.vapour.ws/r/3699/
[20:26] <natefinch> gotta go snowblow before it gets dark.
[20:26] <ericsnow> natefinch: k
[21:02] <perrito666> aghhh I spent half a day merging master into my branch and now is not compatible, how can that possibly be?????
[21:29] <mup> Bug #1542488 opened: 'juju storage pool create' has no useful help and a generic error message  <docteam> <juju-core:New> <https://launchpad.net/bugs/1542488>
[22:26] <mup> Bug #1542510 opened: aws-deploy-trusty-amd64-on-xenial-amd64 hooks not firing <ci> <ec2-provider> <hooks> <regression> <wily> <xenial> <juju-core:Triaged> <https://launchpad.net/bugs/1542510>
[22:56] <mup> Bug #1542518 opened: TestListAllEmpty <ci> <go1.5> <intermittent-failure> <unit-tests> <wily> <xenial> <juju-core:Triaged> <https://launchpad.net/bugs/1542518>
[22:56] <mup> Bug #1542520 opened: TestDestroyRemovesContainers fails on centos <centos> <test-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1542520>