[00:07] <menn0> davecheney: 62097950d4677fb8ef6c9c83e3d7555eebf2c653 means that the tests in cmd/jujud/agent now don't run at all
[00:07] <davecheney> oops
[00:07] <davecheney> will fix
[00:07] <menn0> davecheney: package_test.go is in the agent_test package but all the tests are in the "agent" package
[00:08] <menn0> davecheney: thanks
[00:08] <davecheney> well, that was unexpected
[00:08] <davecheney> i'll fix asap
[00:09] <menn0> davecheney: i'm currently ripping out the upgrade_test.go tests, which is why I noticed
[00:10] <davecheney> menn0: hmm
[00:10] <davecheney> tests run on my machine
[00:10] <davecheney> the way gocheck tests work
[00:10] <davecheney> i think i got away with it
[00:10] <davecheney> but it's inconsistent and i'll fix it
[00:11] <menn0> davecheney: if I run "go test ./cmd/jujud/agent -gocheck.f UpgradeSuite" the UpgradeSuite tests don't run but they used to before
[00:12] <davecheney> got it
[00:12] <davecheney> fix coming asap
[00:12] <menn0> thanks
[00:14] <davecheney> menn0: https://github.com/juju/juju/pull/3828
[00:17] <menn0> davecheney: ship it, with commentary
[00:18] <davecheney> lucky(~/src/github.com/juju/juju/cmd/jujud/agent) % go test -v
[00:18] <davecheney> OK: 107 passed, 3 skipped
[00:18] <davecheney> --- PASS: TestPackage (175.62s)
[00:18] <davecheney> PASS
[00:18] <davecheney> ok      github.com/juju/juju/cmd/jujud/agent    175.672s
[00:18] <davecheney> lucky(~/src/github.com/juju/juju/cmd/jujud/agent) % go test -v
[00:19] <davecheney> OK: 107 passed, 3 skipped
[00:19] <davecheney> --- PASS: TestPackage (173.66s)
[00:19] <davecheney> PASS
[00:19] <davecheney> ok      github.com/juju/juju/cmd/jujud/agent    173.710s
[00:19] <davecheney> before vs after
[00:19] <davecheney> not seeing a difference in test runs
[00:19] <davecheney> how many tests do you see ?
[00:21] <davecheney> menn0: ^^
[00:22] <menn0> 0
[00:23] <menn0> but I wasn't in the same directory
[00:23] <davecheney> ok
[00:23] <menn0> I did "go test -v ./cmd/jujud/agent -gocheck.v"
[00:25] <davecheney> i don't really understand how that matters
[00:25] <davecheney> gocheck is a singleton
[00:25] <davecheney> so all the tests are registered during init
[00:25] <davecheney> then something has to hook them up
[00:25] <davecheney> anyway
[00:25] <davecheney> it's fixed
[00:26] <menn0> NFI
[00:26] <menn0> davecheney: thank you
[00:26] <davecheney> zero ducks
[00:52] <davecheney> axw: you're on call review today? https://github.com/juju/juju/pull/3829
[00:52] <axw> davecheney: yup. looking
[00:54] <axw> davecheney: LGTM
[01:06] <thumper> bah humbug
[01:06]  * thumper wants sinzui
[01:06] <davecheney> don't we all
[01:15] <thumper> uggerbay
[01:15] <thumper> I've worked out why my call fails...
[01:15] <thumper> kinda
[01:16] <thumper> what I'm not sure about is how it passed before
[01:20] <davecheney> axw: https://github.com/juju/juju/pull/3830
[02:16] <thumper> oh FFS
[02:17] <thumper> wallyworld: got a minute to talk about a bug?
[02:17] <wallyworld> sure
[02:17] <thumper> 1:1 hangout
[02:41] <thumper> davecheney, menn0, mwhudson, waigani: going to have to skip the standup tomorrow morning as I'm ferrying kids down to the university for a school trip
[02:41] <thumper> will work from the library down there for the morning
[02:46] <davecheney> mkay
[02:46] <menn0> thumper: ok cool
[02:47] <waigani> thumper: I might see you there if marsh centre is closed
[02:48] <thumper> waigani: kk
[02:52] <mup> Bug #1519995 changed: Upgrades from 1.20.11 to 1.25.2 fail because of status <blocker> <ci> <regression> <status> <upgrade-juju> <juju-core:Invalid> <juju-core 1.25:In Progress by thumper> <https://launchpad.net/bugs/1519995>
[02:55] <mup> Bug #1519995 opened: Upgrades from 1.20.11 to 1.25.2 fail because of status <blocker> <ci> <regression> <status> <upgrade-juju> <juju-core:Invalid> <juju-core 1.25:In Progress by thumper> <https://launchpad.net/bugs/1519995>
[02:58] <mup> Bug #1519995 changed: Upgrades from 1.20.11 to 1.25.2 fail because of status <blocker> <ci> <regression> <status> <upgrade-juju> <juju-core:Invalid> <juju-core 1.25:In Progress by thumper> <https://launchpad.net/bugs/1519995>
[03:01] <mup> Bug #1519403 changed: 1.24 upgrade does not set environ-uuid <juju-core:Won't Fix by thumper> <https://launchpad.net/bugs/1519403>
[03:04] <mup> Bug #1519403 opened: 1.24 upgrade does not set environ-uuid <juju-core:Won't Fix by thumper> <https://launchpad.net/bugs/1519403>
[03:07] <mup> Bug #1519403 changed: 1.24 upgrade does not set environ-uuid <juju-core:Won't Fix by thumper> <https://launchpad.net/bugs/1519403>
[03:10] <mup> Bug #1519403 opened: 1.24 upgrade does not set environ-uuid <juju-core:Won't Fix by thumper> <https://launchpad.net/bugs/1519403>
[03:12] <thumper> wow mup is confused
[03:16] <mup> Bug #1519403 changed: 1.24 upgrade does not set environ-uuid <juju-core:Won't Fix by thumper> <https://launchpad.net/bugs/1519403>
[03:19] <mup> Bug #1519403 opened: 1.24 upgrade does not set environ-uuid <juju-core:Won't Fix by thumper> <https://launchpad.net/bugs/1519403>
[03:22] <mup> Bug #1519403 changed: 1.24 upgrade does not set environ-uuid <juju-core:Won't Fix by thumper> <https://launchpad.net/bugs/1519403>
[03:47] <axw> thumper: do you think it would be reasonable to reject upgrading to 2.0 if the env has no UUID?
[03:48] <axw> thumper: ... or is there always a UUID now?
[03:48] <thumper> axw: ummm.... can you explain more?
[03:48] <thumper> you mean cached locally?
[03:48] <axw> thumper: in environs/config there's a "ok bool" on UUID that says if the config has a UUID or not
[03:48] <thumper> ah...
[03:48] <thumper> ok
[03:48] <axw> thumper: is it only the client that might not have one?
[03:48] <thumper> config that comes from the environments.yaml will not have a UUID
[03:49] <thumper> it is expected that all servers will have one
[03:49] <thumper> as of 1.20, it is cached in response to API connections
[03:50] <thumper> as of 1.24 or 1.25, it is written to the cache as part of bootstrap rather than the first connect after bootstrap
[03:50] <axw> thumper: ok, cool. it would be nice if 99% of callers didn't have to use the (UUID, bool) call
[03:50] <thumper> agreed
[03:50] <thumper> as of 2.0, there is no environments.yaml
[03:50] <axw> leading to confusing me :)
[03:51] <thumper> so this should no longer be an issue
[03:51] <thumper> more cleanup
[03:51] <axw> thumper: excellent
[03:51] <axw> thanks
[03:51] <thumper> np
[03:57] <wallyworld> axw: we do need to pass the series to the backend. if the user overrides the series, we can't put that into the charm url and just pass that as we do now, because the modified charm url will not be resolvable in the store as it doesn't exist
[03:57] <wallyworld> so we need to record the true charm url and user specified series separately
[03:58] <wallyworld> this applies when the user forces a series not supported by the charm
[03:58] <axw> wallyworld: why do we resolve the URL in the backend?
[03:58] <axw> wallyworld: or try to fetch it on the backend
[03:58] <wallyworld> i'd have to check the code again but we do
[03:58] <wallyworld> we call repo.Resolve() server side
[03:59] <wallyworld> to add the charm to state
[03:59] <wallyworld> from memory
[03:59] <axw> wallyworld: ok
[03:59] <wallyworld> axw: i had the same thought as you originally but ran into these issues
[03:59] <wallyworld> when i tested live with series override
[04:09] <axw> wallyworld: what exactly is the issue with revision in the URL? you've taken out the revision in the bundle tests, but surely we still need to be able to specify revision somehow
[04:10] <wallyworld> axw: you can only specify revision if a series is in the url, so name-42 is not allowed but trusty/name-42 is. these changes are upstream charmstore changes
[04:10] <wallyworld> since in a multi-series world they reckon name-42 is ambiguous
[04:10] <wallyworld> not sure i agree tbh
[04:11] <axw> wallyworld: yeah, likewise. in any case, you removed revision from trusty/wordpress and wordpress both
[04:11] <wallyworld> works either way, i can add it back to the trusty case
[04:11] <wallyworld> must have been a typo
[04:30] <axw> wallyworld: for a new charm with series in metadata, can you resolve a URL with series if the series is supported?
[04:30] <wallyworld> yes
[04:31] <axw> wallyworld: ok, good. so it's just unsupported series that you can't resolve. makes sense
[04:31] <wallyworld> yep
[04:49] <davecheney> wallyworld: axw mgz can someone trigger a run of this job please http://juju-ci.vapour.ws:8080/job/run-unit-tests-race/
[04:49] <axw> davecheney: sure, if I can work out how
[04:50] <wallyworld> davecheney: what revision_build?
[04:50] <axw> was about to ask
[04:51] <wallyworld> i'm not sure what to type in there
[04:51] <davecheney> wallyworld: axw
[04:51] <davecheney> i have no idea
[04:51] <davecheney> its supposed to be automatic
[04:51] <davecheney> i've tried feeding it a revision and it barfs
[04:51] <axw> davecheney: I think it's dependent on the build-revision job, which hasn't run for ToT yet
[04:51] <davecheney> there is some external process that kicks off the job
[04:51] <davecheney> but i don't know what it is
[04:51] <davecheney> tot ?
[04:52] <axw> top of tree
[04:52] <axw> sorry
[04:52] <davecheney> worst alias for master, ever
[04:52] <axw> yes, that one
[04:52] <axw> working in LLVM has messed with my brain
[04:53] <davecheney> seek professional counselling
[04:53] <davecheney> is it possible to bump the build-revision job ?
[04:53] <davecheney> oh, my other gripe about the race job
[04:53] <davecheney> if you stop the job
[04:54] <davecheney> because it's testing a branch that you know won't pass
[04:54] <davecheney> something resubmits that job
[04:54] <axw> davecheney: trying a new build-revision now
[04:54] <davecheney> thanks!
[05:03] <axw> davecheney: race job is running now
[06:14] <wallyworld> axw:  lts and model series are validated now
[06:15] <axw> wallyworld: ta, looking
[06:38] <wallyworld> thanks axw, have to wait to land after charmstore is cut over
[08:13] <mup> Bug #1519527 changed: MAAS 1.9b2+ with juju 1.25.1:  lxc units all have the same IP address <openstack> <sts> <uosci> <MAAS:Triaged by mpontillo> <MAAS 1.9:Triaged by mpontillo> <MAAS trunk:Triaged by mpontillo> <https://launchpad.net/bugs/1519527>
[09:16] <frobware> dimitern_, ping
[09:17] <dimitern_> frobware, pong
[09:19] <voidspace> dooferlad: just to let you know I'm blocked on the fixed range stuff
[09:20] <dooferlad> voidspace: OK, I should be on that soon.
[09:20] <dooferlad> voidspace: just juggling multiple tasks :-(
[10:00] <dimitern_> voidspace, jam, fwereade, standup?
[10:03] <voidspace> dimitern: omw
[10:11] <dooferlad> frobware: if you are trying to hangout in Firefox and having sound problems then try Chrome - I had a similar issue.
[10:36] <axw> dooferlad: sorry, I brainfarted - of course it should be possible for the keepalive goroutine to be reentered. re-reviewing that bit now
[10:36] <dooferlad> axw: no problem
[10:36] <dooferlad> axw: thanks for taking another look.
[10:37] <frobware> dooferlad, voidspace, dimitern: looks like most folks have either declined or maybe'd the OS call today - I propose we cancel unless there are some agenda items
[10:38] <dooferlad> frobware: +1
[10:39] <dimitern> frobware, sgtm
[10:46] <dooferlad> voidspace: AddFixedAddressRange change just pushed
[10:46] <dooferlad> voidspace: making SetNodeNetworkLink nicer now.
[10:52] <dooferlad> voidspace: and now the SetNodeNetworkLink update you asked for is done.
[11:45] <voidspace> dooferlad: awesome, thanks
[11:52] <voidspace> dooferlad: hmmm... setting the range with the new api doesn't *seem* to work
[11:52] <voidspace> dooferlad: digging in (will check the test code to see if I'm doing it right)
[11:52] <dooferlad> var ar AddressRange
[11:52] <dooferlad> 	ar.Start = "192.168.1.100"
[11:52] <dooferlad> 	ar.End = "192.168.1.200"
[11:52] <dooferlad> 	ar.Purpose = []string{"dynamic"}
[11:52] <dooferlad> 	ts.AddFixedAddressRange(subnet.ID, ar)
[11:52] <voidspace> dooferlad: and also I still get nil back instead of an empty array when there are no ranges
[11:52] <dooferlad> where ts is the test server
[11:52] <voidspace> dooferlad: suite.testMAASObject.TestServer
[11:53] <voidspace> oh
[11:53] <voidspace> misunderstood
[11:53] <voidspace> dooferlad: that's *exactly* what I'm doing
[11:53] <voidspace> and reserved_ip_ranges for that subnet returns null
[11:54] <dooferlad> voidspace: http://pastebin.ubuntu.com/13513986/ is my little live test server
[11:55] <dooferlad> Clearly you need to have something to run it...
[11:55] <dooferlad> http://pastebin.ubuntu.com/13513993/ for example
[11:56] <dooferlad> from http://localhost:6776/api/1.0/subnets/1/?op=reserved_ip_ranges I get http://pastebin.ubuntu.com/13513997/
[11:57] <voidspace> dooferlad: try it without the call to NewIPAddress
[11:57] <voidspace> I have a hunch
[11:57] <voidspace> still hacking my code to try it
[11:57] <dooferlad> voidspace: good hunch
[11:57] <voidspace> dooferlad: early short circuit in your code
[11:58] <voidspace> dooferlad: ok, so my code now runs
[11:58] <voidspace> dooferlad: however the tests pass, which is bad - they should fail because now the allocatable range should be different
[11:58] <voidspace> dooferlad: but that's probably a bug in my code :-)
[11:59] <voidspace> dooferlad: I can remove the adding of the extra IPAddress once that's fixed
[12:00] <dooferlad> voidspace: fixed and pushed.
[12:00] <voidspace> dooferlad: that was quick :-)
[12:03] <dooferlad> voidspace: oh hang on. Fscked up. Wrong repo.
[12:03] <dooferlad> voidspace: try now
[12:04] <voidspace> dooferlad: I have a suspicion my Purpose may be being overwritten
[12:04] <voidspace> that may not be true
[12:05] <voidspace> but I'm not seeing the range with the right Purpose yet
[12:06] <voidspace> dooferlad: hah no, typo in my code
[12:13] <voidspace> dooferlad: and it's found another bug in my code :-)
[12:14] <voidspace> I was using net.IP(..) not net.ParseIP(...)
[12:14] <dooferlad> voidspace: success!
[12:14] <voidspace> dooferlad: and now it's done
[12:14] <voidspace> dooferlad: yep, mine is ready to land
[12:14] <voidspace> dooferlad: (or at least ready for review)
[12:14] <voidspace> dooferlad: is your branch proposed?
[12:14] <dooferlad> voidspace: sweet! I will propose my branch now.
[12:16] <dooferlad> voidspace: https://code.launchpad.net/~dooferlad/gomaasapi/subnets/+merge/278342
[12:16] <voidspace> dooferlad: great
[12:18] <voidspace> dimitern: frobware: dooferlad: http://reviews.vapour.ws/r/3252/
[12:19] <dimitern> voidspace, looking
[12:32] <mup> Bug #1520199 opened: provider/maas: better handling of devices claim-sticky-ip-address failures and absence of reserved IP address <kanban-cross-team> <maas-provider> <reliability> <tech-debt> <juju-core:Triaged> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1520199>
[12:34] <frobware> voidspace, looking
[13:04] <mattyw> is anyone around today?
[13:07] <dimitern> voidspace, reviewed
[13:30] <voidspace> dimitern: thanks
[13:31] <voidspace> dimitern: I didn't make dummy provider configurable because it's not needed as part of this PR
[13:31] <voidspace> dimitern: it can be made configurable when it's needed
[13:32] <voidspace> dimitern: and I don't think you're right about the space name transforming code
[13:32] <voidspace> dimitern: we store juju name and provider name and we need to be able to convert between them
[13:33] <voidspace> dimitern: so it shouldn't return an error
[13:34] <voidspace> dimitern: and yet again I disagree with your comments about nesting structs in tests
[13:34] <voidspace> dimitern: I think it makes them much harder to read
[13:34] <voidspace> dimitern: whitespace is good!
[13:36] <dimitern> voidspace, too much whitespace can kill you know :D
[13:36] <voidspace> dimitern: that's propaganda, there are no known cases
[13:36] <voidspace> dimitern: I think collapsing them will make them uglier
[13:37] <dimitern> voidspace, I disagree about the space names not needing to validate the juju name
[13:37] <voidspace> dimitern: I can wrap them in a helper as per  one of your other sessions
[13:37] <voidspace> dimitern: we do validate - that's what my code does
[13:37] <voidspace> dimitern: it transforms into a valid name
[13:37] <voidspace> dimitern: so it can't return an error
[13:37] <dimitern> voidspace, how about tests where you get a space
[13:37] <dimitern> voidspace, like "#$^#$^" from maas?
[13:38] <voidspace> dimitern: what do you want to do with them?
[13:38] <voidspace> dimitern: we haven't defined that behaviour yet - so just changing the name of the existing function won't help that case
[13:38] <dimitern> voidspace, report an error, rather than pass it happily back to the apiserver and fail on import
[13:39] <voidspace> dimitern: what error?
[13:39] <voidspace> dimitern: you haven't defined the error cases
[13:39] <voidspace> dimitern: and we *still* need a transform
[13:39] <voidspace> dimitern: why shouldn't we use #$^ as a space name
[13:39] <dimitern> voidspace, ok, can we then at least replace not just " " -> "-", but also any invalid chars for a juju space name to "_"
[13:40] <dimitern> voidspace, because that would be unusable in constraints or anywhere pretty much
[13:40] <voidspace> dimitern: where are the valid characters defined
[13:41] <voidspace> dimitern: can't you use strings for constraints - no escaping?
[13:41] <dimitern> voidspace, check names.IsValidSpace
[13:41] <voidspace> (as in use quotes)
[13:41] <voidspace> dimitern: ok, thanks
[13:41] <voidspace> dimitern: I still think you need therapy for your anti-whitespace prejudice
[13:42] <voidspace> dimitern: A good course of Python should help
[13:43] <dimitern> voidspace, I like whitespace, in general, but having 3 levels of mostly empty lines (except for "{") nested moves the EOL too much for my taste when reading the code, but I guess that's just me
[13:45] <dimitern> voidspace, and the same goes for too long lines that can be wrapped nicely, instead of arbitrarily according to various editor settings
[13:45] <voidspace> dimitern: well, the edit you're suggesting would result in *massively* long and completely unreadable lines
[13:45] <voidspace> whereas as it is I can just glance at it and understand it
[13:46] <voidspace> the whitespace shows up the structure as well as making the contents easier to read (single member per line)
[13:46] <dimitern> voidspace, with the risk of wasting a couple more minutes on an issue with hardly any consequence, what's unreadable?
[13:46] <voidspace> so I genuinely disagree and don't see the problem
[13:46] <voidspace> collapsing all the whitespace into a single line
[13:46] <voidspace> as you suggest
[13:47] <voidspace>  []network.SpaceInfo{{ .. }, { .. }}
[13:47] <voidspace> that would be one line of about three hundred chars
[13:47] <voidspace> or more
[13:47] <voidspace> 800 maybe
[13:48] <voidspace> and just breaking the line, instead of using whitespace for structure, means you have to pick along it to work out where members are
[13:48] <voidspace> there's a reason that all json pretty printers use whitespace to show structure
[13:48] <voidspace> because it makes data structures easy to read
[13:48] <dimitern> voidspace, I'm not saying use v := []map[string]string{map[string]string{"foo":"bar","one":"two"},map[string]string{"bar":"baz", "four":"five"}}
[13:48] <voidspace> it makes the vertically verbose but easy to scan
[13:48] <voidspace> dimitern: well, that's *specifically* what you say
[13:49] <dimitern> voidspace, I'm not saying use v := []map[string]string{ \n \t map[string]string{"foo":"bar","one":"two"}, \n \t map[string]string{"bar":"baz", "four":"five"}}\n
[13:50] <voidspace> dimitern: I still think the SubnetInfo as single lines would be too long and harder to read
[13:50] <dimitern> voidspace, oops anyway - I meant multi-line, but some braces together on the same line, rather than having "{\n" followed by "{\n" etc
[13:50] <voidspace> dimitern: ok, I'll see how that looks
[13:50] <voidspace> that maybe fine to me and would lose a level of indentation
[13:51] <voidspace> fair enough
[13:51] <dimitern> I should've pasted a formatted snippet rather than being lazy and trying to express what I meant on one line
[13:51] <voidspace> :-)
[14:05] <mup> Bug #1520247 opened: TestContainerProvisionerStarted fails due to unknown container type: lxd <ci> <lxd> <test-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1520247>
[14:34] <mgz_> version bump review please: http://reviews.vapour.ws/r/3254/
[15:10] <mgz_> non-americans, rally to the cause! plzreviewplz
[15:41] <frobware> mgz_, done
[15:44] <voidspace> dimitern: I totally misunderstood!
[15:45] <voidspace> dimitern: where you said "please call IsValidSpace" I misread it as "please call the function" (i.e. rename) IsValidSpace
[15:45] <voidspace> dimitern: I agree with you :-)
[15:45] <mgz_> frobware: merci
[15:46] <mgz_> we may actually not do a 1.26-alpha3 - but I also don't want to handle a major version bump in the code right now
[15:54] <mup> Bug # changed: 1261780, 1493602, 1496652, 1499277, 1511135, 1513236, 1515736, 1517344
[15:54] <mup> Bug #1520292 opened: Upgrade from 1.21.3 -> 1.22.8 -> 1.23.3 fails with 'ERROR a hosted environment cannot have a higher version than the server environment: 1.23.3.5 > 1.22.8.1' <sts> <juju-core:New> <https://launchpad.net/bugs/1520292>
[15:59] <dimitern> voidspace, oh :)
[16:00] <voidspace> dimitern: yeah, oops...
[16:00] <dimitern> voidspace, no worries
[16:10] <mgz_> I think we need a basics-of-git session in a couple of weeks
[16:25] <voidspace> dimitern: ping
[16:26] <voidspace> dimitern: so we have to decide what to do with invalid space names
[16:26] <voidspace> dimitern: do you think that converting invalid chars to "_" is the right thing to do?
[16:27] <voidspace> dimitern: if we error out instead we either have Spaces/Subnets return an error when they encounter an invalid space name
[16:27] <voidspace> dimitern: which would mean you can't use *any* spaces or subnets if one of the space names is invalid
[16:27] <voidspace> dimitern: or we just drop the invalid space and its subnets
[16:28] <voidspace> dimitern: but if we do multiple character substitutions we risk name clashes (two different MAAS spaces being seen as the same juju space)
[16:29] <dimitern> voidspace, I think we need to be consistent
[16:30] <voidspace> dimitern: we need to eat too
[16:30] <voidspace> dimitern: but that doesn't answer the question either... ;-)
[16:30] <dimitern> voidspace, sorry, typing still..
[16:30] <voidspace> I agree we *must* be consistent
[16:30] <voidspace> I'm only teasing
[16:31] <dimitern> voidspace, if a maas space name cannot be converted to a valid juju space name, we can use a generated name or ask user to provide one
[16:31] <voidspace> dimitern: how can we ask the user to provide one?
[16:31] <dimitern> voidspace, or both (i.e. use a generated-but-valid when importing, and allow the user to rename it)
[16:32] <dimitern> voidspace, if we added them manually, that wouldn't be an issue, as we ask for a name anyway
[16:32] <voidspace> dimitern: right
[16:33] <dimitern> voidspace, but the problem is with auto importing
[16:33] <voidspace> dimitern: however, the provider has no access to state - so that will have to happen a layer above the provider
[16:33] <voidspace> dimitern: I'm almost tempted to remove SpaceName from SpaceInfo and SubnetInfo and only use ProviderSpaceId
[16:33] <dimitern> voidspace, yeah - so doesn't that imply we shouldn't try to convert them in the provider ?:)
[16:33] <voidspace> and let the conversion happen elsewhere
[16:33] <voidspace> right
[16:33] <dimitern> voidspace, that sounds better than inventing names
[16:34] <voidspace> dimitern: well, it doesn't avoid the problem - just moves it...
[16:34] <voidspace> but moving it out of *my* code is fine
[16:34] <voidspace> ;-)
[16:34] <dimitern> voidspace, well, we should talk to the provider with names/ids it understands
[16:34] <voidspace> dimitern: yep
[16:34] <dimitern> voidspace, but the translation needs to happen in the apiserver I think
[16:34] <voidspace> dimitern: I'll leave SpaceName on SpaceInfo but remove it from SubnetInfo
[16:35] <voidspace> agreed
[16:35] <dimitern> voidspace, we do have a similar case already - with relation tags
[16:35] <voidspace> ah right
[16:35] <dimitern> voidspace, since "relation-#" is the tag format but can't be converted to the other form "svc1:rel1 svc2:rel2", there's an api call to do that
[16:37] <dimitern> voidspace, I had a similar issue to solve with subnetsToZones - so the provisioner apiserver facade does the translation between juju id (cidr) and provider subnet id
[16:38] <dimitern> before calling start instance
[16:56] <fwereade> if anyone feels like punishing themselves, http://reviews.vapour.ws/r/3255/ is another monster :-/
[16:57] <fwereade> but it has a number of pleasing features, and a *lot* of it is moves/renames that both GH and RB are getting confused by
[17:03] <mup> Bug #1520314 opened: Environment not usable after upgrade from 1.21.3 -> 1.25.0 fails with '"cannot retrieve meter status for unit xxx/0: not found"' <sts> <juju-core:New> <https://launchpad.net/bugs/1520314>
[18:27] <voidspace> dimitern: ping
[18:30] <voidspace> dimitern: does MAAS support ipv6 subnets?
[18:33] <voidspace> dimitern: looks like it does, but you can only have one ipv6 subnet per interface
[18:33] <voidspace> dimitern: https://maas.ubuntu.com/docs/ipv6.html
[19:15] <frobware> voidspace, part of me says land something with ipv4, then we can look at ipv6. thoughts?
[19:38] <voidspace> frobware: yeah, that's maybe good enough for now
[19:38] <voidspace> frobware: ipv6 needs careful thinking about
[19:38] <voidspace> frobware: in which case, it's done
[19:38] <voidspace> that was the last issue
[20:33] <thumper> morning team
[20:33] <thumper> I'm going to be looking into the instance poller and presence code today
[20:33] <thumper> as it appears to not be working, and will impact the HA ability
[20:33] <thumper> what I observed yesterday on EC2 with 1.25 was a working HA system, up until I took down machine-0
[20:34] <thumper> The machine was never shown as missing, and all the calls around HA, rsyslog worker etc that need the addresses of the state servers kept saying that machine-0 was good
[20:34] <thumper> and the workers would die because they couldn't connect to it
[20:35] <thumper> this also stopped all logs flowing from all machines as the rsyslog worker kept restarting
[20:35] <thumper> so, I'm going to check the logging that I can ratchet up for instance polling and presence, possibly add some extra, and try to reproduce
[21:05]  * thumper headdesks
[21:07] <thumper> oh FFS
[21:08] <thumper> the instance poller worker takes the "instance not found" error, and logs a warning, then returns nil for the error
[21:08] <thumper> it's all good
[21:30] <mgz_> heya thumper
[21:30] <thumper> o/ mgz_
[21:36] <davechen1y> thumper: kill.it.with.fire
[21:45] <mgz_> menn0: when you have a chance, can you look over bug 1520314?
[21:45] <mup> Bug #1520314: Environment not usable after upgrade from 1.21.3 -> 1.25.0 fails with '"cannot retrieve meter status for unit xxx/0: not found"' <sts> <upgrade-juju> <juju-core:Triaged> <https://launchpad.net/bugs/1520314>
[22:08] <menn0> mgz_: will take a look soon
[22:13] <thumper> :(
[22:14] <thumper> that's how I feel reading this code
[22:14] <thumper> ugh
[22:15] <thumper> I have a very strong suspicion that this code may deadlock an agent coming down in some situations
[22:15] <thumper> because naked channel writes and reads are fine right?
[22:15] <thumper> no problem there

[22:21]  * thumper has to close up and go and move some kids
[22:21] <thumper> bbs
[22:27] <mup> Bug #1520373 opened: stopped instance in EC2 always considered "started" <juju-core:Triaged> <https://launchpad.net/bugs/1520373>
[22:47] <menn0> mgz_: you still around?
[22:51] <mgz_> menn0: yo
[22:52] <menn0> mgz_: looking at bug 1520314, your big comment implies you tried the upgrades yourself. is that right?
[22:52] <mup> Bug #1520314: Environment not usable after upgrade from 1.21.3 -> 1.25.0 fails with '"cannot retrieve meter status for unit xxx/0: not found"' <sts> <upgrade-juju> <juju-core:Triaged by menno.smits> <juju-core 1.25:In Progress by menno.smits> <https://launchpad.net/bugs/1520314>
[22:52] <menn0> or were you just looking at the logs?
[22:52] <mgz_> menn0: no, I read the logs
[22:52] <menn0> ok right
[22:53] <menn0> mgz_: do you know who the env was being deployed/
[22:53] <menn0> ?
[22:53] <mgz_> niedbalski tried to repro, but hit some other 1.23 weirdness in bug 1520292
[22:53] <mup> Bug #1520292: Upgrade from 1.21.3 -> 1.22.8 -> 1.23.3 fails with 'ERROR a hosted environment cannot have a higher version than the server environment: 1.23.3.5 > 1.22.8.1' <bug-squad> <sts> <upgrade-juju> <juju-core:Triaged> <https://launchpad.net/bugs/1520292>
[22:53] <mgz_> menn0: dtag
[22:54] <menn0> mgz_: sorry, I meant "how" not "who"
[22:55] <mgz_> would have been deployer, in stages
[22:55] <menn0> mgz_: do I have access to that and are there any instructions you know of?
[22:56] <mgz_> no, reproducing the deployments we do on site is kind of an ongoing pain
[22:57] <mgz_> I have a google doc for dtag overall, not sure if it's the same steps we're looking at here though
[22:57] <menn0> ok fair enough
[22:57]  * menn0 digs
[22:58] <mgz_> I emailed you a link.
[22:58] <menn0> mgz_: thanks
[23:01] <thumper> mgz_: that bug 1520292 above, the only way that could occur is if there is juju 1.25 in the mix somehow
[23:01] <mup> Bug #1520292: Upgrade from 1.21.3 -> 1.22.8 -> 1.23.3 fails with 'ERROR a hosted environment cannot have a higher version than the server environment: 1.23.3.5 > 1.22.8.1' <bug-squad> <sts> <upgrade-juju> <juju-core:Triaged> <https://launchpad.net/bugs/1520292>
[23:01] <thumper> that error doesn't exist in the codebase before 1.25
[23:01] <mgz_> thumper: my assumption is the staged upgrade was continued, but it was in the borked-1.23 status
[23:02] <mgz_> so, we had agents on all different versions muddled up
[23:02] <thumper> I'm prepared to say won't fix for anything that goes through a 1.23 version
[23:02] <thumper> that was so full of fail
[23:03] <mgz_> we don't seem to have communicated that well, even to the guys we have deploying stuff for customers
[23:03] <mgz_> (I agree, it's what I said in the bugs)
[23:05] <menn0> mgz_, thumper : we should get 1.23 out of the public streams
[23:05] <thumper> aye
[23:05] <menn0> there really is no reason to let anyone use it
[23:05] <mgz_> I was thinking about that, does it screw up existing deployments at all?
[23:07] <mgz_> we're already sticking tools in mongo in 1.23, are there any circumstances we go back to streams for the current tools version?
[23:08] <mgz_> the original promise was we never remove anything from streams once added.
[23:09] <menn0> mgz_: I guess it needs some thought by various people
[23:09] <mgz_> if only we were getting the relevant people in the same place some time soon
[23:11] <menn0> mgz_: shall we get it on the agenda then?
[23:12] <mgz_> :)
[23:17] <wallyworld> anastasiamac: standup?
[23:49] <davechen1y> thumper: i can apply the same methodology using build tags to exclude failing packages from building with go > 1.2 ?
[23:49] <davechen1y> s/building/testing
[23:49] <davechen1y> and get wily and xenial passing
[23:49] <davechen1y> then voting
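The build-tag trick davechen1y mentions looks something like this (a sketch; the package name is hypothetical): a file carrying an old-style constraint is excluded from compilation on Go 1.3 and later, because toolchains set the cumulative release tags go1.1, go1.2, go1.3, ... and `!go1.3` rejects any of them from 1.3 up. Placing the constraint atop a package's failing _test.go files keeps them out of `go test` on newer toolchains.

```go
// +build !go1.3

// This file (and its tests) only build on Go <= 1.2; toolchains from
// 1.3 onward set the release tag "go1.3", which the constraint above
// excludes.
package somepkg
```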