[00:06] <katco> anastasiamac: http://reviews.vapour.ws/r/713/ when you have a moment :)
[00:06] <katco> anastasiamac: conceptually rather simple
[00:07] <anastasiamac> katco sorry with alexis. brb
[00:09] <katco> anastasiamac: no rush at all! ty!
[00:23] <wallyworld_> axw: could you please take a look at https://code.launchpad.net/~wallyworld/gomaasapi/testservice-additions/+merge/246237 when you get a chance
[00:25] <wallyworld_> thumper: got 2 minutes?
[00:58] <rick_h_> wallyworld_: anastasiamac reply inbound, please let me know if any of it doesn't make sense or you want to chat on a hangout or anything to help clear up my questions/concerns
[00:58] <anastasiamac> rick_h_: thnx! will look :)
[01:03] <axw> wallyworld_: looking
[01:03] <wallyworld_> ty
[01:04] <katco> wallyworld_: ty for the review; comments
[01:04] <wallyworld_> sure, np
[01:04] <axw> wallyworld_: do you know when ?op=deployment_status was added? since forever? (I can't see a mention in the MAAS 1.5 API docs)
[01:04] <katco> wallyworld_: just so there's no ambiguity: i left comments to your comments :)
[01:04] <wallyworld_> axw: i believe mid 2014, so 1.7 i *think*
[01:05] <axw> yeah, 1.7's docs are the first time it's mentioned
[01:05] <wallyworld_> maybe even latish 2014
[01:09] <anastasiamac> katco: reviewed too :)
[01:09] <katco> anastasiamac: ty!
[01:09] <anastasiamac> katco: it was a pleasure and I learn something new every time I read ur code!
[01:11] <axw> wallyworld_: reviewed
[01:11] <wallyworld_> ty
[01:18] <thumper> wallyworld_: hey
[01:19] <thumper> wallyworld_: just eating, catch up in a few minutes?
[01:19] <wallyworld_> thumper: i'll ping you in 15? just in a meeting
[01:20] <thumper> wallyworld_: kk
[01:20] <axw> dimitern: thanks very much for landing my branch
[01:53] <wallyworld_> thumper: free now, meet our 1:1?
[01:54] <thumper> sure
[02:02] <axw> anastasiamac: can you please take a look at https://github.com/juju/juju/pull/1394?
[02:03] <anastasiamac> axw: of course!
[02:03] <axw> the RB bot seems not to have picked it up
[02:03] <anastasiamac> axw: the bot is on lunch? :) it must b on BNE time. I'll look :)
[02:07] <anastasiamac> axw: reviewed :)
[02:14] <axw> anastasiamac: thanks
[02:15] <axw> it's pretty trivial, but I guess I should follow protocol
[02:15] <axw> wallyworld_: ^^ can you please review the review when you have a moment
[02:15] <wallyworld_> sure
[02:18] <wallyworld_> axw: lgtm, thanks for review board fixes also
[02:18] <axw> thanks
[02:41] <thumper> axw: addressed all your review comments: http://reviews.vapour.ws/r/688/diff/#
[02:41] <axw> thumper: I saw, thanks
[02:41] <axw> thumper: just noticed an import in the wrong block in environmentmanager.go
[02:41] <thumper> really?
[02:41]  * thumper looks
[02:42] <axw> juju/juju/version
[02:42] <thumper> fixed
[02:42] <thumper> axw: isn't the bot blocked?
[02:43] <axw> thumper: master was unblocked overnight, dunno about now
[02:43] <thumper> there is an open critical bug above
[02:43] <thumper> wwitzel3 was looking at it
[02:43] <axw> ah, I see
[02:43] <thumper> a power64 failure
[03:13] <wallyworld_> axw: a small fix to maas test server, sigh
[03:14] <wallyworld_> https://code.launchpad.net/~wallyworld/gomaasapi/testservice-numeric-field-fix/+merge/246250
[03:14] <axw> looking
[03:15] <axw> wallyworld_: what triggered the error?
[03:16] <axw> wallyworld_: I see tests updated, but no new test that would trigger the problem before?
[03:17] <wallyworld_> axw: on the juju side, the start instance code was pulling out mem from the jsonobject map and assuming a float. so i had to modify the juju unit tests to construct test server nodes with "memory": 8192 instead of "memory": "8192". this then triggered the testserver breakage
[03:17] <axw> I see
[03:18] <axw> hence the assertion in the last line of the diff
[03:18] <axw> ok
[03:18] <axw> wallyworld_: approved
[03:18] <wallyworld_> yeah, basically making testserver behaviour match what we appear to get with real maas
[03:18] <wallyworld_> ty
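The decoding difference behind that fix can be sketched as follows: with Go's generic JSON decoding, a numeric "memory" field comes out as a float64, while the quoted string form fails the float64 type assertion. This is a minimal illustration only; `nodeMem` is a hypothetical helper, not the real gomaasapi code.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// nodeMem decodes a MAAS-style node JSON blob into a generic map and pulls
// out "memory" as a float64, the type encoding/json uses for JSON numbers.
// A string value like "8192" makes the type assertion fail.
func nodeMem(raw []byte) (float64, error) {
	var node map[string]interface{}
	if err := json.Unmarshal(raw, &node); err != nil {
		return 0, err
	}
	mem, ok := node["memory"].(float64)
	if !ok {
		return 0, fmt.Errorf("memory is %T, expected a JSON number", node["memory"])
	}
	return mem, nil
}

func main() {
	m, err := nodeMem([]byte(`{"memory": 8192}`))
	fmt.Println(m, err) // 8192 <nil>
	_, err = nodeMem([]byte(`{"memory": "8192"}`))
	fmt.Println(err != nil) // true: string form breaks the float assumption
}
```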
[03:28] <wallyworld_> axw: and here's a one line juju fix (plus tests) http://reviews.vapour.ws/r/718/
[03:29] <wallyworld_> thumper: that critical bug is assigned to wayne, is anyone on your team looking at it per chance? you made an earlier comment on the bug, so i reckon last person who touches it, owns it :-)
[03:29] <thumper> no, not looking at it
[03:31] <axw> wallyworld_: reviewed
[03:32] <wallyworld_> great, ty. nfi how that slipped through previously :-(
[05:37] <wwitzel3> yeah I was looking at it, never figured it out
[05:38] <wwitzel3> I forgot to unassign it when I hit EOD
[06:15] <dimitern> axw, no worries
[07:04] <axw> dimitern: thanks for the review
[07:04] <dimitern> axw, np :)
[07:06] <dimitern> axw, I assume you've run the live tests as well, right?
[07:06] <axw> dimitern: yes
[07:06] <dimitern> axw, +1
[07:10] <dimitern> axw, btw with the diskmanager disable we won't see anymore errors like ERROR juju.worker.diskmanager lsblk.go:105 error checking if "sr0" is in use: open /dev/sr0: no medium found ?
[07:11] <dimitern> axw, perhaps it shouldn't even try on a local environment, when it's enabled again
[07:12] <axw> dimitern: that error is news to me, but you are correct, it won't do that anymore
[07:12] <dimitern> axw, I was meaning to file a bug about it, but I keep forgetting
[07:12] <axw> dimitern: we will/want to eventually have support for local, though it'll require more work for LXC
[07:13] <dimitern> axw, right, fair point
[07:13] <axw> dimitern: probably doesn't matter now, we can worry about it when it gets enabled without the feature flag and local has been looked into
[07:14] <dimitern> axw, yeah - I should've done the same about the networker - feature flag is better than dealing with upgrades and machine jobs set correctly
[07:14]  * dimitern needs to go apply for a SA visa - most likely back in time for standup
[07:19] <axw> wallyworld: re the disk parameters error message, I'd prefer not to change it because I want to keep the disk name in the error message. the "name" is just a number, so having "disk" before that gives context
[07:19] <wallyworld> axw: oh, i just meant the text description bit, the "to be created" part I think can go
[07:20] <axw> wallyworld: "cannot get parameters for disk %q" then?
[07:20] <wallyworld> yeah, i think so, more generic and doesn't add superfluous context
[07:22] <axw> fair enough
[07:23] <axw> wallyworld: I don't really understand your other question
[07:23] <axw> wallyworld: what test are you looking for?
[07:24] <wallyworld> when a machine is created without disk params, that the getDiskInfo() method or whatever it is called returns empty
[07:24] <wallyworld> a machineWithDisks was created specially to check that the params were recorded
[07:24] <wallyworld> but maybe other non machineWithDisk machines could be checked to ensure there is no info
[07:25] <wallyworld> just need to do it once somewhere
[07:25] <axw> wallyworld: yep, adding
[07:25] <axw> thanks
[07:25] <wallyworld> sure, sorry if it was too pedantic
[07:26] <axw> nope, that's fine
[07:32] <wallyworld> fwereade: can i ping you later for our 1:1 - i have soccer tonight
[08:39] <rvba> Hi wallyworld… it seems I missed your ping from yesterday… do you still need me?
[08:50] <TheMue> dimitern: morning. had a bad night and feeling weak now. so I'll step back into bed.
[08:51] <TheMue> dimitern: will have irc and mail open, so I later see what's happening
[08:51] <TheMue> dimitern: I sent you the mails we talked about in cc
[09:10] <voidspace> morning all
[09:25] <dimitern> TheMue, sure, I hope you get well soon! :/
[09:26] <dimitern> morning voidspace
[09:32] <voidspace> o/
[10:50] <dimitern> voidspace, you know what - let's split it up like this - i'll start from one end - network.InterfaceConfig, using it to render the lxc config, etc. while you start from the api call we need before startinstance
[10:51] <dimitern> voidspace, and we'll integrate it in the middle somewhere
[10:52] <dimitern> voidspace, how does this sound?
[11:16] <perrito666> morning
[11:17] <wallyworld> rvba: hey there, would love a quick chat if you are free
[11:17] <rvba> wallyworld: sure.  Hangout?
[11:17] <wallyworld> https://plus.google.com/hangouts/_/canonical.com/tanzanite-stand
[11:41] <voidspace> dimitern: oops, missed your message
[11:42] <voidspace> dimitern: but yes, that sounds good to me
[11:42] <dimitern> voidspace, great, I'm already on it
[11:45] <voidspace> cool
[11:55] <voidspace> dimitern: name for the new API call?
[11:55] <voidspace> dimitern: ContainerInterfaceInfo ?
[11:56] <voidspace> dimitern: taking a slice of machine ids and returning a slice of InterfaceConfig
[11:56] <dimitern> voidspace, yeah, sounds good
[11:56] <voidspace> or slice of mappings of interface name to InterfaceConfig
[11:56] <dimitern> voidspace, although..
[11:57] <voidspace> if the InterfaceConfig includes name then strictly a mapping is redundant
[11:57] <dimitern> voidspace, while doing the changes around the network package I realized we don't need network.InterfaceConfig
[11:57] <voidspace> especially as they'll only have one entry initially
[11:57] <voidspace> dimitern: just extend SubnetInfo ?
[11:57] <dimitern> voidspace, we might use network.Info, rename it to InterfaceInfo and extend it to include what we need
[11:57] <voidspace> InterfaceInfo or SubnetInfo
[11:58] <dimitern> voidspace, SubnetInfo is very basic, doesn't contain NIC-specific settings, like whether it's auto-start, extra config (e.g. pre-up/post-down rules, routes, etc.)
[11:58] <voidspace> dimitern: ok
[11:59] <voidspace> hmm...
[11:59] <voidspace> although as we fleshed it out earlier we didn't discuss including that info anyway
[11:59] <dimitern> voidspace, so let's say the ProvisionerAPI.ContainerInterfaceInfo(ids) []result struct { *error, []results }
[11:59] <voidspace> dimitern: ok, that's enough to be getting on with
[12:00] <voidspace> I can hassle you for details as I go
[12:00] <dimitern> voidspace, yeah, but as long as I'm changing it I want to include the necessary fields so we can model: physical interfaces, vlans (both of these we can, but partially), bridges, static addresses, etc.
[12:00] <voidspace> dimitern: ok
[12:00] <dimitern> voidspace, sure, I hope to propose it soon
[12:02] <voidspace> dimitern: I'll be going on lunch before the MAAS call, so I'll have made a start by then, but probably only a start
[12:02] <dimitern> voidspace, no worries
[12:03] <voidspace> dimitern: params.Entities as the arg type?
[12:03] <voidspace> and check each of the entities maps to a container machine
[12:03] <dimitern> voidspace, that's right, cheers
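The API shape agreed in this exchange - a list of machine tags in, one result per entity out, each carrying either an error or the interface info - could be sketched roughly like this. All type and function names below are simplified stand-ins for the real juju params and apiserver types, and the container-tag check is only illustrative.

```go
package main

import (
	"fmt"
	"strings"
)

// Entity mimics the params.Entities element: a single tag string.
type Entity struct{ Tag string }

// InterfaceInfoResult carries either an error or the per-container info
// (a placeholder here for the real []network.InterfaceInfo slice).
type InterfaceInfoResult struct {
	Error error
	Info  []string
}

// ContainerInterfaceInfo returns one result per requested entity, rejecting
// tags that do not look like container machine tags (e.g. "machine-0-lxc-1").
func ContainerInterfaceInfo(args []Entity) []InterfaceInfoResult {
	results := make([]InterfaceInfoResult, len(args))
	for i, e := range args {
		if !strings.HasPrefix(e.Tag, "machine-") || strings.Count(e.Tag, "-") < 3 {
			results[i].Error = fmt.Errorf("%q is not a container machine tag", e.Tag)
			continue
		}
		results[i].Info = []string{"eth0"} // address allocation would happen here
	}
	return results
}

func main() {
	rs := ContainerInterfaceInfo([]Entity{{"machine-0-lxc-1"}, {"machine-0"}})
	fmt.Println(rs[0].Error == nil, rs[1].Error != nil) // true true
}
```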
[12:11] <perrito666> mm, CI still locked?
[13:42] <dimitern> voidspace, http://reviews.vapour.ws/r/719/ - there it is; still live testing, but just as a precaution - I don't expect anything to break
[14:03] <dimitern> voidspace, ping for maas meeting
[14:03] <voidspace> dimitern: omw
[14:35] <voidspace> dimitern: LGTM on your PR
[14:36] <dimitern> voidspace, thanks!
[14:36] <dimitern> voidspace, so did it get clearer as you looked at the changes?
[14:36] <dimitern> voidspace, I mean the network.InterfaceInfo and how we'll use it around StartInstance?
[14:37] <voidspace> dimitern: yeah, I think it's pretty clear
[14:37] <voidspace> dimitern: what I'm not clear on is in the ProvisionerAPI how I get to the environ to do the address allocation
[14:38] <voidspace> dimitern: we have a state though
[14:39] <dimitern> voidspace, you can always create an environ from config
[14:39] <voidspace> dimitern: is that the *right* thing to do?
[14:40] <dimitern> voidspace, I can't think of another way actually
[14:40] <voidspace> dimitern: ok, cool
[14:40] <voidspace> thanks
[14:40] <dimitern> voidspace, we "create" an instance of the Environ interface, not an actual environment
[14:40] <dimitern> voidspace, np
[14:40] <voidspace> right
[14:41] <voidspace> coffee!
[15:09] <perrito666> wwitzel3: ericsnow and I are feeling lonely at the standup :p
[15:25] <bodie_> any chance we can get https://github.com/juju/juju/pull/1399 merged?  it's arguably something that should not be in a tagged version
[15:26] <bodie_> just a spurious file in project root
[15:26] <bodie_> i.e. nails on the chalkboard of my mind
[15:44] <dimitern> oh boy, what a fat panic - bug 1410320
[15:44] <mup> Bug #1410320: juju status --format summary panics with unresolvable IPs <cmdline> <network> <panic> <status> <juju-core:Triaged> <https://launchpad.net/bugs/1410320>
[15:44] <perrito666> wha?
[15:44] <dimitern> katco, hey, you might want to have a look at that ^^
[15:44] <perrito666> btw, is anyone looking at the current blocker?
[15:45] <dimitern> perrito666, wasn't wwitzel3 doing this?
[15:45] <perrito666> dimitern: I cannot get a hold of wayne, that is why I ask
[15:45] <perrito666> he is assigned
[15:45] <dimitern> perrito666, ah, I see
[15:46] <dimitern> perrito666, well, I'll have a look to see if it's just a map ordering ppc issue that can be fixed easily
[15:46] <perrito666> it's not, thumper and I took a look at it yesterday
[15:48]  * perrito666 tries to separate unit and agent as entities and cries a little bit
[15:51] <voidspace> rebooting due to spotify killing all window chrome
[15:51] <voidspace> rebooting via the command line...
[15:51] <voidspace> BRB
[15:51] <dimitern> wow
[15:52] <dimitern> perrito666, the whole TestSetMembersErrorIsNotFatal does not make much sense to me
[15:53] <dimitern> rogpeppe1, hey, can you give a hand to figure out why this test is failing on ppc ? https://bugs.launchpad.net/juju-core/+bug/1409827
[15:53] <mup> Bug #1409827: TestSetMembersErrorIsNotFatal fails <ci> <intermittent-failure> <regression> <test-failure> <juju-core:Triaged by wwitzel3> <juju-core 1.22:Triaged> <https://launchpad.net/bugs/1409827>
[15:53] <voidspace> and back
[15:53] <rogpeppe1> dimitern: i'll try to have a look later
[15:54] <perrito666> voidspace: google chrome?
[15:54] <voidspace> perrito666: no, unity chrome
[15:54] <perrito666> ah
[15:55] <voidspace> as in - all windows stop responding to mouse clicks
[15:55] <dimitern> rogpeppe1, cheers
[15:55] <voidspace> some handler in spotify doesn't return and unity blocks
[15:55] <perrito666> voidspace: yep, but browser
[15:55] <voidspace> perrito666: no
[15:55] <perrito666> voidspace: ah you are actually using spotify app
[15:55] <voidspace> perrito666: yep
[15:56] <dimitern> sinzui, hey, can you clarify which merge by frankban you refer to in https://bugs.launchpad.net/juju-core/+bug/1409827/comments/8 - I can see one landing just after voidspace's maas branch
[15:56] <mup> Bug #1409827: TestSetMembersErrorIsNotFatal fails <ci> <intermittent-failure> <regression> <test-failure> <juju-core:Triaged> <juju-core 1.22:Triaged> <https://launchpad.net/bugs/1409827>
[15:56] <voidspace> dimitern: is this something I should be concerned about / look at?
[15:56] <voidspace> dimitern: might I have killed CI?
[15:57] <wwitzel3> I was looking at that yesterday, but never got a suitable testing environment setup.
[15:57] <dimitern> voidspace, unlikely :) your change has nothing to do with replica sets or peergrouper
[15:57] <voidspace> hah
[15:57] <voidspace> I shall remain blissfully unaware then...
[15:57] <wwitzel3> perrito666: I told you Sunday night I was going to be missing standup and out during the AM eastern time. You're bad at this.
[15:57] <ericsnow> wwitzel3: lol
[15:58] <dimitern> bad, bad perrito666
[15:58] <perrito666> wwitzel3: that was today :p
[15:58] <dimitern> ;)
[15:58] <perrito666> wwitzel3: sorry since I saw you assigned to the bug I assumed you were immersed in it
[15:58] <perrito666> wwitzel3: I do my best to mimic nate's scheduling ability
[15:58] <perrito666> :p
[15:58] <sinzui> dimitern, the bug was changed to non-critical, unblocking CI. which was wrong given we still have a failing test that needs to pass
[15:59] <dimitern> sinzui, I agree
[16:00] <wwitzel3> dimitern, rogpeppe1: thumper suggested I try using gccgo , which I did, but I couldn't replicate the bug with just that.
[16:00] <dimitern> sinzui, just trying to figure out comment #8
[16:00] <wwitzel3> and i was never able to successfully get a ppc64el vm up and going
[16:02] <perrito666> bbl lunch
[16:02] <sinzui> dimitern, 1.22 was created from the previous commit. wallyworld merged the suspect commit into 1.22 as a pre-requisite for his branch...and then introduced the same bug into the other line of development. When we reported this bug, we were testing both voidspace's and frankban's commits. Since 1.22 failed without frankban's change, the only suspect is commit 782e9cd
[16:04] <dimitern> thanks sinzui
[16:07] <dimitern> jw4, hey, you mentioned you can reproduce that bug relatively easily? ^^
[16:08] <dimitern> i have an inkling..
[16:08] <voidspace> dimitern: do share
[16:09] <dimitern> at first glance the changes introduced by TheMue with https://github.com/juju/juju/commit/d5fd5e032b6593b53d03244773b9c7ac65805fd0#diff-1 *might* cause patched timeouts to get unpatched incorrectly
[16:09] <dimitern> s.PatchValue shouldn't be used in a loop or in any case more than once
[16:09] <dimitern> once per test, that is
[16:09] <mgz> dimitern: oh, interesting
[16:10] <dimitern> mgz, yeah, I had issues with that before - for loops and such, gitjujutesting.PatchValue() should be used, which returns a Restorer that can be called at the end of the loop
[16:11] <dimitern> I'm not saying this is the issue (other tests in there are written similarly and should fail intermittently the same way), but it might be related
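The patch-in-a-loop hazard described here can be sketched with a simplified stand-in: a PatchValue helper that returns a restore function, called at the end of each iteration. A suite-level PatchValue that only restores at test teardown would instead stack patches across iterations. This is not the real gitjujutesting API, just an illustration of the Restorer pattern.

```go
package main

import "fmt"

// timeout stands in for a package-level value a test would patch.
var timeout = 60

// patchTimeout swaps in a new value and returns a restore function,
// analogous to a Restorer: each loop iteration must call it before the
// next patch, otherwise restores stack up in the wrong order.
func patchTimeout(v int) (restore func()) {
	old := timeout
	timeout = v
	return func() { timeout = old }
}

func main() {
	for _, v := range []int{1, 2, 3} {
		restore := patchTimeout(v)
		fmt.Println(timeout) // the patched value for this iteration
		restore()            // restore immediately, per iteration
	}
	fmt.Println(timeout) // back to the original 60
}
```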
[16:12] <katco> dimitern: hey just saw your message, i'll look into it, thanks
[16:12] <dimitern> katco, np
[16:13] <katco> dimitern: it looks like i gave a good warning but then ignored my own warning :)
[16:13] <dimitern> katco, ha :) good catch then
[16:15] <bodie_> dimitern, mgz, sinzui -- any of you know whether / how / whom to talk to about getting a single file single line PR through?  we have a stupid foo.yaml file sitting in our project root
[16:16] <dimitern> bodie_, I can have a look
[16:16] <bodie_> dimitern, not certain how we didn't catch that in review but there it is.  https://github.com/juju/juju/pull/1399
[16:16] <mgz> bodie_: you can just get it reviewed then force it through
[16:17] <bodie_> mgz, okay, cool.  it's reviewed, and I can just land it then?  I thought we were frozen
[16:18] <mgz> I'm fine with a trivial change on trunk - give me ten mins though, I'll confer and land it if agreed. what's the pr #?
[16:18] <bodie_> mgz, 1399
[16:18] <mgz> ta
[16:18] <dimitern> bodie_, LGTM, once ci is unblocked please land it
[16:18] <bodie_> thanks :)
[16:48] <mgz> bodie_: am going to leave ci idle for now rather than landing your change through the block, in case we want to try out a change to the test along dimiter's proposed patch thing on ci to see if it helps
[17:03] <dimitern> mgz, what proposed patch?
[17:05] <voidspace> dimitern: ping
[17:06] <dimitern> voidspace, pong
[17:06] <voidspace> dimitern: so the new api method
[17:06] <voidspace> dimitern: for every machine tag requested it should allocate an address, set it on the machine, construct an interface info result
[17:07] <voidspace> dimitern: will this api only be called for machines that support address allocation?
[17:07] <voidspace> dimitern: (that information can be added to the ManagerConfig)
[17:07] <mgz> dimitern: a speculative future proposed patch
[17:08] <voidspace> dimitern: SupportAddressAllocation takes a netId - so should we call environ.Subnets(), then pick a network, check environ.SupportAddressAllocation(netId) (continuing until we find one and erroring if we don't)
[17:08] <dimitern> voidspace, so, the environ supports address allocation or not strictly speaking
[17:08] <voidspace> dimitern: allocating an address on the first network we find that supports address allocation
[17:08] <voidspace> dimitern: it is implemented as taking a netId
[17:08] <voidspace> dimitern: but ok, fair enough (and yes!)
[17:08] <dimitern> mgz, unfortunately I won't be around long enough to try a fix :/
[17:09] <voidspace> dimitern: but are we assuming this api will only be called where the environ does support address allocation
[17:09] <voidspace> dimitern: and how do I pick the right subnet?
[17:09] <dimitern> voidspace, the apiserver should check the given machine tag is a container tag - also it should only be allowed to pass containers which have the agent machine as parent
[17:10] <voidspace> dimitern: I'm checking ContainerType
[17:11] <dimitern> voidspace, i think the code path is roughly speaking like this: 1) if env.SupportAddressAllocation() { <case1> } else { return nil and some error - e.g. NotSupportedf }
[17:11] <voidspace> dimitern: do you know how I find the agent machine - or should I hunt around?
[17:11] <dimitern> voidspace, sure - that's the tag used at login
[17:11] <voidspace> cool and cool
[17:11] <voidspace> yes, check address allocation is supported or error
[17:11] <voidspace> great
[17:12] <dimitern> voidspace, like authorizer.GetAuthTag() ...
[17:12] <dimitern> voidspace, then, for <case1>, let me think a bit
[17:12] <voidspace> :-)
[17:13] <dimitern> voidspace, you need a netId to pass to SupportAddressAllocation
[17:13] <voidspace> yep
[17:13] <dimitern> voidspace, but that's ignored for now, right?
[17:14] <voidspace> as far as I know
[17:14] <voidspace> I'd have to check
[17:14] <voidspace> for example, for MaaS we *could* check if the specified netId has a static range
[17:14] <voidspace> but we don't yet
[17:14] <dimitern> voidspace, right, so we should keep this in mind, but for now let's ignore it
[17:15] <voidspace> sure
[17:15] <dimitern> voidspace, we need to allocate an address from the same subnet as the container's host's primary NIC
[17:15] <dimitern> (what a mouthful)
[17:16] <perrito666> back
[17:17] <voidspace> dimitern: ok...
[17:17] <dimitern> voidspace, so what *i think* we need is to get the subnets for that host, then pick a reasonable one
[17:17] <voidspace> dimitern: the one containing the primary IP of the host for example
[17:18] <dimitern> voidspace, right
[17:18] <voidspace> so long as the state addresses for the machine are correct that should work
[17:18] <dimitern> voidspace, all this is assuming a bit too much, but until we have per-service-endpoint bindings in place we could improvise
[17:19] <dimitern> voidspace, strictly speaking (as per the model) we should create 1 container NIC per host NIC
[17:19] <voidspace> right
[17:19] <voidspace> but not yet
[17:19] <dimitern> voidspace, but that's too much for now
[17:20] <dimitern> voidspace, yeah
[17:21] <dimitern> voidspace, let's decide which subnet later, for now just pick the first allocatable one
[17:21] <dimitern> voidspace, and please add a comment this is temporary
[17:21] <dimitern> voidspace, it should allow us to do an end-to-end simple test with 1 subnet
[17:21] <voidspace> dimitern: hah, ok
[17:21] <voidspace> that's easy enough
[17:21] <voidspace> thanks
[17:22] <dimitern> voidspace, cool
[17:22]  * dimitern should be going now
[17:23] <voidspace> dimitern: o/
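The code path agreed in this exchange - refuse early if the environ doesn't support address allocation, then pick the first allocatable subnet of the host as a temporary rule - could be sketched as below. The interface and method names are simplified stand-ins for the real environs API, not juju's actual signatures.

```go
package main

import (
	"errors"
	"fmt"
)

// Subnet is a minimal stand-in for a provider subnet.
type Subnet struct {
	ID          string
	Allocatable bool
}

// Environ is a cut-down sketch of the provider surface discussed above.
type Environ interface {
	SupportsAddressAllocation(subnetID string) bool
	Subnets(hostID string) []Subnet
	AllocateAddress(hostID, subnetID string) (string, error)
}

// allocateForContainer allocates an address for a container on hostID.
// TODO: temporary - picks the first allocatable subnet; later this should
// prefer the subnet of the host's primary NIC.
func allocateForContainer(env Environ, hostID string) (string, error) {
	for _, sub := range env.Subnets(hostID) {
		if sub.Allocatable && env.SupportsAddressAllocation(sub.ID) {
			return env.AllocateAddress(hostID, sub.ID)
		}
	}
	return "", errors.New("no allocatable subnet found")
}

// fakeEnviron demonstrates the flow with one non-allocatable and one
// allocatable subnet.
type fakeEnviron struct{}

func (fakeEnviron) SupportsAddressAllocation(string) bool { return true }
func (fakeEnviron) Subnets(string) []Subnet {
	return []Subnet{{"subnet-a", false}, {"subnet-b", true}}
}
func (fakeEnviron) AllocateAddress(hostID, subnetID string) (string, error) {
	return "10.0.0.5", nil
}

func main() {
	addr, err := allocateForContainer(fakeEnviron{}, "machine-0")
	fmt.Println(addr, err) // 10.0.0.5 <nil>
}
```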
[17:45] <jw4> dimitern: sorry - was at the gym... Yes I can repro consistently.  I'll check with katco and your github repo and see if you have any proposed fixes I can test
[17:46] <jw4> katco: are you picking up what dimitern was working on with the peergrouper test failures?
[17:47] <katco> jw4: sorry, this is the first i'm hearing about this
[17:47] <jw4> katco: no worries - I just thought I saw you respond to dimitern when he was talking about it
[17:47] <katco> jw4: are you perhaps thinking of the bug dimitern pointed me at? https://bugs.launchpad.net/juju-core/+bug/1410320
[17:47] <mup> Bug #1410320: juju status --format summary panics with unresolvable IPs <cmdline> <network> <panic> <status> <juju-core:Triaged by cox-katherine-e> <https://launchpad.net/bugs/1410320>
[17:47] <jw4> hmm; if that was the bug he was talking about I'm confused
[17:47] <mgz> there were a couple of conversatons happening at once :)
[17:48] <katco> jw4: i think you... mgz has it :)
[17:48] <jw4> katco: :)
[17:48] <katco> jw4: i would continue with your assumptions, but i don't think i was ever involved in the peergrouper stuff
[17:48] <jw4> kk
[17:49] <mgz> dimitern mentioned that the patch in the test at issue in bug 1409827 was dodgy, but didn't have a proposed fix
[17:49] <mup> Bug #1409827: TestSetMembersErrorIsNotFatal fails <ci> <intermittent-failure> <regression> <test-failure> <juju-core:Triaged> <juju-core 1.22:Triaged> <https://launchpad.net/bugs/1409827>
[17:49] <jw4> mgz: yeah that was the one.  I have been able to consistently repro that one on my box since august
[17:49] <jw4> mgz: I just ignored it since the CI wasn't getting it, and other folks weren't seeing it
[17:50] <jw4> (maybe even before august, that's just how far back my test run captures go)
[17:51] <mgz> jw4: you have a talent for getting some of these inconsitent failures...
[17:51] <jw4> mgz: lol
[17:51]  * jw4 whispers:  I'm using a hyper-v ubuntu VM on top of -- windows 8.1 --- (gasp)
[18:04]  * jw4 notes the stunned silence with some chagrin
[18:06] <mgz> jw4: I barely even giggled
[18:06] <jw4> now I'm crestfallen instead of chagrined
[18:06] <mgz> does support the general theory of timing relatedness, can't imagine that's the smoothest setup
[18:09] <jw4> yeah.  It looks like no-one is assigned to it anymore.  I'll investigate a bit
[18:35] <voidspace> g'night all
[18:35] <voidspace> EOD
[19:32] <perrito666> wallyworld: drop me a ping whenever you return please
[19:37] <bodie_> mgz, still around?  I'm having a mental merge conflict between your and dimitern's directives about pr 1399 :P
[20:18] <thumper> cmars: hey there
[20:18] <thumper> cmars: are you ok if we skip today's call?
[20:18] <thumper> cmars: I've got a heap to do
[20:19] <perrito666> wallyworld: ping me whenever you are here
[20:20] <thumper> cmars: oh, noticed that you had declined today's meeting anyway, so we're all good :)
[20:54] <menn0> katco: ping?
[20:54] <katco> menn0: hey what's up?
[20:55] <menn0> katco: I'm in the process of making some machine agent workers run per environment (in a multi-env Juju server)
[20:55] <menn0> katco: is the lease manager a global thing or a per environment thing
[20:55] <menn0> katco: i'm guessing the former, but want to be sure
[20:56] <katco> menn0: per environment
[20:56] <katco> menn0: the way to think about it is it has a 1:1 relation with a state server
[20:56] <menn0> katco: ok, but the state server is shared by multiple environments...
[20:57] <katco> menn0: maybe i'm misunderstanding the word "environment"
[20:57] <menn0> katco: i'll give some background
[20:57] <katco> menn0: if the state server is shared, then the lease server will be shared
[20:58] <menn0> so we're almost at a place where one state server (or set of replicated state servers) will be able to support multiple, independent Juju environments
[20:58] <katco> menn0: ah gotcha... i think i know what this is in reference to
[20:59] <menn0> so if you already have a state server up, you can issue a command to add another environment which can then have its own machines, services, charms etc
[20:59] <katco> menn0: since the leases are stored in mongo, and the lease server is running on the state server, i think the answer is "global" in your case
[21:00] <katco> s/lease server/lease manager/g
[21:00] <katco> menn0: the spec has a runtime-components diagram which should help
[21:00] <menn0> katco: ok great.
[21:00] <menn0> katco: I had forgotten about that diagram. I'll take another look just to make sure I've got it straight.
[21:00] <menn0> katco: thanks!
[21:01] <katco> menn0: please feel free to ping me with any follow-up questions if it's at all unclear
[21:03] <menn0> katco: grr... I can't find the spec, only the text source
[21:03] <katco> menn0: np one sec
[21:03] <katco> menn0: https://drive.google.com/open?id=0B24olKDYt1DQa1piUFhSMGpuWjA&authuser=1
[21:05] <menn0> katco: thanks. searching for "lease", "lease service" etc on google drive doesn't find it.
[21:06] <katco> menn0: i wonder if i can tag documents
[21:07] <katco> menn0: strange, the title is lease-service.org.pdf
[21:07] <menn0> katco: yeah I know. pretty crap.
[21:11] <menn0> katco: this is a bit of an aside, but we need to make sure that lease service use cases that are env specific are isolated from each other (e.g. presence)
[21:11] <menn0> katco: I guess we just need to make sure sensible ids - which include the env UUID - are used
[21:12] <katco> menn0: the lease service itself is completely agnostic to what it's asked to store
[21:12] <katco> menn0: it provides a namespace functionality, so that can easily be used for presencer etc.
[21:12] <menn0> katco: yep I see that. I'm just think out loud about what it's going to be used for.
[21:12] <katco> menn0: your comments are appreciated :)
[21:13] <menn0> katco: all good. i'm now sure that the lease service worker just needs to run once, not per env.
[21:14] <menn0> katco: and that's what I need to know right now.
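The isolation idea raised here - lease ids that embed the environment UUID so per-environment users of a shared lease service (e.g. presence) cannot collide - amounts to simple key namespacing. `leaseKey` below is a hypothetical helper to show the shape, not real juju code.

```go
package main

import "fmt"

// leaseKey builds a lease id namespaced by environment UUID, so the same
// logical id in two environments maps to two distinct lease entries.
func leaseKey(envUUID, namespace, id string) string {
	return fmt.Sprintf("%s:%s:%s", envUUID, namespace, id)
}

func main() {
	a := leaseKey("env-1", "presence", "unit-wordpress-0")
	b := leaseKey("env-2", "presence", "unit-wordpress-0")
	fmt.Println(a != b) // true: same unit name, different environments
}
```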
[21:14] <katco> menn0: good to hear :) you might also be interested: i pulled MachineAgent into its own exportable package, so you can now write unit tests against them
[21:14] <katco> menn0: without spinning up a jujud
[21:15] <menn0> katco: yep, I'd already seen that. very helpful - thanks!
[21:15] <katco> :)
[21:16] <menn0> katco: at some point I'd like to unpick the upgrade-steps work and extract it to the worker part of the tree for similar reasons. it's currently quite coupled to the machine agent.
[21:17] <katco> menn0: right; i think the agent shouldn't know anything about what it's running. it should just loop over a slice of functions it got passed
[21:17] <katco> menn0: i'm currently spending my fridays picking apart jujud/*
[21:18] <menn0> katco: well hopefully what I'm about to do doesn't upset your efforts
[21:18] <menn0> katco: i'm about to reorganised how some workers get started
[21:18] <katco> menn0: thank you for your concern. i wouldn't worry about it though... your work should take precedent
[21:18] <katco> menn0: mine will take awhile i think
[21:19] <menn0> katco: i'm hoping that my changes will also leave the machine agent code a little clearer than it was
[21:20]  * katco cheers on menn0 
[22:26]  * thumper groans
[22:28] <wallyworld> perrito666: hey, just about to have short meeting, will ping you soon
[22:35] <perrito666> wallyworld: going out for a walk, if I don't answer I'll ping you back :p
[22:35] <wallyworld> sure :-)
[22:41] <thumper> oh fark
[22:42] <thumper> menn0: unit agents don't run upgrades do they?
[22:43] <thumper> menn0: I need to upgrade the unit agents config files...
[22:44] <thumper> wallyworld: did you know about this failure? http://juju-ci.vapour.ws:8080/job/run-unit-tests-precise-i386/1239/console
[22:44] <thumper> wallyworld: I recall you doing some maas constraints stuff
[22:45] <wallyworld> thumper: in a meeting now with sinzui etc discussing such things
[22:45] <thumper> wallyworld: kk
[23:00] <wallyworld> thumper: we have 4 unit test failures blocking CI. we'll look into the MAAS i386 one. can someone on your team look at bug 1236471?
[23:00] <mup> Bug #1236471: Sporadic test failure w/ bot inside Uniter: FilterSuite.TestUnitRemoval <test-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1236471>
[23:01] <wallyworld> that was reported in oct 2013 so is an old failure
[23:01] <wallyworld> sorta dormant till now
[23:01]  * thumper sighs
[23:01] <thumper> wallyworld: who is looking at the power one?
[23:02] <wallyworld> which one are you thinking of?
[23:02] <wallyworld> bug 1410556 ?
[23:02] <mup> Bug #1410556: TestStartInstanceUnmetConstraints fails on 386 and ppc64el <ci> <i386> <ppc64el> <regression> <test-failure> <juju-core:Triaged> <juju-core 1.22:Triaged> <https://launchpad.net/bugs/1410556>
[23:02] <thumper> maybe...
[23:02]  * thumper looks
[23:02] <wallyworld> that one i'll look at
[23:03] <wallyworld> i copied an existing test and added a constraint to start instance so i have nfi why it fails
[23:03] <thumper> https://bugs.launchpad.net/juju-core/+bug/1409827
[23:03] <mup> Bug #1409827: TestSetMembersErrorIsNotFatal fails <ci> <intermittent-failure> <regression> <test-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1409827>
[23:03] <wallyworld> i was going to ping nate about that one
[23:03] <thumper> wallyworld: nate is on paternity leave no?
[23:03] <wallyworld> ah yes
[23:04] <wallyworld> he was going to fix the repliaset related "unit" tests
[23:04] <wallyworld> but i don't think that's been done yet
[23:04] <wallyworld> any mongo related peer group / replica set tests are horrible
[23:04] <wallyworld> if you wanted to fix that one instead.....
[23:05] <wallyworld> we just need to start dividing up the work as it's got to the point where i won't be able to do it all in a timely fashion, as i'm also fixing functional issues for 1.22
[23:06] <thumper> wallyworld: menno was going to look at bug 1409827
[23:06] <mup> Bug #1409827: TestSetMembersErrorIsNotFatal fails <ci> <intermittent-failure> <regression> <test-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1409827>
[23:06] <wallyworld> \o/ that would be great
[23:06] <wallyworld> ty
[23:06] <thumper> I'll take the intermittent failure after I've finished fixing this branch
[23:06] <wallyworld> sure, i'll do the maas i386 one
[23:07] <wallyworld> and there's another different i386 one also
[23:07] <wallyworld> bug 1408762
[23:07] <mup> Bug #1408762: --constraints option is ignored on MaaS provider <constraints> <maas-provider> <juju-core:In Progress by wallyworld> <juju-core 1.22:Fix Committed by wallyworld> <https://launchpad.net/bugs/1408762>
[23:07] <wallyworld> ah no
[23:07] <wallyworld> pingerSuite tests consistently fail on trusty i386
[23:07] <wallyworld> whatever the right number is
[23:08] <wallyworld> i'll see if my team can get that one fixed
[23:09] <wallyworld> sinzui: bug 1410556 shouldn't apply to 1.33 yet as the change has only been committed to 1.22
[23:09] <mup> Bug #1410556: TestStartInstanceUnmetConstraints fails on 386 and ppc64el <ci> <i386> <ppc64el> <regression> <test-failure> <juju-core:Triaged> <juju-core 1.22:Triaged> <https://launchpad.net/bugs/1410556>
[23:17] <sinzui> wallyworld, I will fix that
[23:17] <wallyworld> sinzui: i just did
[23:18] <sinzui> :)
[23:18] <wallyworld> just letting you know
[23:28] <perrito666> wallyworld: back
[23:29] <wallyworld> perrito666: give me 2 minutes
[23:29] <perrito666> np
[23:32] <wallyworld> perrito666: did you want a hangout? https://plus.google.com/hangouts/_/canonical.com/tanzanite-stand
[23:52] <wallyworld> sinzui: fix for bug 1410556 just merged into 1.22
[23:52] <mup> Bug #1410556: TestStartInstanceUnmetConstraints fails on 386 and ppc64el <ci> <i386> <ppc64el> <regression> <test-failure> <juju-core 1.22:In Progress by wallyworld> <https://launchpad.net/bugs/1410556>