[00:06] <redelmann_> thumper, https://bugs.launchpad.net/juju-core/+bug/1438951
[00:06] <mup> Bug #1438951: destroy-enviroment --force destroy all aws instances <juju-core:New> <https://launchpad.net/bugs/1438951>
[00:07] <redelmann_> thumper, i can't give more info about this, and i can't test it again in current aws
[00:07] <thumper> redelmann_: thanks
[00:11] <ericsnow> wallyworld_: regarding IsLocal, the key thing is that we are checking if the init system corresponding to the package is the one running on the *local* host
[00:12] <wallyworld_> IsLocal() doesn't imply that
[00:12] <ericsnow> wallyworld_: in code it won't be "IsLocal()", it will be "systemd.IsLocal()", etc.
[00:12] <ericsnow> wallyworld_: k
[00:12] <wallyworld_> hmmm
[00:12] <wallyworld_> IsUsed() perhaps
[00:12] <wallyworld_> systemd.IsUsed()
[00:13] <wallyworld_> wadda ya reckon?
[00:13] <ericsnow> wallyworld_: IsCurrentlyTheOneRunningOnTheLocalHost? :)
[00:14] <wallyworld_> always one smarty pants
[00:14] <wallyworld_> systemd.IsRunning()
[00:14] <wallyworld_> what happens if both files are there though?
[00:14] <ericsnow> wallyworld_: IsRunning is good
[00:14] <wallyworld_> can a host have both binaries installed?
[00:14] <ericsnow> wallyworld_: the Ubuntu guys promised me that that will *never* be a problem
[00:15] <ericsnow> :)
[00:15] <wallyworld_> promises, yeah right
[00:16] <ericsnow> wallyworld_: how about IsRunningLocally?  too long?  I was hoping that it would be super clear that we're only checking the local host
[00:16] <ericsnow> wallyworld_: BTW, thanks for the review :)
[00:16] <wallyworld_> how is IsRunning() not implying that?
[00:17] <wallyworld_> sure :-)
[00:17] <wallyworld_> wouldn't it be assumed that it pertains to the host on which it is run?
[00:17] <ericsnow> wallyworld_: yeah, you're right; the lack of args limits the possible interpretations :)
[00:21] <wallyworld_> ericsnow: i started reviewing the other one. i don't understand why ioutil.TempFIle is not used
[00:21] <wallyworld_> wasn't the issue that we were using /tmp/somestaticfilename
[00:22] <wallyworld_> so /tmp/somerandomfilename would be fine
[00:22] <ericsnow> wallyworld_: right
[00:22] <ericsnow> wallyworld_: but I want to be consistent between the shell script and the Go code
[00:22] <wallyworld_> what do you mean?
[00:23] <ericsnow> wallyworld_: the patch fixes both the discovery script (written in bash for remote use) and the Go code that does the same thing
[00:24] <wallyworld_> both can still use a random tmp dir
[00:25] <ericsnow> wallyworld_: I looked at using mktemp on the shell side to do that, but doing it made the code (not insignificantly) more complicated
[00:26] <wallyworld_> hmmm
[00:26] <ericsnow> wallyworld_: using a subdirectory of the juju data dir helped it stay simple (er)
[00:27] <wallyworld_> i'm not across the implementation, why is using mktemp in the shell script hard?
[00:29] <ericsnow> wallyworld_: because subsequent commands must be written in relation to where that script gets written
[00:30] <wallyworld_> what generates the shell script? juju? why can't the dir be a script arg?
[00:31] <ericsnow> wallyworld_: juju does (it's all in service/discover.go + service.go)
[00:35] <mup> Bug #1438951 was opened: destroy-enviroment --force destroy all aws instances <juju-core:New> <https://launchpad.net/bugs/1438951>
[00:36] <ericsnow> yikes, mup!
[00:37] <wallyworld_> ericsnow: so, i'm missing something maybe. in ListServicesScript(), we have const filename = "/tmp/discover_init_system.sh", and that is passed to the script generation commands. why not just generate a random file and pass that
[00:40] <ericsnow> wallyworld_: so just generate a random filename under /tmp and use that on the remote host?
[00:40] <wallyworld_> oh you are generating the scripts locally and then running remotely
[00:41] <ericsnow> wallyworld_: correct
[00:42] <ericsnow> wallyworld_: currently all for the sake of manual provider :)  though I have considered using some of this for the cloudinit script
[00:42] <wallyworld_> can init_system=$(blah) run the script inline then?
[00:42] <wallyworld_> rather than from a filename
[00:42] <wallyworld_> then the problem goes away
[00:43] <ericsnow> wallyworld_: perhaps
[00:44] <wallyworld_> this is a one off bit of bash I think?, so no need to write it out i don't think
[00:44] <ericsnow> wallyworld_: originally that is effectively what I was doing, but when I had to add more complicated logic to the script, I switched to writing the script out to a separate file
[00:45] <wallyworld_> bash doesn't care about how complicated it is when it runs it
[00:45] <ericsnow> wallyworld_: having a separate script certainly requires much less knowledge of bash :)
[00:45] <ericsnow> wallyworld_: it's more for my and our sakes than for bash's :)
[00:46] <ericsnow> wallyworld_: biab
[00:46] <wallyworld_> ok
[01:00] <ericsnow> wallyworld_: back
[01:01] <ericsnow> wallyworld_: did I mention that bash is the devil?
[01:01] <wallyworld_> yeah
[01:01] <wallyworld_> i like tcsh
[01:02] <ericsnow> wallyworld_: hey, isn't Python system-installed everywhere nowadays...
[01:02] <wallyworld_> so ioutil.TempFile is nice and portable, and simple to use; none of this data dir and series stuff, and it keeps juju's dir clean
[01:02] <wallyworld_> can't count on it maybe? not sure?
[01:04] <wallyworld_> the script is only 20 lines or so, so surely it can be run inline and all this just goes away
[01:04] <ericsnow> wallyworld_: I'll work up a different approach in that patch
[01:05] <wallyworld_> ok, sorry to be a party pooper, it's just my opinion, maybe a different reviewer would feel differently
[01:05] <ericsnow> wallyworld_: after that other patch you reviewed the script gets a lot shorter and simpler
[01:05] <wallyworld_> yeah
[01:05] <ericsnow> wallyworld_: I'll get that landed and see where things are at
[01:05] <wallyworld_> i was having a big problem with the various hacks also
[01:05] <ericsnow> wallyworld_: and hey, I appreciate you taking a stand :)
[01:05] <wallyworld_> and this will remove those
[01:59] <axw> wallyworld_: just gave you a shipit
[02:00] <wallyworld_> axw: awesome, ty
[02:16] <ericsnow> wallyworld_: I think you'll like this better: http://reviews.vapour.ws/r/1351/
[02:16] <wallyworld_> ok
[02:20] <wallyworld_> ericsnow: yay, looks much better
[02:20] <wallyworld_> it works?
[02:20] <ericsnow> wallyworld_: thanks for sticking to your guns :) I like it better too
[02:20] <ericsnow> wallyworld_: yep
[02:20] <wallyworld_> \o/
[02:20] <ericsnow> wallyworld_: there's a unit test that checks
[02:20] <wallyworld_> yeah, much cleaner
[02:26] <ericsnow> wallyworld_: thanks for those reviews
[02:27] <wallyworld_> np :-)
[02:28] <ericsnow> wallyworld_: I'll land those for 1.23 now-ish and then vanish into the night
[02:28] <wallyworld_> sure
[02:28] <wallyworld_> i'll re-merge if there's a spurious failure
[02:28] <wallyworld_> ty
[03:30] <thumper> wallyworld_: got a few minutes?
[03:30] <thumper> wallyworld_: I need an insanity check
[03:31] <wallyworld_> ok, give me 2 mins
[03:31] <thumper> kk
[04:34] <wallyworld_> axw: would be awesome if you got to look at this sometime if you are free http://reviews.vapour.ws/r/1354/
[04:35] <axw> wallyworld_: sure, will look in a little while
[04:35] <wallyworld_> np ty, i have to duck out for an hour or so anyway
[04:35] <wallyworld_> soon
[05:11] <jog> wallyworld_, still around?
[05:49] <jam> wallyworld_: ping
[06:10] <axw> wallyworld_: I'll have to read the spec before I can review, so probably won't get to it until I'm OCR tomorrow sorry
[06:37]  * dimitern steps out - back in a few hours
[07:29] <wallyworld_> jam: hi, i was out, about to have dinner, can i ping you soon?
[07:30] <jam> wallyworld_: np
[07:30] <jam> I was just checking if there was anything you want me to bring up to Mark.
[07:31] <jam> I know you had the Storage "hints" vs "properties"
[07:31] <jam> wallyworld_: but any other information about things you want clarity on for Storage or Health, I'm on the call with him now
[07:45] <TheMue> morning o/
[07:46] <TheMue> dimitern: I'll be working with interruptions today: coordinating the cleanup of the grounds, expecting the assessor, calling the garage to pick up the car, etc.
[07:46] <voidspace> TheMue: sorry about your car!
[07:46] <voidspace> dimitern: I've made those tweaks
[07:47] <voidspace> dimitern: I'll land the branch so I can fix the issue with upgrading and then forward port the fix to trunk
[07:48] <TheMue> dimitern: regarding the fix imho our tests showed that it isn't the primary interface. so now I'll focus on the empty interfaces, because they are the only difference returned from the lshw parsing function
[07:49] <TheMue> dimitern: so I'll compare the values and follow their usage bottom-up to see how they influence the rest of the process
[07:50] <TheMue> voidspace: thanks, yes. thankfully the first feedback is that it's likely fully covered
[07:50] <voidspace> TheMue: *phew*, what a hassle :-/
[07:51] <wallyworld_> jam: with "juju status-history <unit>", I was going to default to say 10 recent status changes with an option -n arg. Also you would see agent and workload status interleaved
[07:51] <TheMue> voidspace: now it's "only" all that organizational stuff as well as looking/waiting for a new one
[07:51] <jam> wallyworld_: mark likes "hints"
[07:51] <wallyworld_> ok
[07:52] <wallyworld_> jam: with the legacy agent state, I added Stopped (so now we have Pending, Error, Started and Stopped)
[07:52] <wallyworld_> jam: the status interleaving is so you could see that an agent ran X hook, which caused the unit to report Y status
[07:54] <wallyworld_> jam: we also firmly believe that when a hook fails, it should be the agent that is in error, not the workload (as the workload is very likely still fine). Nonetheless, I followed the spec and we can discuss again if users complain
[08:16]  * dimitern is back
[08:17] <dimitern> TheMue, voidspace, thanks guys, sounds good - keep me posted :)
[08:18] <voidspace> dimitern: fix for upgrade on 1.23 nearly ready for review - just pushing it now
[08:19] <dimitern> voidspace, awesome!
[08:28] <jam> wallyworld_: "juju status-history" sounds good to me, can you put it into the doc as you've developed it?
[08:28] <wallyworld_> jam: haven't put fingers to keyboard yet; that's how i propose to do it, will update doc
[08:28] <jam> wallyworld_: for the workload, we're certainly in a situation where "we have no idea what is going on" once we're in an error state. So there are at least questions about what is the smallest lie
[08:29] <wallyworld_> jam: agreed to an extent. we can see how it unfolds once in the hands of users. william has very strong views
[08:32] <dimitern> voidspace, I assume your latest PR includes both the worker and the fix?
[08:37] <voidspace> dimitern: ah yes
[08:37] <voidspace> dimitern: yeah, to fix on 1.23 I needed to base it off the release-address branch which has the worker
[08:37] <voidspace> dimitern: once that branch lands the diff will be smaller...
[08:37] <dimitern> voidspace, right, got it
[08:38] <dimitern> voidspace, reviewed
[08:39] <voidspace> dimitern: thanks
[08:39] <voidspace> dimitern: they're good comments - except for the very last one
[08:39] <dimitern> voidspace, oh yeah? :)
[08:40] <voidspace> dimitern: what do you mean by an extra test for a removed machine?
[08:40] <voidspace> dimitern: we have a test for a machine that doesn't exist
[08:40] <voidspace> dimitern: what's the difference between that and a removed machine?
[08:40] <dimitern> voidspace, no, that's fine then
[08:40] <dimitern> voidspace, I must've missed it
[08:40] <voidspace> dimitern: the SetUp code that creates dead addresses allocates them to a non-existent machine
[08:41] <dimitern> voidspace, right, all good then :)
[08:41] <voidspace> which simulates the situation we're guarding against
[08:41] <voidspace> I'll fix the other issues you pointed out, thanks
[08:41] <dimitern> voidspace, cheers
[08:41] <dimitern> dooferlad, hey
[08:41] <dooferlad> dimitern: hi
[08:42] <dimitern> dooferlad, I've realized while reviewing your update command that some bits of the spec are contradictory
[08:42] <dooferlad> dimitern: ah
[08:42] <dimitern> dooferlad, we shouldn't allow a space with no subnets
[08:42] <dimitern> dooferlad, and create has to be changed slightly to accept at least one subnet
[08:43] <dooferlad> dimitern: I was pondering this yesterday, but after reading the spaces and subnets sections of the spec several times it seemed that just implementing what it said was the way to go for now.
[08:43] <dimitern> dooferlad, that's due to the way I plan to implement spaces on AWS at least - using tags on subnets
[08:44] <dooferlad> dimitern: I agree that a space without subnets doesn't make any sense
[08:44] <voidspace> dimitern:  logger.Debugf("requested ReleaseAddress of address %q for unknown instance, ignoring", addr.Value)
[08:44] <dimitern> dooferlad, I agree - with nothing better to go with :) sure
[08:44] <voidspace> dimitern: will change to (ignoring) - I think that's your preferred style
[08:45] <dimitern> voidspace, I'd go with "release address for an unknown instance ID is a no-op, ignoring" ?
[08:45] <dimitern> voidspace, or (ignoring) at the end - both are fine
[08:46] <voidspace> dimitern: and not log the actual address?
[08:46] <dimitern> voidspace, ah, good point - please do :)
[08:47] <dimitern> release address %q ...
[08:47] <voidspace> done
[09:01] <dimitern> TheMue, voidspace, standup?
[09:04] <voidspace> dimitern: oops, omw
[09:17] <voidspace> dimitern: the forward port to trunk: http://reviews.vapour.ws/r/1358/diff/#
[09:19] <dimitern> voidspace, LGTM
[09:59] <natefinch> dimitern: saw your review on my WIP branch for ensure-availability converting machines... after looking at the code, I think I understand the problems with the API stuff, but want to make sure I'm going down the right path.
[10:00] <dimitern> natefinch, great, do you think we need to have a chat or?
[10:01] <dimitern> natefinch, I'm glad to help if you have things to ask as well
[10:01] <natefinch> dimitern: yeah, but I have a lightly sleeping baby next to me, so it'll need to be text. :)
[10:01] <dimitern> natefinch, sure :)
[10:02] <natefinch> dimitern: So, I presume I should be using UnitsWatcher to watch a machine, instead of rolling my own, correct?
[10:02] <natefinch> dimitern: wayne wrote the watcher code for this, and I think didn't realize UnitsWatcher was also for watching machines (it took me a while to figure that out too... actually just got lucky seeing a comment about how it also worked for machines)
[10:03] <dimitern> natefinch, hmm UnitsWatcher IIRC reports changes to a service's units
[10:04] <dimitern> natefinch, i might be missing some context
[10:04] <natefinch> dimitern: it seems to work on machines and services in practice, plus the code just takes a tag
[10:04]  * dimitern revisits the PR
[10:05] <natefinch> dimitern: you were pretty vague about what was wron with the code, so maybe you should start by explaining what you meant: http://reviews.vapour.ws/r/1299/
[10:05] <natefinch> :)
[10:06] <dimitern> natefinch, right, sorry - I tried to explain 3 times, but each time my connection dropped or something else was wrong, so I lost 3 separate summaries (growing shorter with frustration)
[10:06] <natefinch> ha, I know how it is.
[10:06] <dimitern> natefinch, yeah, so let me have another look now and will explain
[10:07] <natefinch> dimitern: thanks
[10:07] <natefinch> dimitern: I think I have an idea, after looking at the other API endpoints, but want to make sure I'm on the right track.
[10:08] <dimitern> natefinch, ok, so one thing is definitely about using the error result for "fatal" errors, while using the Error field of the result structs for per-tag issues (e.g. failed to start watching machine X's units)
[10:08] <dimitern> natefinch, that's in the client-side api methods
[10:09] <dimitern> natefinch, and checking that if you asked for 1 tag you got 1 result and there wasn't an error in the result's Error field
[10:09] <dimitern> natefinch, this applies to both client methods
[10:11] <natefinch> dimitern: ok, yeah, that's what I was thinking
[10:11] <dimitern> natefinch, on the server-side, it seems you are taking a bunch of args (tags), but in some cases returning on error with one of them (e.g. ParseMachineTag failed), rather than continuing
[10:11] <mattyw> dimitern, ping when you have a moment
[10:11] <dimitern> mattyw, sure, in a little bit
[10:11] <mattyw> dimitern, sure, I'll ping you some stuff for you to take a look at at your convenience if that's ok?
[10:11] <dimitern> mattyw, no worries
[10:12] <dimitern> natefinch, also server-side - it's more common to have a getAuthFunc which returns an AuthFunc(tag) and checks permissions, rather than using the authorizer directly
[10:13] <dimitern> natefinch, the JobsResult struct should have a Error field like other result structs
[10:14] <natefinch> dimitern: yep, ok.  This is all exactly what I just fixed up yesterday afternoon... I had hoped that's what you meant.  Of course now that I've fixed it all, it doesn't work anymore, but I'm sure there's just a dumb bug somewhere :)
[10:15] <dimitern> natefinch, finally, please use names.MachineTag rather than string client-side; the rest are trivial formatting things
[10:15] <natefinch> dimitern: yep, cool.
[10:15] <dimitern> natefinch, oh, too bad - have you found out why it doesn't work?
[10:17] <natefinch> dimitern: not yet, need to put in some logging to see... the handler fires, so that much is working, maybe I mucked up the code that returns the jobs
[10:18] <dimitern> natefinch, I see, well ping me if I can help with something else
[10:18] <natefinch> dimitern: thanks
[10:25] <TheMue> dimitern: yesterday you wanted to take a look into cloud-init.log. I still cannot reach it on a failing node, but I'm now on a working one and trying to find aspects that match the information generated from the lshw results
[10:26] <TheMue> dimitern: did you have something specific in mind to look at, or did you just want to find some kind of errors or strange-looking messages?
[10:26] <dimitern> TheMue, well, it's hard to understand what's going on without being familiar with cloud-init source - there's a lot of stuff in that log
[10:26]  * TheMue btw should create another vmaas environment for overlapping tests :/
[10:27] <TheMue> dimitern: it is, indeed
[10:27] <dimitern> TheMue, more useful usually is cloud-init-output.log
[10:27] <TheMue> dimitern: ah, thx for that hint, will switch
[10:27] <dimitern> TheMue, there could be some errors or warnings in cloud-init.log, but most of them are ok - unless it says something serious (e.g. "cannot start" or "failed ..")
[10:28] <TheMue> dimitern: ok
[10:29] <TheMue> dimitern: I'm currently adding debug statements all the way up the call stack to see how the data influences the processing
[10:29] <dimitern> TheMue, +1
[10:30] <TheMue> dimitern: sometimes I wish my old smalltalk back, breaking the execution with live editing of the code and manipulation of the data
[10:31] <dimitern> TheMue, or even a proper debugger with breakpoints and patching the code on the fly? :)
[10:34] <TheMue> dimitern: so far no debugger for static languages I've seen in my travels through programming languages has had the simplicity and power of a Smalltalk environment
[10:37] <dimitern> TheMue, I haven't used it, but I believe you :)
[10:38] <TheMue> dimitern: it's a wonderful language as well as environment (everything is in one so called image). but sadly it doesn't scale on multicore, everything is single-process :(
[10:39] <dimitern> TheMue, I see - well, multicore CPUs were not that common at the time I guess
[10:43] <TheMue> dimitern: exactly, but multicore isn't so young anymore either, so one could at least have added it as a feature to the VM and the corresponding code. maybe I should port it to the Erlang VM, like LFE is doing for LISP :D
[10:43] <dimitern> :)
[10:44] <TheMue> but first I would need a time compressor, putting 48h into a day
[10:51] <tasdomas> I'm seeing test failures like this when I run tests in juju master:
[10:51] <tasdomas> http://paste.ubuntu.com/10717600/
[10:51] <tasdomas> the weird part is that I'm getting these with go1.4, go1.3 and go1.2.2
[10:52] <tasdomas> they seem to be an ordering issue
[10:52] <tasdomas> but as far as I can tell, nobody can reproduce them
[10:52] <tasdomas> could I have messed something up in my environment?
[10:55] <natefinch> first off, only use 1.2.2 .. we don't officially support newer versions (and I know there's a problem at least with 1.4)
[10:55] <mup> Bug #1439112 was opened: cmd/juju DeploySuite.TestUpgradeCharmDir fails on windows <test-failure> <windows> <juju-core:Triaged> <https://launchpad.net/bugs/1439112>
[10:55] <dimitern> tasdomas, is that on master?
[10:56] <tasdomas> dimitern, yes
[10:56] <tasdomas> dimitern, I've been able to reproduce that with a clean checkout of juju (go get github.com/juju/juju) in an empty GOPATH
[10:56] <dimitern> tasdomas, and is it random or you can repro it reliably?
[10:57] <tasdomas> dimitern, can reproduce it every single time
[10:58] <dimitern> tasdomas, ok, that's good - we had reports about those tests but they were unreliably reproducible
[10:59] <dimitern> tasdomas, it's not an ordering issue as there are no maps involved there at all, something else is wrong
[11:00] <natefinch> dimitern: it could still be an ordering issue :)
[11:00] <tasdomas> dimitern, well, if I can help with solving this let me know
[11:00] <natefinch> dimitern: it's a slice :)
[11:00] <dimitern> natefinch, with slices?
[11:00] <dimitern> natefinch, :) ah, right
[11:02] <dimitern> tasdomas, can you replace that gc.DeepEquals with jc.DeepEquals, run the tests a few times and paste the result please?
[11:03] <tasdomas> dimitern, sure
[11:05] <tasdomas> dimitern, still a failure
[11:05] <tasdomas> http://paste.ubuntu.com/10717644/
[11:06] <dimitern> tasdomas, ok, thanks - just to rule out the easy solution - try one more time with jc.SameContents instead of jc.DeepEquals?
[11:07] <tasdomas> dimitern, sure
[11:07] <natefinch> I think the lists are just different
[11:08] <dimitern> natefinch, they are the same length, so ISTM the order is the only issue
[11:09] <dimitern> tasdomas, could it be you're using a more recent mongodb?
[11:10] <tasdomas> dimitern, 2.6.3
[11:10] <tasdomas> dimitern, SameContents panics
[11:11] <dimitern> tasdomas, hmm I thought it could be mongo related; oh - is the panic about the map field ExtraConfig?
[11:11] <tasdomas> dimitern, not as far as I can tell: ➜  juju  (git)-[master]-% go test github.com/juju/juju/api/networker
[11:11] <tasdomas> ----------------------------------------------------------------------
[11:11] <tasdomas> PANIC: networker_test.go:204: networkerSuite.TestMachineNetworkConfig
[11:11] <tasdomas> [LOG] 0:00.011 DEBUG juju.environs.configstore Made dir /tmp/check-8674665223082153551/2/home/ubuntu/.juju/environments
[11:11] <tasdomas> [LOG] 0:00.099 DEBUG juju.environs.configstore writing jenv file
[11:11] <tasdomas> [LOG] 0:00.099 DEBUG juju.environs.configstore writing jenv file to /tmp/check-8674665223082153551/2/home/ubuntu/.juju/environments/dummyenv.jenv
[11:11] <tasdomas> [LOG] 0:00.099 DEBUG juju.environs.tools reading v1.* tools
[11:11] <tasdomas> [LOG] 0:00.099 INFO juju.environs.testing uploading FAKE tools 1.24-alpha1-trusty-amd64
[11:11] <tasdomas> [LOG] 0:00.101 INFO juju.environs.testing uploading FAKE tools 1.24-alpha1-precise-amd64
[11:11] <tasdomas> [LOG] 0:00.101 INFO juju.environs.testing uploading FAKE tools 1.24-alpha1-utopic-amd64
[11:11] <tasdomas> [LOG] 0:00.102 DEBUG juju.environs.tools no architecture specified when finding tools, looking for any
[11:12] <tasdomas> [LOG] 0:00.102 DEBUG juju.environs.tools no series specified when finding tools, looking for any
[11:12] <tasdomas> [LOG] 0:00.103 DEBUG juju.environs.simplestreams searching for metadata in datasource "existing metadata"
[11:12] <tasdomas> [LOG] 0:00.103 DEBUG juju.environs.simplestreams fetchData failed for "file:///tmp/check-8674665223082153551/5/tools/streams/v1/index2.sjson": stat /tmp/check-8674665223082153551/5/tools/streams/v1/index2.sjson: no such file or directory
[11:12] <tasdomas> [LOG] 0:00.103 DEBUG juju.environs.simplestreams streams/v1/index2.sjson not found, trying legacy index file
[11:12] <tasdomas> [LOG] 0:00.103 DEBUG juju.environs.simplestreams fetchData failed for "file:///tmp/check-8674665223082153551/5/tools/streams/v1/index.sjson": stat /tmp/check-8674665223082153551/5/tools/streams/v1/index.sjson: no such file or directory
[11:12] <tasdomas> [LOG] 0:00.103 DEBUG juju.environs.simplestreams cannot load index "file:///tmp/check-8674665223082153551/5/tools/streams/v1/index.sjson": invalid URL "file:///tmp/check-8674665223082153551/5/tools/streams/v1/index.sjson" not found
[11:12] <tasdomas> [LOG] 0:00.103 DEBUG ju
[11:12] <tasdomas> dimitern, sorry - wrong buffer
[11:12] <tasdomas> http://paste.ubuntu.com/10717662/
[11:13] <dimitern> tasdomas, right - yeah it's not obvious, but it is because the map field makes the struct unhashable
[11:14] <dimitern> tasdomas, ok, so I think we should definitely file a bug about this and tackle it at some point, but because it only happens with a more recent mongod, it's not a serious issue (I mean of the "fix asap" kind)
[11:14] <tasdomas> dimitern, understood
[11:14] <tasdomas> dimitern, I'll file the bug
[11:15] <tasdomas> dimitern, what version of mongo is preferable?
[11:15] <dimitern> tasdomas, great, thanks for your help
[11:15] <dimitern> tasdomas, the one we're using is 2.4.9 from trusty
[11:15] <dimitern> tasdomas, there are a few known issues with 2.6
[11:16] <dimitern> (more juju incompatibilities than mongo issues)
[11:42] <dimitern> dooferlad, I've updated the network spec to clarify a few bits around managing spaces, including the changes we discussed around update (added rename as a separate command)
[11:49] <jam> wallyworld: are you still around?
[11:50] <wallyworld> yeah
[11:52] <jam> wallyworld: well, I'm half tempted to just say "go to bed" :)
[11:52] <jam> wallyworld: but the other half of me has a question about the simplestreams tuff
[11:52] <jam> stuff
[11:53] <jam> wallyworld: specifically, in environs/simplestreams/simplestreams.go GetProductsPath
[11:53] <jam> we filter by datatype
[11:53] <jam> and then we return errors.NotFoundf if there are no entries that have the same DataType
[11:53] <jam> we also return NotFoundf if the cloud spec doesn't match
[11:54] <jam> but we return NoMatchingProductsError
[11:54] <jam> for the other circumstances
[11:54] <jam> (if the only thing that doesn't match is ProductId)
[11:54] <wallyworld> jam: ?
[11:55] <jam> wallyworld: so I'm curious why finding an index file but not finding cloudspec or datatype is considered NotFound
[11:55] <jam> but finding one that just doesn't have the product is considered NoMatchingProducts
[11:55] <jam> *specifically* because we special case NotFound in how we log errors
[11:56] <wallyworld> no particular reason, except maybe bad implementation. maybe we should have used NoMatchingClouds and NoMatchingDataTypes errors
[11:57] <wallyworld> we could or should fix it i guess
[11:57] <jam> wallyworld: well if we get noMatchingProductsError, then we do something weird and log it, but then fall through
[11:57] <jam> so you still get an items list instead of nil
[11:58] <wallyworld> from memory, that could be because the products might still be found in a different products file
[11:58] <wallyworld> but i'd have to look at the code, it's been a while
[11:58] <jam> wallyworld: sure, I have the feeling things have evolved, because the next higher loop keeps searching on any error
[11:59] <wallyworld> the implementation has grown organically to match what's served by cloudimages
[11:59] <wallyworld> it needs to be looked at fresh and a proper V2 refactoring done
[12:00] <wallyworld> mirrors were also added after V1
[12:00] <wallyworld> so it got very messy
[12:00] <jam> sure
[12:01] <wallyworld> i do wish it were rewritten
[12:07] <jam> wallyworld: http://reviews.vapour.ws/r/1362/
[12:09] <wallyworld> looking
[12:12] <wallyworld> jam: \o/
[13:06] <tasdomas> dimitern, ping?
[13:09] <dimitern> tasdomas, pong
[13:09] <tasdomas> dimitern, a uniter API question, if I may
[13:10] <dimitern> tasdomas, sure
[13:10] <tasdomas> dimitern, so the uniter uses only the uniter API client, I need to modify the way metrics are added to state (from the unit)
[13:11] <tasdomas> should I override the existing method in the uniter API (and APIserver) or should I introduce a different facade?
[13:11] <tasdomas> s/different/new
[13:11] <dimitern> tasdomas, it depends on how compatible the api change is - if it's adding a new method or adding args to an existing one, it's fine to do it directly in the current uniter api facade
[13:12] <tasdomas> dimitern, what if it's changing the params of an existing one?
[13:12] <dimitern> tasdomas, if it changes behavior or removes args/methods in a backward-incompatible way - a new facade version of the uniter api is needed
[13:12] <tasdomas> dimitern, got it - thank you
[13:13] <dimitern> tasdomas, np - it will help to have a look at the change, when ready
[13:14] <tasdomas> dimitern, it's related to api/metricstorage (and apiserver/metricstorage) - I introduced this as a new facade, but now see the error of my ways and will move the functionality to the uniter api client
[13:15] <dimitern> tasdomas, ok, ISTM it will be fine to add the new methods to the existing uniter api facade (v1 IIRC)
[13:16] <dimitern> tasdomas, however, keep in mind when implementing the client-side methods that the apiserver might be older and can return CodeNotImplemented error for the new methods
[13:19] <tasdomas> dimitern, thanks
[13:20] <tasdomas> dimitern, now I remember why I added this as a new facade - fwereade suggested I do this to avoid incrementing the version of the uniter facade (which is largish) due to this small change
[13:22] <dimitern> tasdomas, I see, there's no need to increment the version if only new (and not critical) methods are added (i.e. if the client code is prepared to handle gracefully the case when the new methods are missing)
[13:31]  * dimitern steps out for a bit
[14:01] <mup> Bug #1439204 was opened: environs/tools does not cleanup in the right order <test-failure> <windows> <juju-core:Triaged> <https://launchpad.net/bugs/1439204>
[14:25]  * dimitern is back
[14:28] <rogpeppe> dimitern: what's the current standard way of creating a juju review request?
[14:29] <jam> katco: fairly trivial review: http://reviews.vapour.ws/r/1363/diff/#
[14:29] <katco> jam: k i'll take a look
[14:29] <dimitern> rogpeppe, just propose a branch against one of the branches in juju/juju - a RB diff should be created automatically
[14:29] <jam> rogpeppe: generally propose a github merge
[14:29] <rogpeppe> dimitern: ah, i'll just make a PR then, thanks
[14:30] <jam> rogpeppe: rb notices and then updates reviews.vapour.ws and updates github with a link to it.
[14:30] <katco> jam: wow you weren't kidding :)
[14:30] <rogpeppe> jam: it would be nice if the README had contribution guidelines, i guess
[14:30] <katco> jam: +1
[14:30] <jam> rogpeppe: probably needs to be updated, I agree.
[14:30] <jam> I thought we had CONTRIBUTING
[14:31] <natefinch> we do... the code review section is kind of a wall of text, though
[14:32] <natefinch> and links to https://github.com/juju/juju/blob/master/doc/contributions/reviewboard.md
[14:32] <natefinch> which has more specifics
[14:33] <rogpeppe> jam: thanks - i hadn't seen that
[14:33] <rogpeppe> jam: the README should probably link to it
[14:35] <rogpeppe> jam: this particular PR is interesting from the p.o.v. of the recent discussion on juju-dev
[14:36] <rogpeppe> jam: as it uses an external type in the API serialisation
[14:36] <rogpeppe> jam: i'm not convinced that it's wrong here though
[14:36] <rogpeppe> jam: i'd be interested in your thoughts
[14:37]  * katco does happy dance
[14:37] <katco> i'm feeling good today :)
[14:38]  * rogpeppe is pleased for katco
[14:38] <jam> rogpeppe: I'd say I generally prefer being more explicit on API stuff because it is far too easy to stay compatible with yourself and break everyone else in the world
[14:38] <katco> rogpeppe: i've had a sinus infection for the past few weeks. haven't been able to do much of anything =/
[14:39] <rogpeppe> katco: ew
[14:39] <rogpeppe> katco: i also had a horrible flu thing last week, still feeling pretty knackered
[14:39] <katco> rogpeppe: seems everyone's been sick
[14:39] <rogpeppe> katco: yeah
[14:39] <katco> rogpeppe: and we'll bring it all to nuremberg! :)
[14:39] <rogpeppe> katco: i hadn't had something like this in decades
[14:39] <rogpeppe> katco: joy!
[14:40] <rogpeppe> katco: sinus thing sounds 'orrible
[14:40] <katco> rogpeppe: yeah it's been a few years since i've been knocked on my butt like this
[14:40] <katco> rogpeppe: i suppose it's a good reminder to appreciate the good times :)
[14:40] <rogpeppe> katco: well, continue to take it easy! yeah.
[14:40] <katco> rogpeppe: you too sir! see you in nuremberg i hope
[14:40] <rogpeppe> katco: deffo
[14:41] <rogpeppe> jam: the issue in this particular case is that we want to transfer a macaroon over the wire, and the macaroon wire format is well defined, and implemented inside the macaroon package
[14:42] <rogpeppe> jam: there's actually no easy way to use a different format except by depending on precisely the format we're trying to avoid depending on
[14:42] <jam> rogpeppe: isn't a serialized macaroon just a series of lines with a final hash?
[14:42] <rogpeppe> jam: no
[14:43] <rogpeppe> jam: and even if it was, i wouldn't want to duplicate all the (very well tested) serialisation code that's in the macaroon package
[14:43] <jam> rogpeppe: so I'm saying that the API can be "you get a string" which you then parse as a macaroon. not "you get a Macaroon object"
[14:44] <rogpeppe> jam: how is that any better?
[14:44] <jam> rogpeppe: fwiw I was confusing M.serialize() with M.inspect(). The latter being the line-by-line with a final sig
[14:44] <rogpeppe> jam: there's a JSON serialisation format too
[14:45] <rogpeppe> jam: (which is what we use in all the other APIs where we're using macaroons)
[14:47] <jam> rogpeppe: interestingly JSON serialization isn't supported by the android lib
[14:47] <rogpeppe> jam: which android lib?
[14:48] <jam> rogpeppe: https://android-arsenal.com/details/1/914
[14:48] <jam> rogpeppe: I was searching for "macaroon JSON serialization" and one of the first hits was "JSON not supported" :)
[14:48] <rogpeppe> jam: i'm not that surprised. the libmacaroons guy doesn't like the JSON serialisation much, although i took the format from his code.
[14:48] <jam> rogpeppe: so for something like this where there is an official format, it does feel like you should use it. It is different when it is one of *our* formats
[14:49] <jam> rogpeppe: the short text from google references you
[14:49] <jam> Merge branch '002-fix-json-serialization' of https://github.com/rogpe
[14:50] <rogpeppe> jam: yeah, i fixed libmacaroons so it actually worked
[14:51] <jam> rogpeppe: so is http://godoc.org/gopkg.in/macaroon.v1 just a wrapper around lib or you were just looking at interop ?
[14:51] <rogpeppe> jam: i was just looking at interop
[14:51] <rogpeppe> jam: (i wrote it independently to start with, but then converged the serialisation formats)
[14:54] <rogpeppe> jam, dimitern, katco, anyone else: anyway, a review would be appreciated if you have a few moments to look at it: http://reviews.vapour.ws/r/1364/diff/#
[14:54] <natefinch> the macaroon stuff is versioned with gopkg.in, right?  So in theory, it should never break compatibility unless juju updates to including a new version
[14:54] <ericsnow> fwereade: ping
[14:54] <jam> natefinch: there is code compat, and there is wire-protocol compat
[14:54] <jam> technically gopkg.in is only code compat I thought
[14:55] <katco> rogpeppe: i'll get to it in a few, working down the queue (but i'll skip to yours next)
[14:55] <natefinch> jam: I would hope that would be part of the contract
[14:55] <rogpeppe> katco: ta!
[14:55] <jam> certainly that was the discussion from something else, though the thought was that code compat would iterate faster than wire compat
[14:56] <natefinch> jam: I could see how v1 and v2 could share wire compatibility... but I can't see how v1 and v1.1 could not... at least, I think most people would assume they'd have compatible wire formats
[14:57] <jam> rogpeppe: so one problem with your proposal is that you are changing what is supported by the API without changing the version numbers. So new juju clients can't tell that the server doesn't support it, right?
[14:57] <jam> rogpeppe: you really do need a version bump
[14:58] <dimitern> rogpeppe, why macaroon-bakery.v0 and not v1? I thought v0 was only used as a fallback when no versions exist
[14:58] <jam> so that juju 1.25 can say "oh, I'm talking to Client v0, which doesn't support macaroon based conversations"
[14:58] <rogpeppe> jam: that seems right
[14:58] <rogpeppe> jam: where do i bump the version?
[14:58] <katco> rogpeppe: jam: natefinch: http://semver.org/
[14:58] <rogpeppe> dimitern: because we haven't decided that the bakery API is stable yet
[14:59] <rogpeppe> dimitern: i think it's getting close though
[14:59] <dimitern> rogpeppe, so imports will use v1 for the release of 1.23 then?
[14:59] <rogpeppe> dimitern: there is no v1 yet
[14:59] <rogpeppe> dimitern: i have one moderately big change in the pipeline and then we might consider bumping to v1
[15:00] <dimitern> rogpeppe, please do, as releasing juju depending on unstable imports is a no-no
[15:00] <dimitern> rogpeppe, apart from that LGTM I think
[15:01] <rogpeppe> dimitern: most of juju's imports are unstable...
[15:01] <dimitern> rogpeppe, none of the versioned ones are, except charm.v5-unstable, but we agreed to bump that to v5 before the release
[15:02] <rogpeppe> dimitern: most of juju's imports aren't versioned
[15:02] <dimitern> rogpeppe, I believe you got my point, come on :)
[15:02] <rogpeppe> dimitern: not really
[15:03] <rogpeppe> dimitern: i don't see there's a difference between an unversioned import and an import with a v0 version
[15:03] <rogpeppe> dimitern: and i don't think you'd be complaining if the import path was github.com/juju/bakery
[15:04] <dimitern> rogpeppe, the difference is simple: the unversioned import is not yet versioned, the unstable but versioned one is imported as unstable as a conscious choice
[15:04] <rogpeppe> dimitern: they're exactly the same. neither is stable.
[15:05] <rogpeppe> dimitern: i used .v0 just so it would be trivial to update with govers when the time comes to bump the version
[15:05] <dimitern> rogpeppe, ok, that's fine, I'm just saying we'll need to bump it before releasing a stable 1.23
[15:06] <rogpeppe> dimitern: i don't think that should be a necessary prereq
[15:06] <rogpeppe> dimitern: although i'd like to do it anyway
[15:08] <dimitern> rogpeppe, cool
[15:12] <natefinch> technically, with godeps, we don't really need versioning.  The version is just a nice human-readable suggestion.  For example... 1.23 says it's importing charm.v5-unstable, but it's really importing a commit from the 1.23 branch
[15:13] <natefinch> note that I do still think versioning is good and that we should continue to use it.
[15:17] <natefinch> we need juju status -f
[15:17] <natefinch> so it'll update the status whenever there's a change, and I don't have to spam juju status constantly
[15:31] <benji> "watch juju status" can be a reasonable substitute
[15:33] <natefinch> benji: yeah that works :)
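benji's workaround, for reference — both forms re-run `juju status` until interrupted, so they are illustrations rather than something to script against (and they assume `watch` from procps is installed):

```shell
# Re-run `juju status` every 2 seconds (watch's default interval),
# with -d highlighting whatever changed since the last run:
watch -d juju status

# Roughly equivalent poll loop without watch:
while juju status; do sleep 5; done
```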
[16:06] <voidspace> dimitern: so, there's a comment in extractInterfaces about skipping disabled network interfaces
[16:06] <voidspace> dimitern: but it doesn't skip them
[16:06] <voidspace> dimitern: if it did, life might be simpler...
[16:08] <dimitern> voidspace, oh, yeah - I noticed that one as well and glared at it :)
[16:08] <voidspace> dimitern: do we *need* to know the disabled interfaces?
[16:11] <dimitern> voidspace, we need to not try to use them as primary interfaces at least
[16:11] <dimitern> voidspace, that's coming from another bug around that code
[16:11] <voidspace> dimitern: well, if extract interfaces didn't return them there's a lot less chance we'd try and use them
[16:11] <voidspace> dimitern: although finding the actual bug would be good - maybe *as well*?
[16:11] <dimitern> voidspace, no, I mean we need to know whether they are disabled or not
[16:12] <dimitern> if possible
[16:12] <voidspace> dimitern: digging in still
[16:15] <dimitern> dooferlad, voidspace, PTAL http://reviews.vapour.ws/r/1365/ when you have a moment - juju subnet add cmd
[16:15] <dooferlad> dimitern: *click*
[16:16] <voidspace> also looking
[16:16] <dimitern> thanks guys
[16:25] <dooferlad> dimitern: I like CheckNumArgs - nice!
[16:27] <dimitern> dooferlad, it came along nicely yeah :)
[16:28] <dimitern> dooferlad, that helper is a candidate for factoring out in a common place later
[16:29] <dimitern> dooferlad, if you like, use it for spaces as well (just copying it for now I guess - for simplicity, then we'll move it somewhere)
[16:36] <dooferlad> dimitern: Yea, I was thinking of making use of it, though I was thinking of something like ParseArg(arg string, *parseFunction, *targetVariable) err {}
[16:36] <dooferlad> dimitern: Not sure if combining the two ideas just gets messy!
[16:38] <dooferlad> dimitern: also, it gets to a point where you are just writing code because it is a cool idea, rather than making progress. I think I am approaching that point :-)
[16:39] <dimitern> dooferlad, hah I know the feeling
[16:53] <dimitern> dooferlad, voidspace, and a final PR for today - juju subnet remove - https://github.com/juju/juju/pull/2014/ (the diff includes the previous PR - please ignore that)
[16:54] <TheMue> dimitern: what does " install -D -m 644 /dev/null '/var/lib/juju/nonce.txt'" of the userdata in cloudinit do?
[16:55] <dooferlad> TheMue: I believe that file is required for the machine to boot, but it doesn't need to contain anything. I ran into it a while ago.
[16:55] <dimitern> TheMue, it installs the nonce file for the machine agent
[16:56] <dimitern> dooferlad, TheMue, the nonce is used to verify the machine agent trying to login as "machine-X" is indeed allowed to do so (the provisioner generates the nonce when it starts an instance for machine-X)
[16:56] <dimitern> ok folks - time for me to go
[16:57] <TheMue> dimitern: I looked the whole bootstrap up and down
[16:57] <TheMue> dimitern: and everything really looks similar, so I'll now check the content of the nonce
[16:57] <dooferlad> TheMue: There are plenty of people having problems because that file is missing (https://www.google.co.uk/search?q=var%2Flib%2Fjuju%2Fnonce.txt%2520missing&gws_rd=ssl#q=var%2Flib%2Fjuju%2Fnonce.txt+missing)
[16:58] <dimitern> TheMue, I'd appreciate if you summarize your findings in a mail please, and let's discuss it tomorrow?
[16:58] <TheMue> dimitern: yep, will do
[16:58] <dimitern> TheMue, thanks!
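The cloud-init line TheMue asked about can be reproduced in isolation: `install -D` creates any missing parent directories, and copying `/dev/null` yields an empty file with the requested mode. A scratch directory is used here instead of the real `/var/lib/juju` path:

```shell
# Reproduce `install -D -m 644 /dev/null '/var/lib/juju/nonce.txt'`
# against a temporary root.
tmp="$(mktemp -d)"
install -D -m 644 /dev/null "$tmp/var/lib/juju/nonce.txt"

# The result is a zero-byte file with mode 644, parents created:
stat -c '%s %a' "$tmp/var/lib/juju/nonce.txt"   # 0 644
```

This is why the file "doesn't need to contain anything" at boot, as dooferlad says: the line only guarantees the file exists with the right permissions; the provisioner-generated nonce is what matters for agent login.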
[17:14] <alexisb> katco, you around?
[17:15] <katco> alexisb: i am!
[17:20] <alexisb> dooferlad, voidspace if you all have updates on the 1.23 networking bugs please provide them to me before you eod
[17:20] <alexisb> we need to make a call on releasing 1.23 by end of my day
[17:21] <jw4> alexisb: there is a high priority bug in upgrades that I may have a fix for today
[17:21] <jw4> (1.23)
[17:21] <jw4> 1438489
[17:22] <jw4> bug 1438489 (sorry for the stutter)
[17:22] <mup> Bug #1438489: juju stop responding after juju-upgrade <upgrade-juju> <juju-core:Triaged by johnweldon4> <https://launchpad.net/bugs/1438489>
[17:24] <alexisb> jw4, nice
[17:24] <alexisb> if we want to include it for beta3 it needs to land today
[17:24] <alexisb> otherwise it will go in a point release
[17:24] <jw4> alexisb: hopefully yep, although there seems to be another bug right behind it if this one clears
[17:25] <jw4> alexisb: in a different part of code but also to do with upgrades
[17:38] <mup> Bug #1439204 changed: environs/tools does not cleanup in the right order <test-failure> <windows> <juju-core:Fix Released by jameinel> <https://launchpad.net/bugs/1439204>
[18:13] <voidspace> alexisb: two bugs of the three I was working on done and fix committed
[18:13] <voidspace> alexisb: third in progress, partial fix but not the main issue, done - still hopeful to land tomorrow
[18:15] <voidspace> alexisb: can go into the next release though - or I can try and land the simple (partial) fix we have immediately
[18:22] <alexisb> voidspace, ack, thank you!
[18:28] <natefinch> ericsnow, wwitzel3: either of you still working on 1.23 bugs?
[18:32] <mup> Bug #1435413 changed: rename "juju backups restore" to "juju backups recover" <backup-restore> <juju-core:Won't Fix by ericsnowcurrently> <juju-core 1.23:Won't Fix by ericsnowcurrently> <https://launchpad.net/bugs/1435413>
[18:52] <ericsnow> natefinch: wrapping up some vmware stuff and there's still the 1 GCE bug that needs attention
[18:53] <natefinch> ericsnow: sounds like they're trying to get 1.23 cut tonight.   vmware is still a ways off, right?  Seems like GCE should be the priority
[18:56] <ericsnow> natefinch: yep
[19:15] <voidspace> g'night all
[19:33] <natefinch> ...and now that my HA stuff is ready, there's no graduated reviewers around
[19:33] <natefinch> le sigh
[19:33] <katco> natefinch: o/
[19:33] <katco> natefinch: finishing up a review and i'll TAL
[19:33] <natefinch> oh, shit,  I forgot some of the "newbies" are graduated :)
[19:34] <katco> natefinch: haha yeah :)
[19:34] <natefinch> katco: much appreciated - http://reviews.vapour.ws/r/1299/
[19:34] <katco> boy reviews are like laundry.... just when you think you might be done and you can work on something else :p
[19:40] <natefinch> katco: haha, yep
[19:41] <natefinch> katco: ahh crud, I have to fix some of the comments... I went from one type to another and back to the first, but I forgot to put the comments back to the first type
[19:41] <katco> natefinch: if you want to return the favor, fairly small: http://reviews.vapour.ws/r/1367/
[19:41] <katco> natefinch: ok np, lmk when it's ready
[19:42] <natefinch> katco: ok, fixed
[19:43] <katco> k looking
[19:43] <natefinch> katco: looking at your review
[19:43] <katco> natefinch: ty
[19:43] <natefinch> katco: The file 'featuretests/package_gccgo_test.go' (r3b6a6c8) could not be found in the repository
[19:43] <katco> natefinch: right i deleted it :)
[19:44] <katco> natefinch: i don't know why RB has problems with that
[19:44] <natefinch> katco: heh ok
[19:44] <katco> natefinch: https://github.com/juju/juju/pull/2015/files#diff-765f9c55678c118cef1ce099ec147738L1
[19:44] <katco> natefinch: that's what it was
[19:49] <ericsnow> any networking folks around?
[19:49] <ericsnow> I'm seeing the addressworker in a restart loop around this log entry:  2015-04-01 19:47:52 ERROR juju.worker runner.go:219 exited "addresserworker": environment does not support networking
[19:51] <natefinch> katco: ship it
[19:51] <katco> natefinch: woot! ty!
[19:51] <natefinch> katco: welcome :)
[19:52] <natefinch> I swear HA used to work on local, but it totally doesn't now.
[19:56] <mup> Bug #1439364 was opened: error in logs: environment does not support networking <juju-core:Triaged> <juju-core 1.23:Triaged> <https://launchpad.net/bugs/1439364>
[19:59] <mup> Bug #1439364 changed: error in logs: environment does not support networking <juju-core:Triaged> <juju-core 1.23:Triaged> <https://launchpad.net/bugs/1439364>
[20:08] <mup> Bug #1439364 was opened: error in logs: environment does not support networking <juju-core:Triaged> <juju-core 1.23:Triaged> <https://launchpad.net/bugs/1439364>
[20:27] <natefinch> katco: how's the review coming?
[20:27] <katco> natefinch: good. my wife came home so i got interrupted for a few
[20:28] <natefinch> katco: no problem. Sorry to pester you, just need to get it checked in today (if possible), and I'm running out of day.
[20:28] <katco> natefinch: np, reviewing as fast as i can :)
[20:32] <natefinch> katco: thank you, I appreciate it.
[20:35] <mup> Bug #1439375 was opened: logger in worker/instancepoller is named "juju.worker.instanceupdater" <juju-core:New> <https://launchpad.net/bugs/1439375>
[20:41] <katco> natefinch: i forget, do mgo update operations need an exists assert?
[20:42] <natefinch> katco: shouldn't... mongo is update == upsert  but I don't know if mgo somehow separates the two
[20:42] <katco> k
[20:43] <katco> natefinch: i have an initial review posted
[20:43] <katco> natefinch: i need to look at the api stuff again... that stuff is so tedious
[20:45] <natefinch> katco: yeah, the API stuff is super annoying.  totally needs to be either generated or somehow refactored to take out all the boilerplate
[20:45] <katco> natefinch: definitely
[20:47] <katco> natefinch: i see dimiter already commented on the api stuff, do you feel like he gave that a good once-over?
[20:47] <katco> natefinch: b/c the logic looks fine to me, it's the nuances of our RPC implementation that i'd really have to go over
[20:48] <natefinch> katco: yes, but I had to make changes to it in response to what he asked for, so really, he sort of needs to approve my changes. But of course, they weren't done until he went offline. le sigh.  Maybe I can get some of the NZ/AUS guys to review the API stuff and get it merged in later tonight.
[20:49] <katco> natefinch: that might be best... i don't want to just rubber stamp it.
[20:49] <katco> natefinch: the logic looks fine though
[20:49] <katco> natefinch: axw is the next OCR
[20:50] <natefinch> katco: no problem.  I appreciate the time you put into it.
[20:50] <katco> natefinch: today, it's my job ;)
[21:01] <natefinch> axw: when you get on, if you could review http://reviews.vapour.ws/r/1299/ I would really appreciate it, especially the API stuff.
[21:01] <katco> natefinch-dinner: i will ask him in our standup as well
[21:01] <natefinch-dinner> katco: thanks
[21:01] <ericsnow> wwitzel3: I think you may be right
[21:02] <katco> natefinch-dinner: enjoy dinner
[21:35] <cherylj> ericsnow, thumper:  Could I get a quick license review?  https://github.com/juju/jujusvg/pull/24
[21:35] <cherylj> cmars ^^
[21:41] <mup> Bug #1439398 was opened: GCE low-level RemoveInstances fails if firewalls are not found <juju-core:Triaged> <juju-core 1.23:Triaged> <https://launchpad.net/bugs/1439398>
[21:46] <ericsnow> cherylj: that license exception is the same one we have everywhere else, right?
[21:46] <cherylj> ericsnow: yes, I copied it from another repo
[21:46] <ericsnow> cherylj: cool, LGTM
[21:46] <cherylj> thanks!
[22:15] <jw4> I'm pulling my hair out with this local upgrade test... it seems to not be picking up my updated code on my 1.23 branch... any tips for iterating on upgrade testing?
[22:17] <jw4> ah, strike that, my code is getting pulled in... there was a problem in my assumptions about upgrades
[22:18] <jw4> it looks like my upgrade steps have to manually iterate units on the machine, they don't get passed in as part of the upgrade operation
[22:35] <ericsnow> just got bit by the fact that the pointer to a loop var does not change across iterations (which makes sense *now*)
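ericsnow's gotcha, sketched below. Historically Go's range loop reused a single variable across all iterations (Go 1.22 changed this to a fresh variable per iteration), so taking its address each time yielded the same pointer; the shared variable is made explicit here so the behaviour is identical on any Go version:

```go
package main

import "fmt"

func main() {
	nums := []int{1, 2, 3}

	// The bug: every appended pointer is the address of the same
	// variable, so all of them observe its final value.
	var shared int
	var ptrs []*int
	for _, v := range nums {
		shared = v
		ptrs = append(ptrs, &shared) // same address every iteration
	}
	for _, p := range ptrs {
		fmt.Print(*p, " ") // all three print 3
	}
	fmt.Println()

	// The fix: take the address of a fresh per-iteration copy.
	ptrs = ptrs[:0]
	for _, v := range nums {
		v := v // new variable scoped to this iteration
		ptrs = append(ptrs, &v)
	}
	for _, p := range ptrs {
		fmt.Print(*p, " ") // 1 2 3
	}
	fmt.Println()
}
```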
[22:42] <ericsnow> could I get a review on http://reviews.vapour.ws/r/1370/?
[22:42] <ericsnow> katco: ^^^
[22:55] <ericsnow> axw: ping
[22:56] <ericsnow> axw: I guess you won't be one quite yet :)
[22:56] <ericsnow> s/one/on/
[22:57] <ericsnow> wallyworld: perhaps you could take a quick look at http://reviews.vapour.ws/r/1370/
[22:57] <wallyworld> ericsnow: in a meeting, will look soon
[22:57] <ericsnow> wallyworld: thanks!
[23:06] <axw> ericsnow: LGTM
[23:06] <ericsnow> axw: thanks
[23:06] <axw> natefinch-dinner: will take a look a bit later on
[23:07] <axw> np
[23:07] <ericsnow> axw: it was such a dumb bug :(
[23:07] <axw> ericsnow: yeah, that's an easy one to fall for
[23:38] <mup> Bug #1439447 was opened: tools download in cloud-init should not go through http[s]_proxy <cloud-installer> <landscape> <juju-core:New> <https://launchpad.net/bugs/1439447>