[00:09] <ericsnow> davecheney: would you mind taking another look at http://reviews.vapour.ws/r/346/?
[00:21] <davecheney> wallyworld_: yeah, i can log a bug
[00:21] <davecheney> there isn't much we can do about it
[00:21] <wallyworld_> yeah :-(
[00:22] <wallyworld_> except fix our test fixture maybe
[00:22] <davecheney> the problem is there is no signal that says "the test suite is over, and there are no more test suites"
[00:22] <davecheney> the last bit is missing
[00:54] <menn0> wallyworld_, thumper: here's the other part of the upgrade steps work. http://reviews.vapour.ws/r/399/
[00:54] <thumper> ack
[00:55] <wallyworld_> ok
[01:04] <davecheney> wallyworld_: https://bugs.launchpad.net/juju-core/+bug/1391353
[01:04] <mup> Bug #1391353: state: testing suite leaks ~35 mb (one mongodb database) per test run <juju-core:New> <https://launchpad.net/bugs/1391353>
[01:05] <wallyworld_> ta
[01:10] <davecheney> wallyworld_: i can think of two solutions, one that isn't possible with 1.4
[01:10] <davecheney> how severe do you think this is ?
[01:10] <davecheney> the other solution is a fair bit of work
[01:10] <wallyworld_> hmmm, well it fails our landing runs every so often
[01:11] <davecheney> nah, i think that is different
[01:11]  * wallyworld_ has to buy Foo Fighters tickets NOW, can't talk for a bit
[01:11] <davecheney> 35mb is peanuts compared to the 600 gb or something we get on /mnt on a c3 xlarge
[01:44] <waigani> yikes, big thunderstorm here, killed the power
[01:53] <davecheney> sooooo close
[01:53] <davecheney> one failing test in state
[01:53] <davecheney> and i can repropose my branch
[02:09]  * thumper is crossing his fingers for 100Mb symmetric fibre UFB
[02:14] <ericsnow> thumper: thanks for that feedback
[02:15] <thumper> np
[02:15] <ericsnow> thumper: even though davecheney has made the point to me several times (thanks for your patience, Dave), it finally clicked for me today right before your review showed up :)
[02:15] <ericsnow> thumper: I'll be back online a bit later if you want to follow up
[02:15] <thumper> kk
[02:48] <menn0> thumper: I still can't get fibre here in central christchurch
[02:49] <menn0> thumper: parts of chch have it, but not our area yet
[02:50] <menn0> wallyworld_: cheers for the review
[02:50] <wallyworld_> np
[04:29] <ericsnow> thumper: about backups CLI
[04:30] <ericsnow> thumper: is it worth finding an alternative to "juju backups ..."?
[07:21] <dimitern> morning all
[07:28] <fwereade> dimitern, o/
[07:29] <dimitern> fwereade, heyhey
[07:32] <dimitern> fwereade, do we have a precedent for extending the meaning of an environment setting? I'm working on bug 1367863 and I'm thinking of changing "use-floating-ip" from boolean to string with "always|never|state-server" values
[07:32] <mup> Bug #1367863: openstack: allow only bootstrap node to get floating-ip <hp-cloud> <landscape> <openstack-provider> <juju-core:In Progress by dimitern> <https://launchpad.net/bugs/1367863>
[07:33] <dimitern> fwereade, or would you add a new setting instead?
[07:45] <fwereade> dimitern, hum, let me think
[07:46] <fwereade> dimitern, I *think* our upgrades are such that we could do that sanely now -- although we'd need to be prepared for the bools and able to convert them
[07:47] <dimitern> fwereade, yeah, so parsing needs to be a bit smarter
[07:47] <fwereade> dimitern, but would you double-check with menn0? I'm pretty sure everything'll go into upgrading mode before the changes start, and won't react until they have themselves upgraded
[07:48] <dimitern> fwereade, sure
[07:49] <dimitern> fwereade, upgrades do not change envs.yaml though.. how about when you try to upgrade an existing openstack environment using "use-floating-ip": true? After the upgrade it will effectively be the same as "use-floating-ip": "always", and if we change it to something else - then what?
[07:50] <dimitern> fwereade, it feels like this setting should be immutable once you bootstrap
[07:50] <fwereade> dimitern, well, envs.yaml becomes irrelevant once there's an env up
[07:51] <fwereade> dimitern, well *really* we should be assigning and unassigning them based on that setting and on services being exposed and unexposed
[07:51] <fwereade> dimitern, I don't suppose openstack does the floating-ip-on-expose thing yet though?
[07:52] <dimitern> fwereade, eventually yes, but for now I think it's safer not to allow changing the setting after bootstrap
[07:52] <dimitern> fwereade, yeah, it doesn't do it now
[07:53] <fwereade> dimitern, ok it mainly depends on the context of who needs it
[07:54] <dimitern> fwereade, so how about "always|never|state-server|auto" - the last meaning both "state-server" and add FIP on expose, remove it on unexpose?
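The backward-compatible parsing dimitern mentions above could look like the following minimal Go sketch. The type and function names here are hypothetical illustrations, not juju's actual config code; the idea is simply to accept both the legacy bool form and the new string form, mapping `true` to `"always"` and `false` to `"never"`:

```go
package main

import "fmt"

// FloatingIPMode is a hypothetical representation of the proposed
// "use-floating-ip" values; juju's real config code may differ.
type FloatingIPMode string

const (
	FIPAlways      FloatingIPMode = "always"
	FIPNever       FloatingIPMode = "never"
	FIPStateServer FloatingIPMode = "state-server"
	FIPAuto        FloatingIPMode = "auto"
)

// parseUseFloatingIP accepts both the legacy bool form and the new
// string form of the setting, so existing environments keep working
// after an upgrade.
func parseUseFloatingIP(v interface{}) (FloatingIPMode, error) {
	switch val := v.(type) {
	case bool:
		// Legacy boolean: true meant "always assign a floating IP",
		// false meant "never".
		if val {
			return FIPAlways, nil
		}
		return FIPNever, nil
	case string:
		switch m := FloatingIPMode(val); m {
		case FIPAlways, FIPNever, FIPStateServer, FIPAuto:
			return m, nil
		}
	}
	return "", fmt.Errorf("invalid use-floating-ip value: %v", v)
}

func main() {
	for _, v := range []interface{}{true, false, "state-server", "bogus"} {
		mode, err := parseUseFloatingIP(v)
		fmt.Println(mode, err)
	}
}
```

This also makes the upgrade concern concrete: an existing environment with `"use-floating-ip": true` parses to the same behaviour as `"always"`, with no envs.yaml change required.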
[07:54] <fwereade> dimitern, that seems likely sanest, especially considering the reference to bug 1287662 in there
[07:54] <mup> Bug #1287662: Add floating IP support <addressability> <canonical-is> <cts-cloud-review> <openstack-provider> <juju-core:Triaged by niedbalski> <https://launchpad.net/bugs/1287662>
[07:54] <dimitern> fwereade, it's a bit more work, but I think it's useful
[07:55] <fwereade> dimitern, I *am* raising a slight eyebrow at this being on the top of the list, given that the original bug is given a somewhat wooly "ehh this might be useful to some people" characterisation
[07:56] <fwereade> dimitern, and, hmm it looks like niedbalski just picked up that bug
[07:56] <fwereade> niedbalski, ping
[07:57] <dimitern> fwereade, that's true, the auto-assignment is nice, but not a priority
[07:58] <dimitern> fwereade, and we better do it sanely for all providers than piecemeal like this
[07:59] <fwereade> dimitern, well, I think there's some potentially serious collision between what you're doing and niedbalski is, to begin with
[07:59] <fwereade> dimitern, but regardless, why is the state-server-only bit a priority?
[08:00] <dimitern> fwereade, the bug is triaged as high for 1.21
[08:00] <fwereade> dimitern, ISTM that it's really not -- while the manual floating ip assignment is a straight-up bad idea compared to assign-on-expose
[08:01] <dimitern> fwereade, and assign-on-expose *does* include the case for adding FIP for the state server, surely
[08:01] <fwereade> dimitern, and I'm  just looking at "Just to be clear, I think the current behavior is great, this would be just another option that some orgs may want." in the bug
[08:03] <dimitern> fwereade, for canonistack or similar OS clouds with limited FIP pools it's very important, as they're barely usable otherwise (I can bootstrap, but can't deploy another node on lcy01 due to the FIP shortage, and lcy02 is 200% slower for me)
[08:04] <dimitern> fwereade, and it's actually very easy to implement the assign-FIP-only-for-JobManageState
[08:04] <fwereade> dimitern, yeah, but that's a pretty arbitrary hack, *especially* given what niedbalski seems to be going off and doing
[08:05] <dimitern> fwereade, ok, I'll put it on hold until I have a chat with niedbalski then
[08:05] <fwereade> dimitern, cool, thanks -- tbh I feel like those two really come down to "do floating ips like we should have in the beginning"
[08:06] <dimitern> fwereade, :) well said sir
[08:06] <fwereade> dimitern, so we should default to giving state servers floating ips, but not give them to anything else until they're running a unit of an exposed service
[08:07] <fwereade> dimitern, please push back hard on the "juju add-floating-ip" thing
[08:07] <fwereade> dimitern, because that's either going to take serious work in the model or get a bunch of NOT LGTMs from me
[08:07] <fwereade> dimitern, and that won't make anybody happy
[08:07] <dimitern> fwereade, of course, I never considered that sane
[08:08] <fwereade> dimitern, <3
[08:08] <dimitern> fwereade, it's a really ugly hack, and we did discuss having a better way to express this in the model, eventually
[08:09] <fwereade> dimitern, I *think* that expose is the right way to express it -- and I *think* that it's essentially consistent with the expose-as-relation plans too
[08:09] <fwereade> dimitern, so I'd really prefer to keep it under that umbrella until we've got really clear use cases that can't be solved that way
[08:10] <dimitern> fwereade, sure, we'll get there sooner or later
[08:35] <fwereade> dimitern, do you have a few minutes to review http://reviews.vapour.ws/r/403/ ?
[08:35] <dimitern> fwereade, sure, looking
[08:46] <mattyw> morning all
[08:49] <dimitern> mattyw, morning
[08:49] <voidspace> morning all
[08:49] <mattyw> dimitern, voidspace morning
[08:49] <dimitern> fwereade, reviewed; a few suggestions, but nothing blocking
[08:49] <dimitern> voidspace, morning
[08:50] <voidspace> dimitern: saw the reviews from you and axw
[08:50] <voidspace> dimitern: I will look at the maas code and see if we need to call release on each one
[08:51] <dimitern> voidspace, great, thanks
[08:54] <voidspace> dimitern: we could also ask *why* they won't release a node in disk erasing state
[08:54] <voidspace> dimitern: although the original bug was about a node that was commissioning
[08:54] <voidspace> dimitern: so we would still have to handle that
[08:55] <dimitern> voidspace, well, strictly speaking until the disk is erased it still contains data relevant to the user that allocated it
[08:55] <voidspace> dimitern: but release could return as a no-op like it does when you call release on a node that is Ready
[08:56] <voidspace> dimitern: because it will return to the Ready state "soon"
[08:56] <voidspace> dimitern: I don't see what the benefit / purpose of the error 409 is
[08:56] <voidspace> dimitern: effectively the state is "releasing"
[08:56] <dimitern> voidspace, is it?
[08:56] <voidspace> dimitern: isn't it?
[08:56] <voidspace> :-)
[08:56] <dimitern> voidspace, so it's no longer Allocated?
[08:57] <voidspace> dimitern: it is "scheduled for release"
[08:57] <dimitern> voidspace, right
[08:58] <dimitern> voidspace, ISTM juju needs to grow the knowledge for all those states wrt StartInstance, StopInstance and AcquireInstance (as a special case of the former)
[08:59] <dimitern> voidspace, but it doesn't have to happen all at once
[08:59] <voidspace> dimitern: we don't look at the status when we call StopInstances
[09:00] <dimitern> voidspace, we don't because we don't really care - the only place we care about status is in the instance updater calling Instances() periodically
[09:00] <voidspace> right, so why would we grow knowledge about those states if we don't care
[09:01] <voidspace> we just care about success or fail
[09:01] <voidspace> unless secretly we do care ;-)
[09:01] <dimitern> voidspace, but yesterday as I was fixing that openstack issue, I would've wanted a way to trigger an update to the cached instance state, if I know something relevant happened (e.g. an agent just went down)
[09:02] <TheMue> morning
[09:02] <voidspace> TheMue: morning
[09:02] <dimitern> morning TheMue
[09:03] <dimitern> voidspace, we only care if it helps juju handle better such corner cases, like "what does 409 mean in this context"
[09:03] <voidspace> right
[09:04] <voidspace> dimitern: so we call StopInstances with multiple ids
[09:04] <voidspace> dimitern: which passes those ids (after converting to maas ids) through to MAASObject.CallPost("release", ids)
[09:04] <voidspace> dimitern: the MaaS code takes a *single* id
[09:05] <voidspace> dimitern: I can't yet see how the call is converted into multiple calls to release
[09:05]  * TheMue thinks sometimes only time and a bit of sleep help. if I'm right I found why my vMAAS networking isn't working.
[09:05] <dimitern> voidspace, what if we continue calling it with multiple ids, unless we get 409, then we retry by calling it for each id
[09:05] <voidspace> dimitern: do you know the mechanism
[09:05] <voidspace> dimitern: sure, I understand the suggestion
[09:05] <dimitern> voidspace, what, what?
[09:05] <dimitern> :) let me see the maas source
[09:05] <voidspace> dimitern: src/maasserver/api/nodes.py
[09:06] <voidspace> dimitern: NodeHandler.release
[09:07] <voidspace> something turns a single api call into multiple calls to the nodehandler methods
[09:07] <voidspace> it's not the operation decorator
[09:07] <dimitern> hmm.. still looking
[09:07] <voidspace> urls_api.py RestrictedResource maybe
[09:09] <dimitern> voidspace, btw which maas version are you testing on?
[09:09] <voidspace> dimitern: 1.7
[09:10] <voidspace> dimitern: found it
[09:10] <voidspace> dimitern: there's a release method that handles multiple nodes
[09:10] <voidspace> dimitern: it does them all separately
[09:10] <dimitern> voidspace, and returns the first error?
[09:11] <voidspace> dimitern: so one failing will cause an error to be raised, but the others will be done
[09:11] <voidspace> dimitern: well, it concatenates messages
[09:11] <voidspace> if any(failed): raise NodeStateViolation()
[09:11] <voidspace> NodeStateViolation is error 409
[09:12] <dimitern> voidspace, hmm.. so if we parse the response, can we tell which ones failed?
[09:12] <voidspace> dimitern: we could if we cared
[09:12] <dimitern> voidspace, we should care if all the rest are unchanged
[09:13] <voidspace> dimitern: the rest have been released
[09:13] <dimitern> voidspace, sweet!
[09:13] <voidspace> dimitern: error 409 means "Juju doesn't need to care about those nodes any more"
[09:13] <voidspace> and the rest are done
[09:13] <dimitern> voidspace, so we don't care then, but it deserves a comment about it
[09:13] <voidspace> ok, will do
[09:13] <dimitern> voidspace, cheers
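The comment voidspace agrees to add could be accompanied by handling along these lines. This is a hypothetical sketch, not juju's actual MAAS provider code (the real client call is gomaasapi's `MAASObject.CallPost("release", ...)`); it captures the conclusion above: a 409 from MAAS's bulk release means some nodes were in a transient state, but MAAS has already released all the others, so Juju no longer needs to care about any of them:

```go
package main

import "fmt"

// statusConflict is MAAS's NodeStateViolation response to a bulk
// release where any node couldn't be released.
const statusConflict = 409

// releaser abstracts the MAAS bulk-release call for this sketch; the
// real code goes through the gomaasapi client.
type releaser func(ids []string) int

// stopInstances sketches the conclusion from the discussion: on 409,
// the offending nodes are already scheduled for release (e.g. disk
// erasing) and will return to Ready on their own, while the rest have
// been released, so the error can be swallowed rather than retried
// per-id.
func stopInstances(release releaser, ids []string) error {
	switch status := release(ids); status {
	case 200:
		return nil
	case statusConflict:
		// 409 means at least one node was in a transient state,
		// but MAAS released the remainder; nothing left for us
		// to do for any of them.
		return nil
	default:
		return fmt.Errorf("cannot release nodes %v: HTTP %d", ids, status)
	}
}

func main() {
	fake := func(ids []string) int { return statusConflict }
	fmt.Println(stopInstances(fake, []string{"node-1", "node-2"}))
}
```

The alternative dimitern floated earlier (retrying each id individually on 409) turns out to be unnecessary precisely because the server-side `if any(failed): raise NodeStateViolation()` runs only after all releasable nodes have been processed.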
[09:15] <dimitern> fwereade, re bug 1301996 - I have a lingering feeling this is caused by service settings reference counting we implemented post 1.16
[09:15] <mup> Bug #1301996: config-get error inside config-changed: "settings not found" <config-get> <cts-cloud-review> <landscape> <juju-core:Triaged by fwereade> <https://launchpad.net/bugs/1301996>
[09:16] <fwereade> dimitern, I think service settings refcounting has been around longer than that
[09:16] <fwereade> dimitern, I think I tried to track it down and couldn't
[09:16] <fwereade> dimitern, if you can that would *absolutely* be a good use of time though
[09:17] <fwereade> dimitern, and, yeah, it's a complex area, I won't swear to it being bug-free
[09:17] <dimitern> fwereade, when did we cache the settings in the context initially? at relation-joined?
[09:17] <dimitern> fwereade, hmm.. or maybe it was in EnterScope
[09:18] <fwereade> dimitern, config settings don't get cached except within a single hook execution
[09:18] <fwereade> dimitern, oo I just had a thought
[09:18] <fwereade> dimitern, order of execution in refcount ops interacting badly with ReadConfigSettings?
[09:19] <dimitern> fwereade, could be, but that would be hard to reproduce
[09:20] <fwereade> dimitern, new settings doc needs to exist before unit charm-url field changes; old settings doc needs to exist until after it changes; and we need to be prepared for a refresh/retry in ReadConfigSettings anyway
[09:20] <fwereade> dimitern, I bet we're missing one of those things
[09:20] <fwereade> dimitern, most likely the last one, I *think* I remember considering the first two when I wrote it
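The ordering invariants fwereade lists can be made concrete with a toy in-memory model. This is only an illustrative sketch of the refcounting discipline, not juju's state code (which expresses these as mgo/txn operations): the new settings doc must exist before the charm-url field changes, and the old doc must survive until after it has changed:

```go
package main

import "fmt"

// store is a toy model of refcounted per-charm settings documents.
type store struct {
	refs     map[string]int // settings-doc key -> refcount
	charmURL string
}

func (s *store) incref(key string) {
	s.refs[key]++
}

func (s *store) decref(key string) {
	if s.refs[key]--; s.refs[key] == 0 {
		// The settings doc is removed once nothing references it.
		delete(s.refs, key)
	}
}

// setCharmURL applies the operations in the safe order: incref the
// new settings first, then flip the charm-url field, and only then
// decref the old settings. Decref'ing early is exactly the kind of
// reordering that could make a concurrent config-get see "settings
// not found".
func (s *store) setCharmURL(newURL string) {
	old := s.charmURL
	s.incref(newURL) // 1. new settings exist before the change
	s.charmURL = newURL
	s.decref(old) // 2. old settings dropped only afterwards
}

func main() {
	s := &store{refs: map[string]int{"cs:old-1": 1}, charmURL: "cs:old-1"}
	s.setCharmURL("cs:new-2")
	fmt.Println(s.charmURL, s.refs)
}
```

The third invariant (readers must be prepared to refresh and retry) lives on the reading side and isn't modelled here; in a real transactional store a reader can still race the swap and must re-fetch the charm url on a miss.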
[09:20] <dimitern> fwereade, I'll have a deeper look
[09:21] <fwereade> dimitern, but I have seen arbitrary changes to order-of-operations landing too
[09:21] <fwereade> dimitern, so they're worth a look as well
[09:21] <fwereade> dimitern, tyvm
[09:29] <dimitern> fwereade, so it seems service.changeCharmOps shouldn't decref the old settings until the service's charm url has changed
[09:29] <voidspace> dimitern: any chance of a ship it on the updated diff http://reviews.vapour.ws/r/397/
[09:29] <voidspace> dimitern: when you have a minute
[09:29] <voidspace> and I'll get coffee
[09:30] <dimitern> voidspace, looking
[09:30] <dimitern> fwereade, hmm.. but it does add the decref ops at the very end
[09:30] <axw> voidspace: thanks for verifying the 409 thing.
[09:30] <voidspace> axw: no prob, thanks for the review
[09:31] <fwereade> dimitern, the unit is the tricky one re refcounting
[09:31] <fwereade> dimitern, the unit gets the settings according to its current charm url
[09:31] <fwereade> dimitern, not the service's
[09:31] <dimitern> voidspace, ship it! :)
[09:31] <dimitern> fwereade, ok, I'll have a look there as well
[09:34] <voidspace> dimitern: thanks
[09:45] <dimitern> fwereade, just looking at the code I can see 3 potential issues: in changeCharmOps - 1) if the new settings do not exist yet, we'll generate an Insert op both with createSettingsOp and settingsIncRefOps(..., canCreate=true); 2) in SetCharm: the TODO comment from waigani is correct - an assert will trigger the code calling isAliveWithSession passing the service name as key to the settings collection, which will surely return an error
[09:45] <dimitern> ; 3) in unit.SetCharmURL we're doing an incref(new settings) op, but not asserting the old settings hasn't changed
[09:47] <fwereade> dimitern, I don't see why 3 is a problem (*except* when we're creating the new settings)
[09:47] <fwereade> dimitern, (2) at least should be easy to repro with a TxnHook test
[09:48] <fwereade> dimitern, for (1) I can't remember if we have export_tests for actual settings refcounts but I think we do so that should be reproable as a unit test too
[09:48] <dimitern> fwereade, in unit.SetCharmURL we're never creating the settings, I suppose because we expect service.SetCharm to have done it
[09:49] <fwereade> dimitern, ahh, there could be some way to miss that, couldn't there
[09:49] <dimitern> fwereade, but what if the service charm url changes again during unit.SetCharmURL ?
[09:49] <fwereade> dimitern, would depend on a very annoying and quick sequence of service charm changes
[09:49] <fwereade> dimitern, I *think* that should never matter
[09:49] <dimitern> fwereade, right
[09:50] <fwereade> once a unit has set its charm url, it's got a ref to the settings, so the service changing charm url shouldn't be an issue, the refcount shouldn't hit 0
[09:50] <dimitern> fwereade, so fixing these 3 issues and writing proper tests should fix the bug.. but I need to find a way to reproduce it first
[09:51] <fwereade> dimitern, well, if you can come up with a precise sequence of unit tests that will repro it -- which you should be able to with txn-hooks, I think -- that's good enough for me
[09:51] <TheMue> f**k
[09:51] <TheMue> sorry
[09:51] <fwereade> dimitern, trying to repro it in a running system will be insanely timing-dependent, I think
[09:51] <dimitern> fwereade, if the service changes the url again that will decref the old settings, and since a unit is still upgrading to the old url it will incref the old settings and they'll stay there forever... hmm or perhaps not, because next time the unit upgrades it will decref them
[09:52] <fwereade> dimitern, that's the idea, yeah
[09:53] <dimitern> fwereade, well, you can always add sleeps to trigger it :) I'll dig in some more
[10:00] <dimitern> jam1, jam3, standup?
[10:05] <voidspace> dimitern: TheMue: I just got dumped out
[10:05] <voidspace> dimitern: TheMue: I think I have to authenticate again
[10:05] <dimitern> voidspace, ha, sorry
[10:08] <voidspace> jam1: ping
[10:08] <voidspace> jam1: standup?
[11:00] <wallyworld_> fwereade: you free now?
[11:01] <fwereade> wallyworld_, sure
[11:01] <wallyworld_> coolio, meet you in hangout
[11:25] <fwereade> wallyworld_, ah ffs, waiting for plus.google.com
[11:25] <wallyworld_> fwereade: connection sucks, but we had sorta finished anyway
[11:26] <wallyworld_> the url stuff looked ok at first go, so long as all existing tests are kept
[11:26] <wallyworld_> the existing test reflect what we want/need to do
[12:15] <rogpeppe> fwereade: i know there was talk of allowing a charm to provide feedback to the client (eg. to indicate that something is wrong without necessarily entering a hook error state). has anything like that been implemented yet?
[12:53] <voidspace> dimitern: you're ocr today :-)
[12:53] <voidspace> dimitern: http://reviews.vapour.ws/r/404/
[12:53] <voidspace> my PR is 404...
[12:54] <dimitern> voidspace, sure, will look shortly
[12:55] <voidspace> dimitern: np
[12:55] <voidspace> dimitern: I'll be going on lunch in a bit, no hurry
[14:28] <dimitern> voidspace, sorry I got distracted; you've got a review
[14:32] <voidspace> dimitern: thanks
[14:32] <voidspace> ctrl-w in wrong window...
[16:38] <mfoord> dimitern: ping
[16:38] <mfoord> dimitern: the new maasEnviron.ListNetworks
[16:39] <mfoord> dimitern: other than the return type, how will it be different from maasEnviron.getInstanceNetworks ?
[16:40] <mfoord> and the fact that it takes an instance id rather than an instance.Instance
[18:28] <dimitern> mfoord, sorry, was afk; getInstanceNetworks is close to what ListNetworks needs, but there's some CIDR parsing/validation logic in maasEnviron.setupNetworks which I'd like to happen when ListNetworks processes the result of getInstanceNetworks
[19:09] <mfoord> g'night all
[19:28] <hazmat> some strange relation behavior.. JUJU_REMOTE_UNIT is not in relation-list
[19:30] <hazmat> hmm
[19:52] <thumper> fwereade: hey there
[19:52] <thumper> fwereade: we should set up regular calls again
[20:11] <davecheney> menn0: thanks for offering to review that mega branch
[20:11] <davecheney> i'll address the copyright issues and push it again now
[20:13] <menn0> davecheney: ok, let me know when it's there
[20:14] <davecheney> menn0: done
[20:14] <menn0> davecheney: looking
[20:15] <menn0> davecheney: i don't see it on RB. is this a pre-RB branch?
[20:16] <davecheney> menn0: nope
[20:16] <davecheney> rb has shat itself
[20:16] <davecheney> see email from wallyworld_
[20:16] <davecheney> waigani_:
[20:16] <menn0> davecheney: ok. i haven't seen that yet.
[20:16] <davecheney> it is possible that this branch was what caused the coronary
[20:17] <thumper> heh
[20:18] <menn0> davecheney: i don't have that email
[20:18] <menn0> davecheney: but i'll review on GH
[20:19] <davecheney> Jesse Meek
[20:19] <davecheney> 6:28 AM (50 minutes ago)
[20:19] <davecheney> Reply to all
[20:19] <davecheney> to juju-dev
[20:19] <davecheney> The latest three reviews on GitHub (#1103,#1102,#1101) I cannot see in Review Board. Do we have a loose wire?
[20:19] <menn0> davecheney: right, but you said wallyworld_  :)
[20:20] <davecheney> yes, i corrected that on the next line
[20:20] <davecheney> sorry for the confusion
[20:20] <menn0> sorry, I thought you were intending to write something to jesse :)
[20:20] <menn0> anyway... reviewing!
[20:21] <menn0> davecheney: GH/my browser paused for much longer than normal when opening the diff :)
[20:22] <waigani_> davecheney: I don't think it was your branch, there are two PRs before yours that also have not popped up on RB
[20:24] <davecheney> menn0: it's a mere 2,400 lines of change
[20:25] <menn0> davecheney: most of it is very mechanical though... you're wearing out my scroll wheel :)
[20:25] <davecheney> menn0: juju.go and constants.go at the root are the only real changes
[20:25] <davecheney> anything which referenced those types s/params/juju
[20:25] <davecheney> there are no other non-mechanical changes
[20:27] <davecheney> menn0: oh, you found that
[20:27] <menn0> :)
[20:28] <davecheney> there are only two of those
[20:28] <davecheney> in the other places I jj "github.com/juju/juju"
[20:28] <davecheney> please don't mention the war
[20:28] <davecheney> (about packages)
[20:30]  * menn0 goes to install more ram just so he can finish this review
[20:31] <davecheney> dat pagination
[20:31] <davecheney> btw, goimports is great
[20:32] <davecheney> alias gi='goimports -w .'
[20:32] <davecheney> edit edit,
[20:32] <davecheney> gi
[20:32] <davecheney> go test
[20:32] <davecheney> next package
[20:32] <menn0> yeah... I have it hooked up to Emacs when I save a .go file.
[20:32] <menn0> lifechanging
[20:34] <cmars> davecheney, waigani_ thanks for reviewing my id branch (#1081). i've pushed fixes but a couple of questions for y'all
[20:34] <cmars> davecheney, question for you here, http://reviews.vapour.ws/r/338/#comment2894
[20:37] <menn0> davecheney: done
[20:37] <davecheney> menn0: ta
[20:37] <davecheney> i've fixed that nit
[20:37]  * menn0 dips his mouse in a glass of water to stop the smoke
[20:37] <cmars> waigani_, question for you, more of a general login security thing: http://reviews.vapour.ws/r/338/#comment3469
[20:38] <waigani_> cmars: be with you in a sec
[20:41] <waigani_> menn0: http://reviews.vapour.ws/r/400/
[20:43] <davecheney> hmm, have all our bots died ? https://github.com/juju/juju/pull/1103
[20:44] <davecheney> cmars: good point
[20:44] <davecheney> given that it needs to exist in that odd form
[20:44] <davecheney> just reply to that comment and tell me to pull my head in
[20:45] <waigani_> cmars: replied. Wrapping in common.ErrBadCreds makes sense, agreed
[20:46] <cmars> davecheney, waigani_ thanks
[20:48] <waigani_> davecheney: have you tried using the rbt to push to RB?
[20:50] <davecheney> waigani_: nup
[20:50] <davecheney> waigani_: but the bot i was worried about is the commit bot
[20:50] <davecheney> which hasn't picked up the $$merge$$ in 10 minutes
[20:51] <waigani_> oh..
[20:54] <waigani_> I need to get out of this house, going to work in town. I'll be offline for 20min while I drive in.
[21:15] <davecheney> thumper: i found a bug in set.Strings yesterday
[21:15] <davecheney> http://paste.ubuntu.com/8948366/
[21:15] <davecheney> hold
[21:15] <davecheney> thumper: http://paste.ubuntu.com/8948372/
[21:15] <davecheney> ^ this one
[21:25] <menn0> davecheney: not sure if you've seen but looks like the build bot worked eventually but there's test failures
[21:26] <davecheney> yeah
[21:26] <davecheney> looking into the maas failures now
[21:38] <alexisb> thumper, ping
[21:38] <thumper> alexisb: hey there
[21:38] <alexisb> hey thumper you have a second?
[21:38] <alexisb> for a hangout
[21:39] <thumper> yep, just getting off with cmars
[21:50] <bodie_> anyone know how to use jc.TimeBetween?
[21:50] <bodie_> I think there might be a typo in its Check
[21:59] <davecheney> thumper: ping, 08:15 < davecheney> thumper: http://paste.ubuntu.com/8948372/
[21:59] <thumper> yep... otp right now, with you very soon
[22:00] <davecheney> kk
[22:00] <davecheney> just found this bug is affecting the maas code
[22:09] <thumper> davecheney: I need to go get my daughter from school - she is unwell
[22:09] <thumper> davecheney: can we chat when I get back?
[22:09] <davecheney> sure
[22:23] <thumper> davecheney: our regular 1:1 hangout
[22:28] <davecheney> thumper: coming
[22:28] <thumper> like a bird ;-)
[22:29] <davecheney> imma here
[22:29] <davecheney> did you mean the standup hangout ?
[22:29] <thumper> hmm... I'm there too.
[22:29] <thumper> no, our 1:1
[22:29]  * thumper tries again
[22:47] <menn0> cmars: i'm looking at review 338
[22:47] <menn0> cmars: has the "juju server" command been given general approval?
[22:51] <cmars> menn0, might need more discussion, now that I think about it
[22:51] <cmars> menn0, how about I break out the subcommand into a separate PR?
[22:52] <menn0> cmars: or just disable it pending further discussion (but leave the code there)
[22:52] <menn0> cmars: just don't hook it up
[22:55] <cmars> menn0, it's currently not hooked up to the cmd/juju supercommand
[23:03] <menn0> cmars: ok, that's fine then. I haven't gotten to that part of the PR yet. I was asking based on the description of the changes.
[23:23] <davecheney> menn0: waigani thumper https://github.com/juju/utils/pull/84
[23:23] <menn0> davecheney: still doing another review but will get there soon
[23:23] <davecheney> kk
[23:23] <davecheney> there is no rbt on that repo
[23:24] <davecheney> oh
[23:24] <davecheney> actaully there is
[23:24] <davecheney> wadda you know
[23:24] <davecheney> http://reviews.vapour.ws/r/406/
[23:24] <waigani> yay, rb is over its hangover
[23:24] <davecheney> \o/