[00:54] <thumper> jcastro: 2 minutes... funny man
[02:21] <fss> niemeyer: ping
[02:21] <niemeyer> fss: Heya
[02:21] <fss> niemeyer: just finished merging the patch
[02:22] <fss> niemeyer: I noticed that you guys moved testServer, and renamed PrepareResponse to Response, so my patch now breaks the build, should I fix it or send another patch?
[02:24] <fss> niemeyer: I prefer to get it right in this patch, but I'd like to know what you think about it
[02:25] <niemeyer> fss: You're right
[02:25] <niemeyer> fss: It should be fixed before going in indeed
[02:26] <niemeyer> fss: I did the change.. we now have a single http test suite instead of one per package
[02:26] <niemeyer> fss: Together with a few other fixups
[02:28] <fss> oops
[02:28] <fss> looks like I broke iamtest in the merging process
[02:33] <fss> and there we go
[02:34] <fss> niemeyer: ready for merging https://codereview.appspot.com/6855104/
[02:35] <fss> time to sleep
[02:36] <fss> niemeyer: tomorrow I will address your points in the user policy cl (https://codereview.appspot.com/6858081/)
[02:36] <fss> niemeyer: good night :-)
[02:36] <niemeyer> fss: Awesome, thanks a lot
[02:36] <niemeyer> fss: I'll merge this in the morning tomorrow
[02:37] <fss> niemeyer: thank you
[06:40] <fwereade> wallyworld, ping
[06:41] <wallyworld> fwereade: hi
[06:42] <fwereade> wallyworld, heyhey
[06:42] <wallyworld> w'zap?
[06:42] <fwereade> wallyworld, would you expand briefly on the impact of the instance-id metadata thing you proposed?
[06:42] <wallyworld> sure, what do i need to clarify?
[06:43] <wallyworld> sorry about the codereview diff, no idea what happened there
[06:43] <wallyworld> lp got it correct
[06:43] <fwereade> wallyworld, am I right in thinking that the ec2-style instance-id data itself is useless, because we can't use it in the api... but it's better than ""?
[06:43] <fwereade> wallyworld, proposing again will sometimes fix that apparently
[06:44] <wallyworld> that's my understanding - the numeric id seems to be used in the API for specifying the server etc
[06:44] <wallyworld> and old openstacks do not give us uuid
[06:44] <wallyworld> eg hp cloud
[06:44] <fwereade> wallyworld, ey up, there are *three* identifiers? crap
[06:45] <wallyworld> yeah - ec2 style, numeric id, uuid
[06:45] <wallyworld> but numeric id deprecated afaik
[06:45] <wallyworld> fwereade: maybe you can help me with something
[06:45] <fwereade> wallyworld, so numeric is always available but deprecated, ec2 style is just useless, and uuid is what we want but not always available
[06:46] <wallyworld> yes
[06:46] <wallyworld> afaik
[06:46] <fwereade> wallyworld, ouch -- ok, go on, how can I help?
[06:46] <wallyworld> i had to use quantal image on hp cloud cause i was uploading self compiled tools
[06:46] <wallyworld> and when i did a deploy mysql after a bootstrap went ok, i got
[06:46] <wallyworld> error: cannot get latest charm revision: charm info errors for "cs:quantal/mysql": entry not found
[06:47] <wallyworld> yet with precise it did not complain
[06:47] <fwereade> wallyworld, quick g+? quicker than typing I think
[06:47] <wallyworld> sure, sec, gotta get headphones
[06:48] <fwereade> wallyworld, https://plus.google.com/hangouts/_/b9cd7e78738e463f4299770d1f17a3ffece7c960?authuser=0&hl=en
[07:15] <rogpeppe> mornin' all
[07:33] <fwereade> rogpeppe, heyhey
[07:33] <wallyworld> morning
[07:33] <wallyworld> fwereade: just checked the code - provider instance id returns ServerDetails.Id, which will be wrong on Canonistack
[07:34] <wallyworld> since fetchInstanceId uses UUID on Canonistack
[07:34] <fwereade> wallyworld, ah balls :(
[07:34] <wallyworld> but it will work on HP Cloud :-)
[07:34] <fwereade> haha
[07:34] <wallyworld> i think i should land as is and then follow up with a branch to sort it out
[07:35] <wallyworld> oh look at the time. beer o'clock :-D
[07:39] <fwereade> wallyworld, I think I'd prefer holding off a little, see if we can figure something out at the sprint that doesn't regress
[07:40] <fwereade> wallyworld, I think the python workaround for weird instance ids could still come into play here in a generally useful way
[07:45] <wallyworld> ok
[07:46] <wallyworld> we already cater for int ids vs string ids (not the uuid, id can be string or int)
[07:46] <wallyworld> so a bit more logic won't hurt
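The exchange above distinguishes three identifier forms (ec2-style, deprecated numeric id, uuid) and notes that old OpenStacks like HP Cloud don't return a uuid. A minimal sketch of the fallback logic being discussed might look like this; the type and field names here are illustrative, not goose's actual API:

```go
package main

import "fmt"

// serverDetails mirrors the kind of data an OpenStack server query
// returns; the field names are hypothetical, not goose's real ones.
type serverDetails struct {
	UUID string // preferred, but absent on older OpenStacks (e.g. HP Cloud)
	ID   string // deprecated numeric id, always present; may arrive as int or string
}

// instanceID picks the best available identifier: the UUID when the
// cloud provides one, otherwise the deprecated numeric id.
func instanceID(s serverDetails) string {
	if s.UUID != "" {
		return s.UUID
	}
	return s.ID
}

func main() {
	fmt.Println(instanceID(serverDetails{UUID: "a1b2", ID: "42"})) // a1b2
	fmt.Println(instanceID(serverDetails{ID: "42"}))               // 42
}
```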
[09:44] <dimitern> anyone fancy a quick review? https://codereview.appspot.com/7438044/
[09:47] <dimitern> fwereade: ping
[09:48] <dimitern> fwereade: https://codereview.appspot.com/7425044/ - the wip branch about config changes (added a bunch of TODOs at key points)
[09:56] <TheMue> dimitern: Will look in a few moments.
[09:56] <dimitern> TheMue: cheers
[10:07] <fwereade> dimitern, heyhey, I think I'm going to get an early lunch, I'm kinda tired today, so I might do that a bit later
[10:07] <dimitern> fwereade: no worries
[10:07] <dimitern> fwereade: just a quick look on the charmurl CL perhaps?
[10:08] <fwereade> dimitern, doing that already, was planning to surprise you with it
[10:08] <fwereade> dimitern, underpromise, overdeliver ;p
[10:08] <dimitern> fwereade: :D cool
[10:10] <fwereade> dimitern, https://codereview.appspot.com/7438044/ LGTM
[10:11] <dimitern> fwereade: tyvm
[10:13]  * fwereade decides he can't properly follow the filter changes without more food inside him, bbiab
[10:22] <teknico> rogpeppe, good morning, remember that API design doc you wrote? where is it? can't find it anymore
[10:22] <rogpeppe> teknico: one mo
[10:23] <rogpeppe> teknico: https://codereview.appspot.com/7314085/
[10:25] <teknico> rogpeppe, great, thanks. when is it going to land? :-)
[10:25] <rogpeppe> teknico: when i get some reviews. i still need to do the watcher part, but i can probably land it before that.
[10:45] <dimitern> TheMue: thanks for reviewing https://codereview.appspot.com/7425044/, but since it's WIP still, I'll really like a review on the other CL, which is ready for integration - https://codereview.appspot.com/7438044/
[11:04] <TheMue> dimitern: OK
[11:05] <dimitern> TheMue: :) cheers
[11:06] <rogpeppe> fwereade: ping
[11:14] <TheMue> dimitern: LGTM
[11:14] <dimitern> TheMue: tyvm
[11:14] <TheMue> dimitern: YQ
[11:14] <TheMue> s/YQ/YW/
[11:24] <fwereade> rogpeppe, pong
[11:24] <rogpeppe> fwereade: is there a good reason the API needs to provide Expose and Unexpose rather than SetExposed? i can't think of one right now.
[11:26] <rogpeppe> fwereade: (just seeing an opportunity to lose some boilerplate code)
[11:26] <dimitern> I have an intermittent failure again while running the tests: http://paste.ubuntu.com/5576020/ - there's a bug about it, I updated and tagged it.
[11:26] <fwereade> rogpeppe, IIRC niemeyer's reasoning is that if/when flags are added to control styles of exposure/unexposure it'll be nasty if we have to share the method
[11:26] <rogpeppe> fwereade: ah, ok, that's a good enough reason.
[11:26] <fwereade> dimitern, thanks for that
[11:26] <rogpeppe> fwereade: thanks.
[11:27] <fwereade> rogpeppe, np -- I'm not sure we're actually consistent about that but I think it does hold water
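The reasoning above — separate Expose/Unexpose calls leave room for future exposure flags without overloading a single SetExposed(bool) — can be sketched like this. The types and parameter struct are hypothetical stand-ins, not juju-core's actual API:

```go
package main

import "fmt"

// exposeParams leaves room for future flags controlling the style of
// exposure, which is the reason given above for keeping Expose and
// Unexpose separate rather than collapsing them into SetExposed(bool).
type exposeParams struct {
	ServiceName string
	// future: per-endpoint or per-CIDR options could go here without
	// disturbing the Unexpose call at all.
}

type service struct{ exposed bool }

func (s *service) Expose(p exposeParams) { s.exposed = true }
func (s *service) Unexpose()             { s.exposed = false }

func main() {
	var s service
	s.Expose(exposeParams{ServiceName: "wordpress"})
	fmt.Println(s.exposed) // true
	s.Unexpose()
	fmt.Println(s.exposed) // false
}
```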
[11:33] <dimitern> i'm getting: dimitern@kubrik:~/work/juju-core$ lbox submit
[11:33] <dimitern> error: Failed to load data for branch .bzr/cobzr/009-uniter-use-charmurl: resource not found
[11:35] <dimitern> anyone knows how to fix that? when proposing it was ok, and the branch is there in lp, trying to push the branch says no revisions or tags to push
[11:35] <dimitern> i just merged trunk before trying to submit
[11:44]  * dimitern lunch
[11:47] <mgz> not sure, cobzr weirdness by the look of it
[11:47] <mgz> you can always work around by not using lbox submit
[12:01] <fwereade> dimitern, reviewed, ping me if anything's unclear
[12:09] <bac> hi all, i've got a 'go get' vs 'cobzr pull' question:
[12:09] <bac> i am using cobzr, am in the 'master' checkout of juju-core, and see it is on rev 949.  i do a 'go get -u -v launchpad.net/juju-core/...' and it does stuff.  however, juju-core is still shown to be rev 949.  i then do a 'bzr pull lp:juju-core' and it fetches rev 950.  what's going on?  why didn't 'go get' go get it?
[12:18] <dimitern> fwereade: cheers, will do
[12:20] <rogpeppe> bac: hmm, i dunno. i haven't seen that issue (but then again i always use bzr pull to get juju-core; i usually use go get -u for goose though)
[12:20] <rogpeppe> bac: what does the go get command print?
[12:21] <bac> rogpeppe: .../bac/work/src/launchpad.net/juju-core> go get -u -v launchpad.net/juju-core/...
[12:21] <bac> launchpad.net/juju-core (download)
[12:21] <bac> ... (other dependent packages)
[12:21] <rogpeppe> bac: ha, that's not very useful is it?!
[12:22] <bac> nope
[12:22] <dimitern> mgz: help!
[12:22] <mgz> hey dimitern :)
[12:22] <dimitern> mgz: I'm still adding changes to the branch and both bzr and cobzr refuse to push them
[12:23] <dimitern> mgz: bzr info says something weird: http://paste.ubuntu.com/5576144/
[12:23] <rogpeppe> bac: you could try go get -u -x -v and see what it prints then
[12:24] <dimitern> mgz: while previous branches say submit branch: (the same as the push one, not master)
[12:24] <rogpeppe> dimitern: i've seen that before. try bzr push --remember
[12:24] <rogpeppe> dimitern: (istr that didn't work before, but worth trying)
[12:24] <dimitern> rogpeppe: tried that - both with --remember, then with that + --overwrite - still the same
[12:24] <mgz> yeah, you can always respecify where you're trying to push
[12:25] <mgz> ah, I see...
[12:25] <mgz> the launchpad side branch is borked
[12:25] <dimitern> the result still is No new revisions or tags to push.
[12:25] <rogpeppe> ah, that was it!
[12:25] <bac> rogpeppe: i see the problem.  it is just doing a 'bzr pull' but the parent branch is itself, so it doesn't do anything
[12:25] <dimitern> hmm ?!
[12:25] <dimitern> how to fix this? recreate the branch anew?
[12:25] <rogpeppe> dimitern: i think so
[12:26] <dimitern> rogpeppe: ok, i'll try that, bugger...
[12:26] <rogpeppe> dimitern: but mgz will *know* rather than think :-)
[12:26] <dimitern> mgz: :) ?
[12:26] <mgz> `bzr info lp:~dimitern/juju-core/009-uniter-use-charmurl`
[12:26] <mgz> you've accidentally made it a checkout of a branch on your machine
[12:26] <dimitern> mgz: exactly the same result as without the lp:...
[12:27] <rogpeppe> bac: hmm, it's odd that "bzr pull" worked when you tried it... ah, you probably used bzr pull from an explicit target
[12:27] <dimitern> mgz: except Repository branch (format: unnamed)
[12:27] <bac> rogpeppe: yes, manually i did 'bzr pull lp:juju-core'
[12:27] <mgz> the point is, you never want a checkout on a remote box that refers to a local branch: how does launchpad know to get revisions from your local machine? :)
[12:27] <rogpeppe> bac: if you do bzr pull --remember lp:juju-core/trunk, it might fix the issue
[12:27] <bac> rogpeppe: i'm recreating my workspace as i'm in-between tasks now
[12:28] <dimitern> mgz: I always did this - with cobzr - bzr checkout -b branchname - and it does a lightweight checkout from the local master (or whatever other branch i'm on)
[12:28] <mgz> dimitern: I suggest, with 009 checked out in your local branch, you push to lp:~dimitern/+junk/009 so I can get a copy
[12:29] <rogpeppe> dimitern: i think the problem i had might've been caused by interrupting a push.
[12:29] <mgz> then we can fix what exists in your local repo and on launchpad
[12:29] <dimitern> rogpeppe: I *might* have done that
[12:29] <dimitern> mgz: it's getting pushed now
[12:30] <dimitern> rogpeppe: in fact I remember merging trunk and running lbox submit, then remembered I hadn't run the tests and ^C'd it to run them
[12:31] <dimitern> so that's what might have happened
[12:31] <rogpeppe> dimitern: seems plausible. i often get a bzr crash popup when i ^C.
[12:31] <dimitern> rogpeppe: I got one every time too, until I forbade it from showing those
[12:32] <dimitern> mgz: it's there
[12:32] <mgz> ta, branched
[12:33] <mgz> okay, that looks okay
[12:33] <dimitern> so what now?
[12:33] <mgz> the easiest thing to do is probably just delete the existing branch via the web ui and repush to the same location
[12:34] <dimitern> mgz: ok, I'll do it
[12:34] <mgz> we could fix up the danglingness over sftp but it's in your namespace so I can't do it for you
[12:35] <dimitern> mgz: pushed, just running tests and will try to submit again
[12:35] <mgz> yup, that looks good
[12:36] <dimitern> so the push was incomplete and that's why lp shows "this branch has not been pushed yet"
[12:36] <mgz> nope
[12:37] <mgz> cobzr had somehow screwed up the actual remote branch, it was a broken checkout, not a branch at all
[12:37] <dimitern> mgz: aah
[12:38] <dimitern> mgz: arcane stuff.. so in the future if this happens i'll just delete the branch and push it again
[12:38] <mgz> presumably it has some messed up logic when it gets interrupted, bzr itself gets this stuff right
[12:38] <mgz> unless you explicitly tell it to mess around with remote branches
[12:38] <dimitern> which --overwrite does?
[12:38] <mgz> nope
[12:38] <dimitern> btw I noticed a bug on the LP Code page
[12:39] <mgz> you'd need to do something like `bzr switch -d lp:~/+junk/break ~/localbranch` or something
[12:39] <dimitern> when the branch was shown as not pushed yet it said "use bzr --use-existing lp:~..." but there's no such bzr option
[12:41] <mgz> huh, file a bug! https://bugs.launchpad.net/launchpad/+filebug
[12:42] <mgz> I guess it means the --use-existing-dir flag on the branch command?
[12:45] <mgz> hm, and bzr doesn't let you abuse switch to break remote branches, because we require a remote working tree. wonder what fun cobzr does to break that.
[12:47] <dimitern> mgz: I copied it as it was suggested, filed bug 1137716
[12:47] <dimitern> where is the mup?
[12:47] <dimitern> https://bugs.launchpad.net/launchpad/+bug/1137716
[12:49] <mgz> hm, I think mup just logs, and ubot talks bugs?
[12:49] <dimitern> well, before, _mup_ responded when you said bug something
[12:50] <dimitern> _mup_: hey :P
[12:50] <mgz> maybe mup is just on a tea break...
[12:53] <dimitern> maybe if we kick it it'll come to its senses and rejoin
[12:57] <fwereade> rogpeppe, would you remind me of your reasoning for wanting to arbitrarily vary mongo/api ports across state servers within the same environment?
[12:59] <dimitern> mgz: who controls these bots anyway? webops?
[12:59] <rogpeppe> fwereade: a) i think it's trivial to allow b) it's just possible we might want to allow hosting of several different API servers or mongo servers on the same machine.
[13:00] <rogpeppe> fwereade: i think it's reasonable to leave the door open
[13:01] <fwereade> rogpeppe, where are you planning to store this info? I am currently not seeing the triviality at all
[13:04] <fwereade> rogpeppe, and I really don't see the use case for multiple mongo/api servers *for the same env* on the same machine
[13:04] <fwereade> rogpeppe, not that we could do that now anyway, because it seems to be hardcoded in the environs anyway
[13:05] <rogpeppe> fwereade: the info can be stored alongside the server addresses; i don't think there's much difficulty there.
[13:06] <fwereade> rogpeppe, and where does it come from?
[13:07] <rogpeppe> fwereade: machine agents can publish their own addresses in the state. at least, that was my plan for the HA stuff.
[13:09] <fwereade> rogpeppe, sure, that bit makes sense -- but it also needs to be stored in bootstrap state, and stored on (?) the machines in state as we create them, and threaded though the environ interface, etc etc
[13:10] <rogpeppe> fwereade: as do the server addresses, no? i'm not sure that adding a port makes much difference.
[13:11] <fwereade> rogpeppe, well, at the moment we have a one-element list of instances stored in bootstrap state and a hardcoded port in each environ
[13:12] <rogpeppe> fwereade: except the actual state.Info takes a list of (address, port) pairs. and that seems to work ok, and provides flexibility for the future.
[13:14] <fwereade> rogpeppe, at the moment there is sham flexibility in state.Info, and absolutely no accommodation for all the hard work of actually making it flexible
[13:16] <rogpeppe> fwereade: the flexibility in state.Info is genuine. without that we *can't* make anything else flexible. i think of state.Info as the low level thing - individual environments can be (are currently) less flexible.
[13:17] <fwereade> rogpeppe, I'm not proposing a change to state.Info
[13:18] <rogpeppe> fwereade: ok. i thought you were proposing a data structure from which state.Info was always derived that was less flexible.
[13:19] <rogpeppe> fwereade: perhaps i'm not getting your drift at all, in fact...
[13:19] <fwereade> rogpeppe, I don't even care about the Servers struct
[13:19] <rogpeppe> fwereade: ok, so where are you coming from?
[13:21]  * TheMue enjoys now lunch
[13:26] <fwereade> rogpeppe, basically, I'm confused and frustrated because ISTM that you have hardcoded ports across the board and are now telling me it needs to be fixed and it's "trivial" to do so
[13:27] <fwereade> rogpeppe, when, actually, it is not trivial to do so because unpicking environs is melting my brain badly enough already
[13:29] <rogpeppe> fwereade: i don't think i've hardcoded ports across the board, have i?
[13:29] <rogpeppe> fwereade: they're hardcoded in the two providers only
[13:30] <rogpeppe> fwereade: because currently there is only one server
[13:30] <fwereade> rogpeppe, and in each of those providers you use the magic hardcoded ports in two very different places -- bootstrap and StateInfo -- without considering how we're meant to get from one to the other
[13:31] <rogpeppe> fwereade: yes, we definitely do need a better scheme for storing bootstrap info. that's been on the cards for a long time. but it's not something that needs to be fixed right now, is it?
[13:33] <rogpeppe> fwereade: in my head, bootstrap is responsible for a) making sure a server is started and b) letting any clients know how to get to that server. the fact that the port is hardcoded doesn't mean that logic doesn't still apply, or that future changes won't mean that a given provider might not give itself more flexibility.
[13:34] <fwereade> rogpeppe, ISTM that starting a new state server on a non-standard port requires work that hits state, environs, and the provisioner
[13:34] <fwereade> rogpeppe, just as does being able to pick the frickin' series we start a machine with
[13:35] <fwereade> rogpeppe, and I am not inclined to react positively to "we'll figure it out later" sorts of proposals that appear to share the thinking that got us into this situation in the first place
[13:35] <rogpeppe> fwereade: starting a new state server requires work on all those things. the "non-standard" port part isn't much of the problem at all, is it?
[13:36] <rogpeppe> fwereade: i'm not sure i see where your current problem is.... hold on, shall we G+ this?
[13:37] <fwereade> rogpeppe, hm, I need to eat something else -- can we say 20 mins from now?
[13:37] <rogpeppe> fwereade: ok, cool. i'll have a bite too
[13:59] <rogpeppe> fwereade: ready when you are
[14:00] <fwereade> rogpeppe, https://plus.google.com/hangouts/_/44a57aa5ead51164a91eef9e4fac5ac528a8dfe1?authuser=0&hl=en
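The state.Info shape rogpeppe defends above — a list of (address, port) pairs rather than one host plus a hardcoded port — can be sketched as follows. The struct and function names are illustrative, not juju-core's actual definitions:

```go
package main

import (
	"fmt"
	"net"
)

// info mirrors the shape under discussion: a list of "host:port"
// addresses, so each server can carry its own port. The field name is
// a stand-in for whatever state.Info actually uses.
type info struct {
	Addrs []string
}

// ports extracts the port of every server address, showing that a
// per-address port costs nothing extra to carry around even while
// providers hardcode a single value.
func ports(i info) ([]string, error) {
	var out []string
	for _, a := range i.Addrs {
		_, p, err := net.SplitHostPort(a)
		if err != nil {
			return nil, err
		}
		out = append(out, p)
	}
	return out, nil
}

func main() {
	ps, err := ports(info{Addrs: []string{"10.0.0.1:37017", "10.0.0.2:37018"}})
	if err != nil {
		panic(err)
	}
	fmt.Println(ps) // [37017 37018]
}
```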
[14:23] <fwereade> rogpeppe, oh yeah, one other thing: the "state" server cert/key are used for state *and* the api, and this is fine, but that means they're not so much *State*ServerCert as ServerCert -- right?
[14:23] <rogpeppe> fwereade: yeah
[14:24] <rogpeppe> fwereade: i think i am tending towards the idea that the environment should be given the CA and generate its own certificates, BTW
[14:25] <fwereade> rogpeppe, offhand, that makes sense to me
[14:25] <rogpeppe> fwereade: that way we can potentially have certificates that actually reflect the DNS name of the server, which is how they're usually used.
[14:26] <rogpeppe> fwereade: also, having a unique identity for each server enables certain interesting architectural possibilities, i think.
[14:26] <fwereade> rogpeppe, +1 to that
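The scheme sketched above — the environment holds the CA and mints per-server certificates for real DNS names — looks roughly like this with the standard library. Subject names and lifetimes are illustrative only:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

// makeCerts builds a throwaway CA and then a server certificate signed
// by it for a given DNS name, so each server gets a unique identity.
func makeCerts(dnsName string) (ca, srv *x509.Certificate, err error) {
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "demo environment CA"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	// self-signed CA: template and parent are the same certificate
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	if ca, err = x509.ParseCertificate(caDER); err != nil {
		return nil, nil, err
	}

	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: dnsName},
		DNSNames:     []string{dnsName}, // the cert reflects the server's DNS name
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(24 * time.Hour),
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, ca, &srvKey.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	srv, err = x509.ParseCertificate(srvDER)
	return ca, srv, err
}

func main() {
	ca, srv, err := makeCerts("api.example.com")
	if err != nil {
		panic(err)
	}
	fmt.Println(srv.DNSNames, srv.CheckSignatureFrom(ca)) // [api.example.com] <nil>
}
```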
[14:28] <dimitern> I have a nasty 1000 lines panic in cmd/juju on tip
[14:28] <dimitern> jujud actually
[14:29] <dimitern> http://paste.ubuntu.com/5576401/
[14:30] <fwereade> dimitern, what's line 226 of your filter.go?
[14:30] <dimitern>                 case _, ok = <-configw.Changes():
[14:31] <fwereade> dimitern, I think I suggested enough to fix all that with my `var configChanges <-chan state.Settings` lines
[14:31] <dimitern> yep, I reproduced it again
[14:31] <dimitern> fwereade: no, no - I haven't even started my changes yet
[14:31] <dimitern> fwereade: that's fresh trunk at tip
[14:31] <dimitern> I'm filing a bug
[14:32] <fwereade> dimitern, are you totally sure you're running the right code?
[14:32] <dimitern> fwereade: actually, it's not trunk, it's my 010 branch, but i merged trunk just now
[14:33] <dimitern> fwereade: the error is not due to my changes - it's in cmd/jujud
[14:33] <dimitern> fwereade: or is it?.. I'll try trunk
[14:33] <fwereade> dimitern, the error is precisely due to your changes
[14:33] <fwereade> dimitern, make the configChanges suggestion
[14:34] <fwereade> s/make/try/
[14:34] <dimitern> fwereade: oh :) sorry then for the noise, but i'm getting jumpy and running the tests all the time now :)
[14:34] <mgz> :)
[14:34] <fwereade> dimitern, understood, np, better to be jumpy than complacent ;)
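The `var configChanges <-chan state.Settings` suggestion above relies on a Go idiom: receiving from a nil channel blocks forever, so a select case stays disabled until the variable is assigned a real channel. A small sketch, with plain string channels standing in for the watcher types:

```go
package main

import "fmt"

// run shows the nil-channel idiom: the configChanges case can never
// fire while the variable is nil, and is enabled simply by assigning
// a real channel once the watcher exists.
func run() []string {
	events := make(chan string, 1)
	events <- "charm changed"

	var configChanges <-chan string // nil: this case is disabled
	var got []string

	for i := 0; i < 2; i++ {
		select {
		case e := <-events:
			got = append(got, "event: "+e)
			// enable the config case only once we have a real channel,
			// e.g. after the config watcher has been started
			ready := make(chan string, 1)
			ready <- "config updated"
			configChanges = ready
		case c := <-configChanges:
			got = append(got, "config: "+c)
		}
	}
	return got
}

func main() {
	fmt.Println(run()) // [event: charm changed config: config updated]
}
```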
[15:03] <fss> niemeyer: morning :)
[15:10] <niemeyer> fss: Yo!
[15:10] <niemeyer> Morning all
[15:22] <rogpeppe> niemeyer: hiya!
[15:22] <niemeyer> rogpeppe: Yo
[15:22] <rogpeppe> niemeyer: scheduler changes just landed... exciting!
[15:22] <niemeyer> rogpeppe: How're things going?
[15:22] <rogpeppe> niemeyer: pretty good
[15:22] <niemeyer> rogpeppe: Wow, seriously?
[15:22] <rogpeppe> niemeyer: yup
[15:22] <niemeyer> rogpeppe: I was afraid that was going to slip past
[15:22] <rogpeppe> niemeyer: various platforms broken, but should be fixed in time
[15:22] <rogpeppe> negronjl: me too
[15:23] <dimitern> niemeyer: hey!
[15:23] <fss> niemeyer: i've updated the user policy CL
[15:23] <rogpeppe> niemeyer: just about to see if the juju tests pass ok with tip
[15:28] <dimitern> fwereade: ping
[15:28] <rogpeppe> niemeyer: all tests pass, which is good
[15:28] <niemeyer> fss: Superb, I'll do a round and merge them
[15:29] <niemeyer> rogpeppe: I should do a test with mgo
[15:29] <rogpeppe> niemeyer: definitely
[15:29] <niemeyer> rogpeppe: There was visible contention impact in massive workloads
[15:29] <niemeyer> rogpeppe: and the underlying logic was actually coded to benefit from parallelism
[15:29] <rogpeppe> niemeyer: have you got any benchmarks?
[15:29] <fss> niemeyer: thanks
[15:29] <niemeyer> rogpeppe: I had one submitted by a user long ago.. I'll see if I can find it
[15:30] <fss> niemeyer: lunch time, brb :)
[15:30] <rogpeppe> niemeyer: the scheduler looks like a *huge* improvement - lots of web servers will benefit
[15:30] <niemeyer> rogpeppe: Just doing GOMAXPROCS=4 already bumped performance in a relevant way.. I'm excited about what this can mean now
[15:30] <niemeyer> fss: enjoy!
[15:32] <dimitern> fwereade: not sure what you meant by "f.SetCharm() should only return after the charm is stored in state" - it's just selecting on channels, it's not doing the actual setting of the charm
[15:34] <dimitern> fwereade: should I make another <-chan to notify f.SetCharm that the deed is done, and then return?
[15:39]  * TheMue steps out for some time, final tasks to do during office time before leaving tomorrow morning
[15:50] <fwereade> dimitern, if it's not setting the charm then we're definitely racy
[15:51] <fwereade> dimitern, +1
[15:51] <fwereade> dimitern, the alternative is to watch the unit until it gets the right charm url
[15:52] <fwereade> dimitern, don't really have a position on which is better
[15:52] <fwereade> dimitern, regardless, we should not store local uniter state referencing a charm until we have increffed it remotely
[15:53] <dimitern> fwereade: I have a preliminary solution: (not tested yet) http://paste.ubuntu.com/5576623/
[15:54] <dimitern> fwereade: and charmChanged is written to whatever SetCharmURL returns
[15:55] <fwereade> dimitern, feels a bit iffy but I can't quite say why... I might be leaning towards a watch on the unit in deploy, actually
[15:56] <dimitern> fwereade: well, the for loop is ugly obviously
[15:56] <fwereade> dimitern, I'd definitely prefer two selects in series
[15:56] <dimitern> fwereade: but not sure how to do the watch you suggested
[15:57] <dimitern> fwereade: just a loop doing u.Refresh and comparing the curl?
[15:58] <fwereade> dimitern, pretty much -- just refresh the unit every time you get a change, and stop when it matches
[15:58] <fwereade> dimitern, I'm not wild about that either
[15:58] <fwereade> dimitern, hmm
[15:58] <dimitern> fwereade: how about adding a watcher on state.Unit?
[15:59] <dimitern> even worse maybe..
[15:59] <fwereade> dimitern, ah, that was what I meant by watching the unit
[15:59] <fwereade> dimitern, it'd be a select loop, refreshing the unit when we got a change notification for it
[15:59] <fwereade> dimitern, it might be smarter to keep it in the filter
[16:00] <dimitern> fwereade: i hear you, but probably it's best to see this hands on and discuss it in person :)
[16:02] <dimitern> fwereade: even though I have some idea of how watchers work, I'm yet to dig into their internals
[16:02] <fwereade> dimitern, stick with the filter approach for now
[16:03] <fwereade> dimitern, just be sure that you can't deadlock
[16:03] <dimitern> fwereade: so for + 2 selects in SetCharm? otherwise fine for now?
[16:04] <dimitern> fwereade: the filter and uniter run in separate goroutines, right? so long as we don't call SetCharm from the filter it should be fine I think
[16:04] <fwereade> dimitern, yeah, I think it'll be ok
[16:05] <fwereade> dimitern, sorry, not concentrating very well today
[16:05] <dimitern> fwereade: I understand completely :) I'm a bit worried I need to stop soon to get my luggage ready
[16:06] <dimitern> fwereade: so one select for tomb+<-setCharm; one for <-charmChanged ?
[16:06] <fwereade> dimitern, I think you want the tomb in the second one too
[16:07] <dimitern> :) just saw that and was about to say it!
[16:07] <fwereade> dimitern, cool :)
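The pattern agreed above — SetCharm sends the request on one channel, then waits on a second for the acknowledgement, selecting on the tomb in *both* places so shutdown can't deadlock — sketches out like this. A plain `dying` channel stands in for the tomb, and all names are illustrative, not the uniter's real code:

```go
package main

import (
	"errors"
	"fmt"
)

// filter stands in for the uniter's filter goroutine; dying plays the
// role of the tomb's Dying channel.
type filter struct {
	dying        chan struct{}
	setCharm     chan string   // requests from the uniter
	charmChanged chan struct{} // ack: the charm URL has been set in state
}

var errStopping = errors.New("filter stopping")

func newFilter() *filter {
	return &filter{
		dying:        make(chan struct{}),
		setCharm:     make(chan string),
		charmChanged: make(chan struct{}),
	}
}

// SetCharm sends the request and then waits for the acknowledgement,
// watching dying in both selects so a shutdown cannot deadlock it.
func (f *filter) SetCharm(url string) error {
	select {
	case <-f.dying:
		return errStopping
	case f.setCharm <- url:
	}
	select {
	case <-f.dying:
		return errStopping
	case <-f.charmChanged:
		return nil
	}
}

func (f *filter) loop() {
	for {
		select {
		case <-f.dying:
			return
		case url := <-f.setCharm:
			// in the real filter this is where the charm URL would be
			// written to state before acknowledging
			_ = url
			f.charmChanged <- struct{}{}
		}
	}
}

func main() {
	f := newFilter()
	go f.loop()
	fmt.Println(f.SetCharm("cs:quantal/mysql")) // <nil>

	stopped := newFilter() // no loop running: simulates the tomb dying first
	close(stopped.dying)
	fmt.Println(stopped.SetCharm("cs:quantal/mysql")) // filter stopping
}
```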
[16:53] <dimitern> off to pack and get ready
[16:53] <dimitern> see some of you in person on monday :)
[16:53] <dimitern> have nice flights guys
[16:54] <mgz> see you soon dimitern!
[17:42] <niemeyer> fss: ping
[17:44] <niemeyer> fss: First branch is in, waiting for conflict resolution on second
[18:33] <rogpeppe> i'm just off now. see y'all in Atlanta!
[18:34] <rogpeppe> niemeyer: except you... first sprint with no gustavo. :-(
[18:38] <bac> has anyone seen this error before:
[18:38] <bac> juju status
[18:38] <bac> error: cannot log in to admin database: auth fails
[18:39] <bac> this is against a bootstrapped instance on ec2
[18:46] <niemeyer> rogpeppe: Yeah, I didn't even know there was going to be a sprint