[00:15] <wwitzel3> davecheney: so I think I need to make two methods in runlistener.go .. where I am getting confused is the JujuRunEndpoint isn't part of apiserver
[00:16] <davecheney> wwitzel3: i'm worried I can't really help with this
[00:16] <davecheney> i dunno anything about facades, apart from the fact that they exist
[00:16] <wwitzel3> same here :)
[00:16] <davecheney> when i think of version i think of the version.Version type
[00:16] <davecheney> and i start to get that twitch
[00:16] <wwitzel3> hah
[00:18] <ericsnow> davecheney: your lucky day: https://github.com/juju/juju/pull/731
[00:18] <ericsnow> davecheney: all about facades :)
[00:19] <davecheney> wallyworld_: can you check the builder
[00:19] <davecheney> 140500043
[00:19] <davecheney> AG	instance	i-ab993640	job_name	github-merge-juju
[00:19] <davecheney> + set +x
[00:19] <davecheney> Starting instance i-ab993640
[00:20] <davecheney> it's jammed here
[00:20] <davecheney> i think our ec2 account may have gone over quota
[00:20] <davecheney> ericsnow: /me looks
[00:21] <wallyworld_> davecheney: i can kill and restart?
[00:22] <davecheney> wallyworld_: sure
[00:22] <davecheney> what does the ec2 console say about that instance ?
[00:23] <jose> davecheney, wallyworld_: I'm having problems with bootstrapping on ec2 too
[00:23] <jose> gets stuck
[00:23] <jose> on 1.20.6
[00:24] <wallyworld_> maybe aws is having issues
[00:24] <jose> I'm updating my PC just in case it's any dependencies (don't think so)
[00:24] <jose> but will try bootstrapping again after that
[00:25] <davecheney> jose: are you using canonicals' account ?
[00:26] <jose> davecheney: nope, personal account. just a community charmer here.
[00:26] <davecheney> interesting
[00:27] <jose> brb
[00:38] <menn0> ericsnow: PTAL at https://github.com/juju/juju/pull/724 if you have a chance
[00:38] <menn0> ericsnow: it turns out there was a very easy way to avoid that race using mgo/txn asserts
[00:38] <ericsnow> menn0: oh good
[00:39] <menn0> ericsnow: so thanks for pushing on this as it made me think harder :)
[00:39] <ericsnow> :)
[00:39] <menn0> ericsnow: the error handling bit is potentially controversial as there's a race there
[00:40] <menn0> ericsnow: but it's a small one and it makes the error returned so much more helpful
[00:40] <menn0> ericsnow: the race is only with the selection of error message as opposed to something functional
[00:41] <ericsnow> menn0: much better
[00:42] <wallyworld_> davecheney: ec2 all good again, your build went through
[00:43] <menn0> davecheney: I need some PRs reviewed, 2 small and already looked at by ericsnow, 1 slightly larger but not crazy
[00:43] <menn0> https://github.com/juju/juju/pull/724
[00:43] <menn0> https://github.com/juju/juju/pull/726
[00:43] <menn0> https://github.com/juju/juju/pull/732
[00:43] <menn0> :)
[00:46] <davecheney> menn0: ok
[00:47] <menn0> davecheney: ta
[00:52] <menn0> how about: "the last upgrade did not complete fully" ?
[00:52] <davecheney> menn0: sure
[00:56] <davecheney> menn0: i'm the wrong person to review these PR's
[00:56] <davecheney> i have no idea what they are doing
[00:57] <menn0> davecheney: ok
[00:57] <davecheney> sorry
[00:57] <davecheney> all i know about your work is it's too subtle for me to seagull review
[00:59] <menn0> davecheney: there's only one, perhaps 2 people who understand it apart from me...
[00:59] <menn0> davecheney: I can explain it to you if that would help
[01:03] <davecheney> menn0: i'm also comfortable that you know what you're doing
[01:03] <davecheney> better than anyone else in this area
[01:03] <davecheney> so if you want LGTM, you've got it
[01:04] <menn0> davecheney: that's a little unsatisfying but I'll take it because I don't want to get stuck on this
[01:33] <axw> sorry wallyworld__, didn't realise you were still reviewing
[01:34] <wallyworld__> axw: i got caught up with 1) a reboot due to network, 2) a critical issue in #juju
[01:34] <wallyworld__> sorry
[01:34] <axw> nps
[01:35] <wallyworld__> axw: i'd love a stress test (should have been done originally) - say 10 go routines all uploading the same data, and then checking the results. what do you think?
[01:36] <axw> wallyworld__: there is one in juju (that's how I got bit :))  - I can write one in this repo
[01:37] <wallyworld__> i think that would be good
[01:51] <davecheney> menn0: ok, i'll have another swing after lunch
[01:52] <davecheney> waigani: are you going to address https://github.com/juju/juju/pull/713
[01:52] <davecheney> and submit it ?
[01:56] <waigani> davecheney: hmph how did I miss that? I'll tidy it up and submit today.
[01:58] <davecheney> i got to the pull requests tab, then filter by my own and open to see which things are outstanding or that CI has nacked
[02:00] <waigani> ah, I totally didn't take in that filter bar.. handy
[02:05] <thumper> menn0: meeting time
[02:05] <wallyworld__> axw: meeting \o/
[02:05] <axw> doh, thanks
[02:06] <jose> wallyworld__, davecheney: btw, I did get to deploy on EC2
[02:06] <jose> it bootstrapped successfully on 1.20.7
[02:06] <wallyworld__> \o/
[02:06] <davecheney> jose: great
[02:06] <davecheney> must have just been a temporary cloud brainfart
[02:06] <jose> yeah, probably
[02:12] <ericsnow> davecheney: FYI, I've addressed your comments on https://github.com/juju/juju/pull/731
[02:13] <wwitzel3> axw: I need you!
[02:13] <axw> wwitzel3: ?
[02:13] <wwitzel3> axw: I have no idea how to go about version the JujuRunEndpoint
[02:14] <wwitzel3> s/version/versioning
[02:14] <davecheney> ericsnow: ta
[02:14] <axw> wwitzel3: I haven't properly versioned an API before :)
[02:14] <axw> I just know it's possible ;)
[02:15] <ericsnow> wwitzel3, axw: isn't jam the API versioning guru?
[02:15] <axw> yup
[02:15] <wwitzel3> axw: thanks
[02:16] <wwitzel3> I was trying to follow examples from apiserver
[02:16] <wwitzel3> but runserver is special
[02:17] <axw> wwitzel3: I think I was pointing at the wrong place when I said do versioning
[02:17] <axw> wwitzel3: it matters more in cmd/juju/run.go
[02:20] <wwitzel3> axw: how do we version that? :)
[02:23] <axw> wwitzel3: I *think* it's a matter of bumping the 0 to 1 in the RegisterStandardFacade call, in apiserver/client/client.go - I don't know how you have two versions though
[02:24] <axw> wwitzel3: maybe two calls, one with 0 and 1: you're better off asking jam
[02:24] <wwitzel3> axw: ok, I think I will ask jam
[02:24] <wwitzel3> haha, yeah ;)
[02:28] <axw> wallyworld__: I don't think we're on the same flight btw, I'm arriving on the 2nd
[02:29] <axw> going to bruges with my brother
[02:29] <wallyworld__> axw: ah yes, i saw the flight number
[02:29] <wallyworld__> and hotel checkin date
[02:32] <perrito666> mmpfh I dont have a decent time windows to go see herge's museum, I hate european business hours
[02:32] <davecheney> wallyworld__: what is happening with the cloudbase reviews
[02:32] <davecheney> they are _still_ in there, clogging up the review queue
[02:33] <wallyworld__> davecheney: i believe wayne and john are helping shepherd them through
[02:33] <perrito666> davecheney: wallyworld__ one of you has the wrong cloud
[02:33] <perrito666> you mean cloudsigma davecheney ?
[02:33] <davecheney> probably
[02:34] <davecheney> i just saw them there, again, lurking
[02:34] <perrito666> davecheney: cloudbase are the windows guys
[02:34] <davecheney> wwitzel3: what's happening with the cloudthingy reviews ?
[02:34] <davecheney> perrito666: true
[02:34] <perrito666> cloudsigma is the pile of patches
[02:35] <davecheney> so, we hired a woman from Russia called Anistasia
[02:35] <davecheney> that's like hiring a man from New Zealand called Tim
[02:36] <perrito666> I am sure no one told her jokes about her name ever...
[02:36] <davecheney> perrito666: the joke wasn't Anistasia ...
[02:36] <perrito666> davecheney: mine was :p
[02:36] <perrito666> I know practically nothing about new zealand :p
[02:36] <davecheney> you are speaking to a man from Australia called Dave
[02:36] <davecheney> it's about as dinkum as they come
[02:38] <perrito666> as I said, in argentina there is only one person we know from Australia and its not one you are particularly proud of
[02:38] <perrito666> :p
[02:38] <davecheney> not Steve Irwin ?
[02:38] <davecheney> Tony Abbott ?
[02:38] <perrito666> ah he was from there too?
[02:38] <perrito666> dunno his real name, the guy from the movies with the large knife
[02:39] <davecheney> Paul Hogan!
[02:39] <perrito666> yup
[02:39] <davecheney> nope, we're proper proud of Hoges
[02:39] <perrito666> :D cool
[02:39] <perrito666> (I believe I just now know his real name)
[02:39] <davecheney> that movie is required watching for all primary school children
[02:39] <natefinch> the guy from the movies with the large knife - lol
[02:39] <perrito666> crocodile dundee thats it
[02:40] <natefinch> roight
[02:40] <davecheney> bonza!
[02:40] <perrito666> wow he was in other movies I saw and did not recognize him
[02:40] <ericsnow> that ain't a knife
[02:40] <natefinch> well, past bedtime for me.  Night all.
[02:41] <perrito666> it's a good thing local names change in time here, most people get named after characters from the current successful soap opera
[02:41] <perrito666> so if I hear a not-so-common name and I can place it I most likely know when you were born
[02:41] <perrito666> natefinch: cheers
[02:43] <davecheney> please, observe, https://www.youtube.com/watch?v=Xn_CPrCS8gs
[02:44] <perrito666> lol
[02:44] <perrito666> I must admit I would like to know australia but the roundtrip takes my whole holiday time :p
[02:46] <perrito666> flights from my city to sydney are between 48 and 62 hours
[02:46] <davecheney> think what it's like to live here
[02:46] <davecheney> you can't go anywhere before having to turn around and come back
[02:47] <perrito666> davecheney: we are freaking close, its just there are no available flights
[02:47] <perrito666> have you noticed how close our countries are?
[02:47] <davecheney> perrito666: i can hear you if the wind blows in the right direction
[02:48] <perrito666> davecheney: its a 20h straight flight
[02:48] <ericsnow> wwitzel3: do you mind if I delete that test review request you made on reviewboard?
[02:48] <perrito666> over the pacific
[02:50] <perrito666> we seem to be on the same latitude diff longitude iirc
[02:50] <perrito666> yet for some reason I need to go to peru and then to canada
[02:51] <davecheney> o_O
[02:51] <perrito666> 11000km
[02:52] <perrito666> oh, wait, apparently for 20k I get a more direct flight :p
[02:52] <perrito666> hehe ok, sleep time, see you all tomorrow morning
[03:26] <menn0> anyone able to review this? https://github.com/juju/juju/pull/732
[03:27]  * menn0 has a lot of PRs to get merged
[03:58] <waigani> thumper: https://github.com/juju/juju/pull/713 land it or trash it?
[03:59] <thumper> waigani: I defer to fwereade, can I get you to check with him?
[03:59] <waigani> thumper: yep will do
[04:29] <wwitzel3> ericsnow: go for it
[04:29] <wwitzel3> ericsnow: I'll torch it
[04:29] <ericsnow> wwitzel3: k
[04:30] <ericsnow> wwitzel3: nate had me worried when he said we were cutting over to ReviewBoard immediately :)
[04:32] <thumper> davecheney: https://github.com/juju/juju/pull/733 when you get a moment
[04:32]  * thumper is signing off to pack
[04:33] <jam> hey wwitzel3, how's the cloudsigma stuff coming ?
[04:40] <wwitzel3> jam: good, I've left all my initial reviews if you want to look over them
[04:40] <wwitzel3> jam: I don't have an API key or anything to do any actual testing of the provider, but if you can get me that, I'd be happy to do it.
[04:40] <jam> wwitzel3: did you hear back on any of them? / have you tried to ping Vitaly directly at all?
[04:41] <wwitzel3> jam: nope
[04:41] <wwitzel3> jam: and no one shows up for the meetings anymore .. are those still happening?
[04:43] <jam> wwitzel3: I think you just need to ping Vitaly, because the meetings are "on hiatus until further notice"
[04:43] <ericsnow> wallyworld_: you think you could spare me a review on https://github.com/juju/juju/pull/736?
[04:43] <ericsnow> wallyworld_:  It's all about a new API facade (something you appear to have fresh in your mind).
[04:43] <jam> but since we've gotten reviews up, we'll want to make sure they know the ball is in their court
[04:43] <wwitzel3> jam: ok
[04:43] <jam> wwitzel3: I have an account, I'm trying to find if I can give you access without giving you my personal stuff
[04:43] <wwitzel3> jam: rgr
[04:43] <wallyworld_> ericsnow: sure, just doing some support in #juju, will look as soon as i can
[04:44] <wwitzel3> jam: oh, since I have you :) I need help
[04:44] <jam> wwitzel3: did you see what config they take? Is it just username + password ?
[04:44] <ericsnow> wallyworld_: you might also take a look at the server side of that I just landed to make sure I got it right
[04:44] <ericsnow> wallyworld_: no worries
[04:44] <ericsnow> wallyworld_: I'm going to bed anyway :)
[04:44] <wwitzel3> jam: yes, username: and password: and region: are the config options
[04:44] <wallyworld_> ok, sleep well
[04:44] <ericsnow> davecheney: thanks for all your reviews
[04:46] <wwitzel3> brb
[04:47] <davecheney> ericsnow: np
[04:47] <jam> wwitzel3: see my PM
[05:04] <wwitzel3> jam: so, for a PR I have up which adds an option to juju run, axw thought I might need to version the cmd/juju API for juju run
[05:04] <wwitzel3> jam: but I couldn't find an existing example of doing that
[05:04] <wwitzel3> jam: I pinged some other people and they all said ping you ;) so you win!
[05:05] <wallyworld_> axw: i didn't use the facade patch because i was just copying across existing code. i did migrate the server side tests to *not* go through the client so i guess i could have done that extra change too
[05:07] <axw> wallyworld_: ah I see
[05:07] <axw> wallyworld_: btw, in this one case I was pointing out that patching could be avoided altogether
[05:07] <axw> if we just don't pass in *api.State
[05:08] <wallyworld_> yeah, i thought about doing that actually, but then fell asleep last night and forgot to pick it up later this morning
[05:08] <wallyworld_> i'll make that change
[05:09] <wallyworld_> axw: i have to enhance the tests in the next branch when placement directives are actually used, so i'll look at using the testing facade then
[05:09] <axw> wallyworld_: SGTM
[05:21] <jam> wwitzel3: so, TheMue is currently working on versioning the Agent API, which will give us an example of how it is to be done. It is unfortunately not trivial, but I do want us to get in the habit of doing it.
[05:22] <jam> wwitzel3: registering a new facade that is a higher version is trivial
[05:22] <jam> wwitzel3: doing the correct testing so that you test we actually expose exactly the v0 and the v1 implementations is the harder part
[07:04] <TheMue> morning
[07:05] <TheMue> ah, reading about optimal testing for versioning, fine
[07:53] <TheMue> jam: btw, regarding the testing in my case we sadly have no "hey this method is new" nor "oh, this method changed". we're only returning one more possible value as a job. so there's a separation between v0 and v1 tests, but it's not the best demonstrator. *sigh*
[07:53] <jam> TheMue: yeah, understandable
[08:00] <TheMue> jam: but we now know the direction, so there soon will be a better one too. and it's enough to document it.
[09:26] <voidspace> trying to remember my stack overflow login :-/
[09:26] <voidspace> I think it was with openid
[09:26] <voidspace> which I don't think they support any more
[09:27] <voidspace> hah, so they do - but my openid provider has gone away
[09:27] <voidspace> luckily I delegate so I can fix that
[09:28] <dimitern> davecheney, jam, tasdomas, others? state changes to allow opening/closing port ranges on units and the openedPortsWatcher in state: https://github.com/juju/juju/pull/739
[09:29] <dimitern> tasdomas, this is a slightly modified version of your PR including only changes in state
[09:29] <davecheney> dimitern: OTP
[09:29] <dimitern> there will be 3 more
[09:29] <davecheney> will have a look soon
[09:29] <dimitern> davecheney, sure, np
[10:00] <tasdomas> dimitern, looking
[10:04] <jam> dimitern: looking as well
[10:04] <dimitern> thanks guys!
[10:05] <tasdomas> davecheney, could you take another look at https://github.com/juju/juju/pull/640 ?
[10:33] <jam> voidspace: should we be disabling the IPv6 test since we know it is flaky until you fix it?
[10:33] <voidspace> jam: yep
[10:34] <jam> voidspace: can you propose that?
[10:34] <voidspace> jam: yep
[10:34] <jam> voidspace: do you say anything but yep?
[10:34] <voidspace> jam: yep
[10:34] <voidspace> ...
[10:34] <jam> :)
[10:34] <jam> I was hoping you'd go for it
[10:34] <voidspace> jam: :-)
[10:34] <voidspace> jam: I was going to disable it as part of this mp
[10:34] <voidspace> jam: but...
[10:35] <voidspace> jam: I'm deferring to mongo support for this - so in the meantime disabling that test is a quicker fix for the instability
[10:35] <jam> I was just thinking about how much wallyworld is happy that things are going better, and I'd like us to support that
[10:35] <voidspace> jam: cool
[10:35] <voidspace> and I agree
[10:36] <voidspace> hmmm... my post to mongodb-user hasn't shown up
[10:36] <voidspace> I know google groups is slow but it was quite a while ago and there's been another post since :-/
[10:37] <jam> voidspace: sounds more like a "you have been put into the moderated queue" sort of change.
[10:37] <jam> voidspace: what were you posting there ?
[10:37] <voidspace> jam: possibly, but it's a google group
[10:37] <voidspace> jam: so I backed out the "wait until I can see the config" change I made yesterday because it just didn't work
[10:38] <voidspace> jam: and the IPv6 test was passing most of the time except when it failed for the known reason
[10:38] <voidspace> jam: so I ran all the tests...
[10:38] <voidspace> jam: and the AddRemoveSet test (non-ipv6) still fails sometimes
[10:38] <voidspace> jam: with "majority of servers must be up"
[10:38] <voidspace> jam: (the ipv6 test *never* fails with that - because it's a bit slower I think)
[10:39] <voidspace> jam: but it means that my CurrentStatus approach for telling when the replicaset is ready
[10:39] <voidspace> jam: *isn't* correct
[10:39] <voidspace> :-(
[10:39] <voidspace> jam: so I'm asking what is the *right* way to tell when the replicaset is ready
[10:39] <voidspace> jam: and I think it's the same issue as the config one - after applying the config the replicaset can report that everything is fine
[10:39] <voidspace> jam: whilst the config change is still propagating
[10:40] <voidspace> jam: and *then* things can become unstable
[10:40] <voidspace> jam: but that's a surmise
[10:40] <voidspace> jam: so I want to ask both questions
[10:40] <voidspace> I posted the first but haven't seen it arrive
[10:40] <jam> voidspace: sounds reasonable to get feedback
[10:41] <voidspace> the second is "is it expected that calling replSetReconfig takes some time
[10:41] <jam> voidspace: you could also try: #mongodb
[10:41] <voidspace> and how can I tell when it's completed
[10:41] <voidspace> yeah, I'm in there now and about to
[10:41] <voidspace> but I figure the americans mostly won't be online yet
[10:43] <jam> voidspace: those lazy bastards!
[10:43] <voidspace> :-)
[10:46] <voidspace> jam: standup?
[10:54] <voidspace> jam:         mgo.SetDebug(true)
[10:54] <voidspace>         mgo.SetLogger(c)
[11:04] <perrito666> morning juju-ers
[11:10] <rogpeppe1> so, this reviewboard thing: do the review comments actually end up in github, or is it an entirely independent comment-storage system?
[11:15] <perrito666> lol, some nerd humor for the am https://pbs.twimg.com/media/BxNQl-LIEAARcVn.jpg
[11:15] <perrito666> rogpeppe1: as I understood they dont end up in gh
[11:16] <perrito666> since this system promises to stop the mail spam I assumed it would not send every comment to gh
[11:16] <rogpeppe1> perrito666: what language is that?
[11:18] <perrito666> rogpeppe1: I dont think its a language, the syntax highlighting hints that its not valid at all
[11:18] <perrito666> It would be fun to have that compile though
[11:18] <perrito666> I am easily amused
[11:19] <rogpeppe1> perrito666: i don't get it at all :-\
[11:19] <voidspace> I've now switched my openid provider to google and I can login to stackoverflow again!
[11:19] <voidspace> rogpeppe1: perrito666: morning
[11:19] <perrito666> voidspace: good morning
[11:19] <rogpeppe1> voidspace: hiya
[11:19] <perrito666> rogpeppe1: do you know the song?
[11:19] <rogpeppe1> perrito666: ah, a song! no, i don't think so.
[11:20] <perrito666> rogpeppe1: google rammstein du hast
[11:20] <perrito666> https://www.youtube.com/watch?v=-gZ25MYwWpM <-- its quite old
[11:20] <perrito666> 97
[11:22] <perrito666> as I understand the song doesn't make much sense when translated and, at least around here, it's like their only known hit
[11:23] <rogpeppe1> perrito666: enjoying it
[11:24] <rogpeppe1> perrito666: ta
[11:24] <perrito666> rogpeppe1: :)
[11:25] <urulama> perrito666, rogpeppe1: i guess you know this already, but it's playing with words, which when spoken change from "hate" to "have"
[11:26] <perrito666> urulama: I do not speak a word of german so actually for me its just a catchy tune
[11:26] <rogpeppe1> urulama: i didn't know that. i don't know any german.
[11:26] <natefinch> ericsnow: what's the status on reviewboard?  Is there anything we need to do before we can use it?
[11:27] <rogpeppe1> nice light listening compared to what i was listening to just now :-)
[11:29] <perrito666> rogpeppe1: you do not seem the extremely heavy metal person
[11:29] <rogpeppe1> perrito666: you'll be surprised!
[11:30] <rogpeppe1> perrito666: was just listening to Meshuggah
[11:30] <perrito666> rogpeppe1: btw, shall I bring a portable backgammon game?
[11:30] <rogpeppe1> perrito666: definitely!
[11:31] <perrito666> rogpeppe1: I have a bunch of magnetic chess games in the house, I assume all of those have bg boards in the back
[11:31] <perrito666> :p
[11:35] <rogpeppe1> perrito666: i think i might have one too. i might be tempted to bring a proper board along, as it's much more pleasant to play with...
[11:35] <perrito666> rogpeppe1: ah, but can you play upside down? :p or in a car?
[11:36] <perrito666> or in outer space
[11:36] <rogpeppe1> perrito666: good point. i will bear that in mind next time i'm on a spaceship with an urgent desire to play backgammon
[11:38] <rogpeppe1> :-)
[12:02] <voidspace> jam: answer in #mongodb
[12:02] <voidspace> jam:  replSetGetStatus reports each members view of the world, not a consolidated one.
[12:02] <voidspace> jam: so it's not an objective status - it's the status according to whichever node we're talking to
[12:02] <voidspace> jam: so really we need to be asking all nodes
[12:03] <voidspace> jam: plus "the mgo driver should be tracking the state of the replica set members by calling isMaster on each to detect when one of them reports "master:true" meaning it can take writes."
[12:04] <voidspace> jam: which is slightly unrelated to the "majority up" issue
[12:06] <cmars> jam, got a few minutes to chat? wanted to revisit the login API PR, https://github.com/juju/juju/pull/392
[12:08] <jam>  voidspace: so is CurrentStatus using IsMaster calls, right?
[12:11] <voidspace> jam: not directly, not our IsMaster
[12:11] <voidspace> jam: it directly calls replSetGetStatus
[12:11] <jam> voidspace: so I wonder if it would be useful to also call ismaster
[12:11] <jam> maybe ?
[12:11] <voidspace> jam: we need to talk to *all* the replica set members
[12:11] <voidspace> jam: as they can all have a different view on the world
[12:12] <voidspace> jam: skot on #mongodb is pretty sure that *mgo* is already doing this
[12:12] <voidspace> for health monitoring
[12:12] <jam> voidspace: so it *is* talking to all of them, that doesn't mean it is polling the replicaSet data for all of them
[12:12] <jam> voidspace: it uses ismaster calls
[12:12] <jam> see cluster.go
[12:14] <voidspace> jam: we can poll all of them for data and wait until they converge
[12:14] <voidspace> jam: but skot thought that waiting until one reports isMaster could be enough
[12:14] <voidspace> jam: which mgo is already doing from the sounds of it
[12:15] <voidspace> although that *may* not be enough for config changes (even if it's enough for writes)
[12:15] <voidspace> I will see
[12:15] <voidspace> testing this could be fun
[12:15] <voidspace> jam: I'm going to propose a branch disabling the ipv6 test and then come back to this after lunch
[13:10] <natefinch> ericsnow: you around?
[13:37] <mattyw> anyone else didn't know about gofmt -s. Or is it just me?
[13:38] <natefinch> mattyw: I knew about it but never used it... I'd be interested to see how it changes things.
[13:38] <mattyw> natefinch, one example: https://github.com/juju/juju/pull/676#discussion_r17340794
[13:41] <ericsnow> natefinch: just got on
[13:42]  * perrito666 aliased gofmt as omgf because he mistyped that too often
[13:42] <natefinch> mattyw: ahh that's cool
[13:42] <natefinch> ericsnow: cool.... so, what do you think is left with reviewboard before we can go live?
[13:43] <ericsnow> natefinch: just the things I listed in that email (SSL, backups, redundancy)
[13:44] <natefinch> ericsnow: yeah, but those aren't really needed.... are we worried someone's going to spoof our reviewboard site and make us review different code? :)  Also - it doesn't actually hold the code or anything, so it's not like backup and redundancy are really super critical.  Redundancy would be nice, but I don't think we need to gate on it.
[13:46] <rogpeppe1> natefinch: i think the reviews are worth backing up - the context of a change is sometimes as important as the change itself
[13:46] <ericsnow> natefinch: I don't think SSL or backups are going to be a super heavy lift and will be worth taking an extra few days to get them up
[13:46] <dimitern> jam, tasdomas, if you have a minute, can you take a final look at https://github.com/juju/juju/pull/739/ ?
[13:47] <dimitern> that is, if jam's still around
[13:47] <rogpeppe1> natefinch: BTW one thing i really liked about the old codereview system is that you could look at a commit and it linked directly to the review (with all its steps visible). is that going to be the case with reviewboard?
[13:47] <rogpeppe1> natefinch: (i'm still finding that incredibly useful for finding out why some piece of old code is the way it is)
[13:48] <rogpeppe1> ericsnow: ^
[13:48] <natefinch> rogpeppe1: for a level of indirection yes.  The PR should be updated with a link to the review on reviewboard.
[13:48] <natefinch> (currently this will be manual)
[13:48] <rogpeppe1> natefinch: we should make that automatic otherwise noone will do it
[13:49] <ericsnow> rogpeppe1, natefinch: for now everyone will be manually adding a link to the review in a PR comment
[13:49] <ericsnow> rogpeppe1: we will work on automating that which is a reasonably tractable problem
[13:49] <rogpeppe1> ericsnow, natefinch: the other thing that i'm really hoping reviewboard offers, but can't easily work it out from the site is: if i make a comment and someone makes a change, can i easily see the change that's been made in response to my comment?
[13:50] <dimitern> natefinch, or perhaps you can have a look instead? https://github.com/juju/juju/pull/739/
[13:50] <rogpeppe1> that's something i miss every time i do a github review
[13:51] <ericsnow> rogpeppe1: more or less; each update to the review request shows up as a selectable link and you can easily diff between versions of the review request in the web UI
[13:51] <natefinch> ericsnow: if you think you can get backup and SSL done in a few days, that's cool... but do we actually need to gate on it?  Can you do it in flight, or will it require a wipe to deploy?
[13:51] <ericsnow> rogpeppe1: so if someone updates a change (even via rebase) you can see what they changed
[13:52] <rogpeppe1> ericsnow: cool
[13:52] <rogpeppe1> ericsnow: i guess there are no reviews up there where there have been multiple changes made in response to review comments, so it's difficult to see
[13:53] <ericsnow> natefinch: in flight, but I want to wipe before we officially switch over to clear out the testing that people have been doing
[13:53] <ericsnow> rogpeppe1: try it out :)
[13:54] <ericsnow> rogpeppe1: I did it but have since removed that review request
[13:54] <natefinch> ericsnow: then let's wipe and switch over.  No backups or SSL for a few days seems like no big deal
[13:55] <ericsnow> natefinch: I at least want the SSL done since it will change the URL
[13:55] <rogpeppe1> ericsnow: another thing: for large reviews, is it possible to get a file-by-file summary without seeing all the diffs in the same page?
[13:55] <natefinch> ericsnow: can't we auto-forward http to https?
[13:55] <ericsnow> natefinch: then folks don't have to update their bookmarks, etc.
[13:55] <ericsnow> natefinch: I guess
[13:56] <natefinch> ericsnow: we should be doing that anyway
[13:56] <ericsnow> natefinch: also I was hoping we could get a few more days of people trying reviewboard out before switching over
[13:56] <natefinch> ericsnow: meh. No one's trying it now, and they won't until we force them to.  I trust you and wayne have tried it out and not found anything hugely lacking.
[13:57] <natefinch> ericsnow: and we can always go back if there's some deal breaker we fin
[13:57] <natefinch> find
[13:58] <ericsnow> natefinch: yeah, it's not a huge thing; I just wanted to minimize the possible disruption so I figured waiting until Monday was the best balance between that and switching over ASAP
[13:58] <tasdomas> dimitern, LGTM
[13:59] <tasdomas> dimitern, have you tried bootstrapping an env with that code?
[13:59] <dimitern> tasdomas, hmm, good point, will do now
[13:59] <dimitern> tasdomas, thanks
[14:00] <tasdomas> dimitern, it's probably best to try, just in case some code path is not actually tested
[14:00] <tasdomas> dimitern, I've also found the mongodb charm to be a good test case
[14:01] <natefinch> ericsnow: I hate waiting.  But if you think it's best to wait, I'll trust your decision.  How sure are you about being ready Monday?
[14:02] <ericsnow> natefinch: either way I think Monday is a good goal; I'll at least focus on getting SSL sorted out
[14:03] <ericsnow> ...by then
[14:03] <ericsnow> natefinch, wwitzel3, perrito666: standup
[14:15] <perrito666> natefinch: standup?
[14:44] <voidspace> jam: I believe you're reviewing today
[14:44] <voidspace> jam: tricky one for you
[14:44] <voidspace> https://github.com/juju/juju/pull/740
[14:59] <natefinch> voidspace: LGTM'd
[14:59] <voidspace> natefinch: thanks
[15:01] <voidspace> niemeyer: ping
[15:15] <perrito666> natefinch: cool, guess which country is the first one in the list of not supported countries for hangouts calls :p
[15:16] <natefinch> perrito666: only because your country starts with A
[15:17] <perrito666> we are the only country with an A in that list, such an honor
[15:17] <perrito666> :p
[15:18] <natefinch> perrito666: well, that's why you're first :)
[15:36] <mattyw> folks - do we have a way of mocking out the api clients yet?
[16:05] <perrito666> ok, I am grepping a lot here, does anyone know what is the portion of code that generates the actual path to deploy a charm?
[16:07] <ericsnow> jam: could you give me a review for https://github.com/juju/juju/pull/736?
[16:13] <niemeyer> voidspace: Hey, just read your email
[16:18] <voidspace> niemeyer: just seen your reply
[16:19] <niemeyer> voidspace: Commented on the ticket as well
[16:19] <voidspace> niemeyer: I saw
[16:19] <voidspace> niemeyer: the issue is that we setup the replicaset config through session.Run(bson)
[16:20] <voidspace> niemeyer: and the addresses have to be serialised in the bson with the incorrect form
[16:20] <voidspace> I believe
[16:20] <niemeyer> voidspace: and how's that an issue?
[16:20] <voidspace> niemeyer: ah, I think I misunderstood your reply
[16:21] <voidspace> niemeyer: what you're saying sounds exactly like what John is saying
[16:21] <voidspace> niemeyer: I'm fixing our functions so that we only use correct addresses outside of mgo / replicaset (which is intended to go into mgo I believe)
[16:22] <voidspace> niemeyer: so we (I) need to find the places in cluster.go where we have addresses in the wrong format and fix them
[16:25] <niemeyer> voidspace: IIRC, there's just one way to get server addresses from the MongoDB side.. I can easily fix it
[16:26] <voidspace> niemeyer: ok
[16:26] <niemeyer> voidspace: One of the things we have to understand, though, is whether those addresses _always_ have a port or not
[16:26] <voidspace> niemeyer: we *always* send them with a port
[16:27] <niemeyer> voidspace: That's not the same thing, though
[16:27] <voidspace> niemeyer: heh, right
[16:33] <niemeyer> voidspace: Do you have an ipv6 deployment at hand?
[16:36] <alexisb> perrito666, I will be a bit late to our 1x1 (~15 mins) which means wwitzel3 I will also be late to our 1x1
[16:37] <alexisb> will ping you guys when I am ready
[16:37] <voidspace> niemeyer: no
[16:37] <voidspace> niemeyer: I use replicaset/replicaset_test.go which uses the ipv6 addresses
[16:37] <voidspace> niemeyer: and fails sometimes
[16:38] <niemeyer> voidspace: Why sometimes? The problem described is deterministic, right?
[16:38] <voidspace> niemeyer: right, but what doesn't seem to be deterministic is whether or not the replicaset operations cause a syncServers
[16:39] <voidspace> niemeyer: if the test always failed we wouldn't have committed it
[16:39] <voidspace> niemeyer: it's probably timing related as it fails more often in CI, which runs on a slower system
[16:39] <niemeyer> voidspace: I have no idea about what the test does, but there's something fishy going on there
[16:40] <voidspace> niemeyer: it is odd
[16:40] <niemeyer> voidspace: This problem is deterministic.. either the address is parsed, or it is not
[16:40] <voidspace> niemeyer: yep, I agree
[16:40] <voidspace> niemeyer: I added an extra log line to mgo - the actual error from net.DialTimeout
[16:40] <voidspace> and when the test fails, that log line was showing "too many colons in address"
[16:41] <voidspace> I am *sometimes* seeing this failure in the non-ipv6 version of the test
[16:41] <voidspace>     &mgo.QueryError{Code:13144, Message:"exception: need most members up to reconfigure, not ok : localhost:36246", Assertion:false} ("exception: need most members up to reconfigure, not ok : localhost:36246")
[16:41] <voidspace> indicating members that are in the replicaset are reporting as being down
[16:41] <voidspace> (this is not as frequent a failure)
[16:42] <voidspace> this may be a total red herring
[16:42] <voidspace> and a different issue
[16:42] <voidspace> brb, need coffee
[16:43] <niemeyer> voidspace: When the ipv6 test does fail on some system, does it fail consistently?
[16:45] <voidspace> niemeyer: nope, I run it a few times and it fails "sometimes"
[16:45] <voidspace> CI sees the same thing - it fails, re-run and it passes
[16:47] <niemeyer> voidspace: Ok, there's definitely something else at play than the addresses then
[16:47] <perrito666> alexisb: np just ping me on irc
[16:52] <voidspace> niemeyer: when it dies, it dies with "no reachable servers" - which is a message from mgo when all the net.DialTimeout calls fail
[16:52] <voidspace> niemeyer: and logging shows that the actual error from net.DialTimeout is "too many colons in address"
[16:52] <niemeyer> voidspace: Not quite.. it's a message when it cannot find a reachable server
[16:52] <voidspace> niemeyer: so why it *passes* sometimes I can't tell you
[16:53] <voidspace> niemeyer: but when it fails, it *is* from the address
[16:53] <niemeyer> voidspace: Which can happen for any other reason too
[16:53] <voidspace> but I added logging to show the actual error
[16:53] <voidspace> as mgo discards the actual error
[16:53] <niemeyer> voidspace: Ok, so when it does pass, what addresses is it looking at, and why have they changed since the last run?
[16:54] <voidspace> niemeyer: I can add logging with the address and see what it's using when it passes
[16:54] <voidspace> shortly
[16:54] <voidspace> that will be interesting
[16:54] <niemeyer> voidspace: That'd be appreciated, thanks!
[16:55] <niemeyer> voidspace: On my side, I've set up a RS on ipv6 and can confirm the addresses always come in the bad format with a port, from the command that mgo uses to obtain them
[16:55] <niemeyer> Even if no port was specified
[16:55] <voidspace> cool
[16:56] <niemeyer> voidspace: Also interestingly, ::1 does not exercise the bug
[16:56] <niemeyer> voidspace: Because mongo converts it into the local hostname
[16:56] <niemeyer> voidspace: Which is why I asked about consistency.. it might explain why it fails in some cases, but it cannot explain why it would alternate in consecutive runs in the same system
[17:02] <mattyw> night folks
[17:30] <wwitzel3> alexisb: ready when you are :)
[17:31] <perrito666> wwitzel3:
 16:30:09> perrito666, I will be a bit late to our 1x1 (~15 mins) which means wwitzel3 I will also be late to our 1x1
 16:30:16> will ping you guys when Iam ready
[17:31] <alexisb> perrito666, yeah
[17:32] <alexisb> yeah
[17:32] <alexisb> I know
[17:32] <alexisb> still on
[17:32] <perrito666> alexisb: no hurry, I was just answering wwitzel3 in case you were not here
[17:32] <wwitzel3> ahh I didn't get the highlight with it in sentence like that :/
[17:32] <wwitzel3> need to fix that
[17:33] <wwitzel3> thanks perrito666
[17:33] <perrito666> wwitzel3: heh, I have a regex that highlights all possible mentions of my name, very useful
[17:34] <perrito666> wwitzel3: btw, still waiting for go to definition
[17:34] <wwitzel3> oh right
[17:35] <wwitzel3> perrito666, ericsnow: https://github.com/dgryski/vim-godef
[17:35] <perrito666> wwitzel3: ta
[17:36] <ericsnow> wwitzel3: nice
[17:36] <natefinch> perrito666, wwitzel3: I think even the author of that plugin now uses https://github.com/fatih/vim-go
[17:37] <perrito666> why is it that everyone uses screenshots of vi on Mac for docs
[17:37] <perrito666> vi does not look even close to that on other OSes
[17:37] <wwitzel3> perrito666: mine does :)
[17:38] <wwitzel3> perrito666: different chrome, but mine looks almost identical shading and color scheme
[17:38] <perrito666> wwitzel3: nah I doubt you have such a nice font rendering
[17:38] <wwitzel3> perrito666: I used Ubuntu Mono under Mac, so I didn't notice a difference
[17:39] <perrito666> wwitzel3: ah, that might be the reason, but on good displays the difference is really interesting, the fonts are very nice. I currently use fisa-vim-config and I am really happy with it, to be honest
[17:39] <alexisb> ok perrito666 I am ready and joining the hang out
[17:40] <wwitzel3> perrito666: I use wwitzel3-vim-config ;)
[17:45] <wwitzel3> natefinch: thank you
[17:45] <wwitzel3> natefinch: was able to remove 3 bundles and replace them with that one
[17:46] <natefinch> wwitzel3: nice, yeah, he's a twitter-friend and I constantly see people saying how awesome vim-go is.  It's almost enough to make me want to try vim.   Almost.
[17:46] <rick_h_> vim ftw!
[17:46] <rick_h_> vim your zsh and double the win!
[17:47] <wwitzel3> rick_h_: I still haven't managed to care enough to try zsh yet, but then again, I use gnome-panel under xmonad :P
[17:47] <rick_h_> wwitzel3: oh man, remind me to blow your mind in brussels, especially if you're a vim person
[17:47] <katco> emacs eclipses all!
[17:48] <rick_h_> you use emacs with eclipse? You're crazy :P
[17:48] <katco> haha
[17:48] <wwitzel3> katco: it would be cool to have lisp generate all my code for me ..
[17:48] <rick_h_> or you mean emacs was out eclipsing eclipse before java was cool?
[17:48] <katco> wwitzel3: i am actually a very complicated lisp macro.
[17:48] <rick_h_> oh the jokes never die
[17:48] <rick_h_> first I get wwitzel3 into a geekdesk, next up, zsh
[17:49] <rick_h_> like water pollution the ideas spread :P
[17:54] <wwitzel3> rick_h_: to be fair, the geekdesk wasn't a hard sell, I'd been eyeballing them for a couple years and standing for five, but you did tip it with the frame only
[17:55] <wwitzel3> rick_h_: as for zsh .. not sure how you're going to get me to care about what shell I use :)
[17:55] <rick_h_> wwitzel3: oh I'll do it, I've done it before and I'll do it again :)
[18:15] <alexisb> alright wwitzel3 I am ready and joining the hangout
[18:15] <perrito666> bbl bike time
[18:17] <wwitzel3> alexisb: yep, ok
[18:34] <perrito666> hey, is anyone here going to openstack summit in paris and knows the actual difference between full access and keynote + expo
[18:35] <perrito666> ?
[18:45] <natefinch> perrito666: no to both
[18:45] <perrito666> natefinch: thanks for that default answer :p
[18:45] <natefinch> perrito666: heh figured that was better than no answers :)
[18:46] <perrito666> natefinch: yup
[18:47] <perrito666> well it's a pretty steep difference. I surely miss things like the Plone conf, those were cheap, since no one but the exact same people wanted to go to those year after year
[18:48] <natefinch> perrito666: lol plone
[18:48] <perrito666> natefinch: we all have a dark past
[18:49] <perrito666> I believe people are still using that
[18:50] <perrito666> I've heard that if you go deep enough into the abstraction layers you get to narnia
[18:50] <natefinch> perrito666: haha
[18:54] <perrito666> natefinch: hazmat was there too, I saw him
[18:54] <natefinch> perrito666: I have heard he lives in Narnia
[18:55] <perrito666> natefinch: yeah, he also did a payment system for narnia
[18:55] <perrito666> which I had to maintain for years :p
[18:55] <natefinch> lol
[20:36] <sebas5384> hello! i'm looking for a documentation of the juju socket api
[20:37] <sebas5384> does somebody have a favorite one?
[20:39] <sebas5384> i was looking at this https://github.com/Ubuntu-Solutions-Engineering/macumba/blob/master/macumba/__init__.py
[21:25] <wallyworld_> sinzui: i have code to be able to read tools from /v2 - do you know the timeline for publishing metadata to that path?
[21:26] <sinzui> wallyworld_, I don't. We have some issues sorting out mirrors and syncing
[21:26] <wallyworld_> ok, i'll just have it queued up, ready to go when we are ready
[21:30] <wallyworld_> sinzui: also, did we want to use paths like http://streams.canonical.com/juju/tools/<tag>/streams/v1, where <tag> is released, proposed, testing etc
[21:30] <wallyworld_> i like that better - keep tools as top level
[21:30] <sinzui> wallyworld_, I like that suggestion
[21:31] <wallyworld_> sinzui: ok, i can add a config setting to allow tag to be set. i should say that it will end in v2 also
[21:31] <sinzui> wallyworld_, lets not rush
[21:31] <wallyworld_> yeap, just thinking out loud
[21:32] <wallyworld_> first plan is to get v2 in place, so we can release 1.21
[21:32] <wallyworld_> sinzui: did we want to publish to juju/tools/released/streams/v2 to start with
[21:33] <wallyworld_> if we are changing the path anyway to get 1.21 out
[21:33] <sinzui> wallyworld_, CI's release process is not arbitrary. Every version we test has 5 streams published. I need to think about how we find packages, make tools, store them temporarily or permanently, then ensure syncing only does what we intend, particularly when my computer, CI and streams need to stay in sync
[21:34] <wallyworld_> yep. i'm just trying to ensure we have common agreement on what juju needs to do first up to unblock 1.21 so i can have things ready when needed
[21:35] <sinzui> wallyworld_, I do like your <tag> suggestion. I never liked the sibling /testing/ we use and I need /proposed/ too
[21:35] <wallyworld_> i don't plan on landing anything until all the pieces are lined up
[21:36] <wallyworld_> ok, i'll make sure juju, when required to, can be flipped to get tools from juju/tools/released/streams/v2 to start with
[21:37] <sinzui> wallyworld_, If we publish separate streams, or ones with tags, we don't need v2
[21:37] <wallyworld_> that is true
[21:37] <voidspace> niemeyer:
[21:37] <voidspace> niemeyer: ping
[21:38] <sinzui> wallyworld_, I like tags because it looks like a single tree to maintain and sync
[21:38] <wallyworld_> sinzui: ok, i'll stick with v1, but with a default "released" tag in the path
[21:38] <wallyworld_> agreed
[21:38] <wallyworld_> sinzui: it also aligns with how image metadata is sourced
[21:39] <wallyworld_> there's already an image-stream setting
[21:39] <wallyworld_> we'll have a tools-stream also i imagine
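The path scheme wallyworld_ proposes above (tools kept top-level, with a stream tag inserted) can be sketched with a small Go helper; the function name and base URL handling are illustrative, not Juju's actual code:

```go
package main

import "fmt"

// streamURL sketches the tools-metadata path layout discussed
// above: juju/tools stays top-level and a stream tag (released,
// proposed, testing, devel) selects which metadata to read.
func streamURL(base, stream string) string {
	return fmt.Sprintf("%s/juju/tools/%s/streams/v1", base, stream)
}

func main() {
	for _, s := range []string{"released", "proposed", "testing"} {
		fmt.Println(streamURL("http://streams.canonical.com", s))
	}
}
```

Under this layout all tools tarballs can live in one place, with each stream's metadata simply pointing at a different subset, which is the "single tree to maintain and sync" property sinzui likes.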
[21:39] <niemeyer> voidspace: Hey
[21:39] <voidspace> niemeyer: hey
[21:39] <voidspace> niemeyer: so when I log the address and have a successful run
[21:39] <voidspace> niemeyer: I see a bunch of failures due to the ipv6 addresses
[21:39] <voidspace> niemeyer: but a single address "localhost:port"
[21:39] <voidspace> niemeyer: possibly the root server
[21:40] <voidspace> niemeyer: and as that can be contacted successfully it passes
[21:40] <sinzui> wallyworld_, abentley reviewed my "proposed" branch and pointed out that I failed to take into account diverging sets of tools for released, proposed, and testing.
[21:40] <voidspace> niemeyer: why we sometimes *don't* see that I don't know - it hasn't failed in the last few runs
[21:41] <niemeyer> voidspace: Ok, so the question remains
[21:41] <wallyworld_> sinzui: i'm not sure i follow - do you have a quick example?
[21:42] <voidspace> niemeyer: yeah, I'm digging in a bit - I have a talk to work on too so I may have to continue tomorrow
[21:43] <sinzui> wallyworld_, when we choose to assemble tools, we need to know their purpose to put them in the right place. The idea of proposed is that it will contain everything we intend to publish to released, which includes things we won't release because of defects
[21:43] <niemeyer> voidspace: No problem
[21:43] <niemeyer> voidspace: The described problem needs to be fixed no matter what
[21:43] <sinzui> wallyworld_, a devel stream might only contain the last stable and recent devel releases
[21:44] <voidspace> niemeyer: it's worrying if it fails because sometimes that *one* server is really unreachable
[21:44] <voidspace> niemeyer: and we have unexplained unreachable servers
[21:44] <voidspace> so it's worth pursuing I think
[21:45] <sinzui> wallyworld_, so the official "released" tools will be different from proposed, by some percentage, and if we do a devel set of streams, it will be very divergent
[21:45] <niemeyer> voidspace: Right exactly
[21:45] <wallyworld_> sinzui: yes, there will be different metadata for each <tag>. all the tools tarballs could be in the one path, pointed to by different metadata
[21:45] <sinzui> wallyworld_, testing streams will continue to be released plus the version we are testing
[21:45] <niemeyer> voidspace: It's worth debugging not because that one bug isn't a bug.. it definitely is and must be fixed. It's worth debugging because there might be a _different_ issue.
[21:46] <voidspace> niemeyer: agreed
[21:46] <sinzui> wallyworld_, okay, but how does "juju metadata generate-tools" know which are proposed, released, devel, testing, when they are all in the same path (if you mean tools/released)
[21:48] <sinzui> wallyworld_, the release scripts are making tools, placing the history of the tools into a directory, and running the metadata command; now it needs to do this for many directories, or maybe you mean juju will know about each purpose and make the metadata for all of them
[21:49] <sinzui> wallyworld_, the simplest change is to not change juju, only we...Juju QA/Canonical...need specialised streams
[21:50] <sinzui> wallyworld_, Juju can do nothing, and the assemble-public-tools learns about purpose to make a tree with several streams
[21:51] <sinzui> wallyworld_, publish-public-tools needs to change, but maybe it can be simplified to sync all streams instead of one.
[22:26] <sinzui> wallyworld_, I would prefer Juju devel to know exactly where to pick up streams rather than me telling people to set tools-metadata-url. But I will always need to do that with proposed streams because the client/tools will be copied to released. Devel knows it is devel so we can make it look in /devel/tools/ or tools/devel/. For users to test upgrades from stable, they need to set tools-metadata-url anyway.
[22:55] <davecheney> waigani: menn0 email standup today ?
[22:55] <menn0> davecheney, waigani: I'm happy to
[22:55] <waigani> davecheney: yep
[22:55] <davecheney> kk
[23:08] <davecheney> [LOG] 0:00.467 INFO juju.apiserver [79] user-admin@local API connection terminated after 180.030205ms
[23:08] <davecheney> [LOG] 0:00.468 INFO juju.apiserver [7A] unit-wordpress-0 API connection terminated after 35.127121ms
[23:08] <davecheney> i like the way this looks
[23:17] <menn0> davecheney: looks good to me too
[23:19] <menn0> davecheney: can you do a meta-review of https://github.com/juju/juju/pull/738 pls
[23:33] <menn0> davecheney: thanks for the review