[00:05] <davecheney> fwereade_: ping
[01:27] <davecheney> does goose support the lcy02 region ?
[01:42] <bigjools> perhaps wallyworld_ would know that
[01:42] <wallyworld_> yes, it should
[01:43] <wallyworld_> i've only used the lcy01 region myself, but there's no reason lcy02 should not work
[01:43] <davecheney> wallyworld_: i'm trying to work around the lack of public IP's in lcy01
[01:43] <davecheney> how can I use lcy02 ?
[01:43] <wallyworld_> you can delete some i think
[01:43] <davecheney> different keystone url ?
[01:43] <wallyworld_> to return them to the pool
[01:43] <davecheney> wallyworld_: i never had any to start with
[01:44] <wallyworld_> let me check something
[01:46] <wallyworld_> davecheney: the keystone url should be the same, you just change OS_REGION_NAME
[01:46] <wallyworld_> to use a different region
[01:46] <wallyworld_> i normally use "nova floating-ip-list" to see what floating ips are used
[01:46] <wallyworld_> and then delete them so that they can be re-used
[01:47] <davecheney> wallyworld_: what is the config value in juju/environments.yaml that matches OS_REGION_NAME ? region ? (guess)
[01:48] <davecheney> i don't understand what you mean by delete floating ip's
[01:48] <davecheney> i don't own any floating ips to delete
[01:48] <wallyworld_> should be "region" in the yaml
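The mapping wallyworld_ describes might look like this in environments.yaml (a sketch only: the `region` line is the point, the other keys are illustrative and your real config will differ):

```yaml
environments:
  canonistack:
    type: openstack
    # corresponds to $OS_REGION_NAME in your novarc;
    # switch between lcy01 and lcy02 here
    region: lcy02
```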
[01:49] <wallyworld_> i didn't think i owned any floating ips either - i thought there was a pool we could all share
[01:49] <wallyworld_> and we were allowed 2 or 3 each
[01:49] <davecheney> trying lcy02, still don't understand about delete
[01:50] <davecheney> 2013/02/11 12:49:23 JUJU environs/openstack: opening environment "canonistack"
[01:50] <davecheney> 2013/02/11 12:49:23 JUJU environs/openstack: bootstrapping environment "canonistack"
[01:50] <davecheney> 2013/02/11 12:49:28 JUJU environs: searching for tools compatible with version: 1.9.8-quantal-amd64
[01:50] <davecheney> 2013/02/11 12:49:32 JUJU juju bootstrap command failed: cannot start bootstrap instance: cannot allocate a public IP as needed
[01:50] <davecheney> error: cannot start bootstrap instance: cannot allocate a public IP as needed
[01:50] <davecheney> lucky(~) % nova floating-ip-list
[01:50] <davecheney> ^ none
[01:50] <wallyworld_> delete deallocates a floating ip so it can be reused
[01:50] <wallyworld_> nova floating-ip-delete <address>
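The workflow being described, sketched as nova CLI commands (assuming a shell with your canonistack credentials sourced; `<address>` is a placeholder):

```shell
# list the floating IPs currently allocated to your account
nova floating-ip-list

# deallocate one, returning it to the shared pool for reuse
nova floating-ip-delete <address>
```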
[01:51] <wallyworld_> hmmm. i see no floating ips either
[01:51] <davecheney> wallyworld_: i have no floating ips to delete
[01:52] <wallyworld_> i could have sworn that there were some around last week
[01:52] <davecheney> wallyworld_: https://pastebin.canonical.com/84346/
[01:52] <davecheney> get a weird error when the region is not known
[01:54] <wallyworld_> the error makes sense given how the internals work - authentication worked and it gave back a list of endpoints per region. the region you had didn't match any, so there were no endpoints available
[01:54] <wallyworld_> perhaps the error can be reworded
[01:56] <davecheney> wallyworld_: raise a bug ? or leave it for later ?
[01:57] <wallyworld_> a bug would be great
[01:57] <davecheney> no wukks
[01:57] <wallyworld_> i have no idea about the lack of floating ips though
[01:58] <davecheney> i might have to go groveling back to HP and ask to reopen my hp cloud account
[01:58] <wallyworld_> i really think we should get some canonistack ones so we can devel/test easily etc
[01:59] <davecheney> seconded
[01:59] <davecheney> noted in meeting agenda
[01:59] <wallyworld_> \o/
[02:00] <wallyworld_> davecheney: there is a hp account martin has which he has shared the credentials with the blue squad - i can send you those?
[02:03] <davecheney> wallyworld_: please
[02:03] <davecheney> i forget why I closed my account with HP
[02:03] <wallyworld_> ok, will find email and send
[02:03] <davecheney> I think they screwed me on billing or something
[02:04] <wallyworld_> email sent
[02:07] <davecheney> thanks
[02:07] <wallyworld_> np, anytime
[05:59] <jam> wallyworld_: morning!
[05:59] <wallyworld_> hello, mumble?
[05:59] <jam> davecheney: just as an aside, we've been requested to use the AZ3 compute zone for testing, as it is supposed to be the lowest utilization for canonical.
[06:03] <davecheney> jam: right
[06:03] <davecheney> never heard of that one
[06:07] <davecheney> jam: having trouble connecting to that region
[06:07] <jam> davecheney: we're talking about HP now, right?
[06:07] <davecheney> nope
[06:08] <davecheney> i thought you were talking about canonistack
[06:08] <jam> sorry, yeah on HP cloud we want to use az3, on canonistack we *should* work with lcy02, but my understanding is it has been updated to Grizzly, so isn't quite the same as lcy01
[06:09] <davecheney> jam: i cannot bootstrap in any canonistack regions
[06:09] <davecheney> no EYE PEES available
[06:31] <jam> davecheney: right, go-juju now connects directly to mongo, so it needs to have a floating IP because the ssh tunnelling doesn't work.
[06:35] <jam> and lcy01 is running out of *virtual* IPs (the region is limited to a class C, and apparently we are running out of that address space)
[06:38] <dimitern> mornings :)
[06:58] <rogpeppe> goood morning!
[06:59] <rogpeppe> davecheney: hiya
[07:02] <davecheney> hi rogpeppe
[07:02] <davecheney> i'm just stepping out to the shops
[07:02] <davecheney> back in a bit
[07:02] <rogpeppe> davecheney: nice to see your post to golang-nuts
[07:03] <rogpeppe> davecheney: BTW i've got a couple of CLs out for review that i'd be interested in your comments on
[07:03] <davecheney> rogpeppe: sure, i'll have a look after dinner
[07:07] <rogpeppe> https://codereview.appspot.com/6878052/ and https://codereview.appspot.com/7299066/
[11:26] <TheMue> lunchtime
[11:47] <fwereade_> jam, ping
[11:47] <fwereade_> jam2, ping
[11:47] <jam2> fwereade_: pong
[11:48] <fwereade_> jam, so, I was wondering whether you guys had had an opportunity to look over the get/set/upgrade-charm treatment I sent a few days ago
[11:48] <jam> fwereade_: we haven't looked deeply at it, it isn't in our immediate work
[11:49] <fwereade_> jam, ok, cool -- my understanding was that someone would be starting on that front pretty soon, I guess that was inaccurate
[11:49] <jam> fwereade_: it was on the list of "things we could look at next", but there are enough things we're going to be picking on some of the smaller ones first
[11:49] <jam> ease ourselves into it.
[11:51] <fwereade_> jam, ok -- tbh most of the stuff on there is kinda necessary for parity, so it deserves at least a bit of attention sooner than later
[11:52] <fwereade_> jam, and there are definitely some easy bits there that can help you to get started
[11:52] <fwereade_> jam, so perhaps some of us could have a chat about it after the team meeting tomorrow? I might be able to make the simple starting points a little clearer
[11:53] <fwereade_> jam, this is not to say that I don't appreciate the sanity of your strategy :)
[11:58]  * fwereade_ => lunch
[12:07] <niemeyer> Goood morning
[12:08] <dimitern> niemeyer: hiya
[12:09] <rogpeppe> niemeyer: yo!
[12:35] <jam> mgz: poke for great mumbling?
[12:36] <mgz> hey :D
[12:37] <jam> mramm2: are we doing 8UTC tomorrow, or is that not starting until next week?
[12:37] <mramm2> tomorrow
[14:17] <rogpeppe> fwereade_: concurrent rpc requests: https://codereview.appspot.com/7307090
[14:35] <fwereade_> rogpeppe, cheers
[14:36] <rogpeppe> fwereade_: i'm hoping i might have a review of those other api-related branches at some point :-)
[14:36] <rogpeppe> fwereade_: i have a first watcher implementation mostly done BTW
[15:01] <rogpeppe> fwereade_: g+?
[15:01] <fwereade_> rogpeppe, omw
[15:34]  * rogpeppe goes for some lunch.
[17:16] <niemeyer> rogpeppe: ping
[17:16] <rogpeppe> niemeyer: pong
[17:16] <niemeyer> rogpeppe: Heya
[17:16] <rogpeppe> niemeyer: hiya, how's tricks?
[17:16] <niemeyer> rogpeppe: I've added some further hardening on the multi upload branch
[17:16] <rogpeppe> niemeyer: cool
[17:16] <niemeyer> rogpeppe: There was one bug related to retries that was breaking tests
[17:17] <niemeyer> rogpeppe: Do you want to have a look at the delta before submitting: https://codereview.appspot.com/7237068
[17:17] <rogpeppe> niemeyer: ok, will do
[17:17] <niemeyer> rogpeppe: There shouldn't be anything controversial there
[17:17] <niemeyer> rogpeppe: Except for some occasional long-time S3 errors, I'm able to run tests quite well now
[17:19] <rogpeppe> niemeyer: i'm looking at it, BTW
[17:20] <niemeyer> rogpeppe: Thanks!
[17:25] <rogpeppe> niemeyer: where was the bug? in S3.prepare, i presume, but i can't see quite what the problem was, or how the changes have fixed it (the new code looks fine BTW, i'm just interested)
[17:25] <niemeyer> rogpeppe: Retrying would corrupt req.path due to how prepare was implemented
[17:27] <rogpeppe> niemeyer: ah, yes, req.path would keep on growing
[17:27] <niemeyer> rogpeppe: That's right
[17:27] <niemeyer> rogpeppe: Live tests picked it up
[17:27] <rogpeppe> niemeyer: LGTM then
[17:28] <niemeyer> rogpeppe: Awesome, cheers
[18:39] <rogpeppe> that's me for the day
[18:39] <rogpeppe> g'night all
[20:28] <thumper> mramm: is there a calendar that I should be looking at?
[20:28] <thumper> mramm: got an email with an agenda for today
[20:28] <thumper> mramm: but no time specified for the meeting
[20:29] <thumper> mramm: hmm.. also guessing that the date is US local?
[20:29] <thumper> so tomorrow, not today?
[20:33] <mramm> yea
[20:33] <mramm> yea
[20:33] <mramm> I will get you on the calendar invite right away
[20:35] <mramm> you are now on the invite, I'm also adding you to the juju-core calendar which we use for vacations and whatnot
[20:50] <thumper> mramm: ta
[22:15] <hazmat> thumper, welcomes
[22:15] <thumper> hi hazmat
[22:16] <thumper> hazmat: should be interesting :)
[23:09]  * thumper goes in search of more coffee