[04:00] <wallyworld__> arosales: hi, did you try it again?
[04:03] <arosales> wallyworld__, I am actually giving it a go :-) but my previous juju destroy-environment didn't exit properly, so bootstrapping to that environment is giving me issues.
[04:03] <arosales> wallyworld__, thanks for the instructions, btw
[04:03] <arosales> I could just create a new environment . . .
[04:03] <wallyworld__> np. did you get an error code from the failed destroy?
[04:03] <wallyworld__> just manually delete your bucket etc using hp cloud console
[04:04] <wallyworld__> or try destroy again, does it work?
[04:06] <arosales> wallyworld__, http://pastebin.ubuntu.com/5606834/
[04:06] <arosales> I am not sure if this is a result of a defunct previous environment or not . . .
[04:06]  * arosales will try with a different environment 
[04:07] <wallyworld__> never seen that error before, sounds like provider-state is fooked. try deleting it and the container it lives in manually
[04:07] <wallyworld__> could very well be related to previous env destroy
[04:08] <arosales> wallyworld__, new environment also fails: http://pastebin.ubuntu.com/5606837/
[04:09] <arosales> I'll delete them manually via the hp console.
[04:09] <wallyworld__> looks like control bucket url is wrong
[04:10] <wallyworld__> yeah, just delete everything - compute nodes, control bucket - and try again
[04:10] <arosales> I have     public-bucket-url: https://region-a.geo-1.objects.hpcloudsvc.com/v1/AUTH_365e2f1e-aea0-44c5-93f4-0fe2eb1f2bcf
[04:15] <arosales> wallyworld__, manually destroyed environment
[04:15] <arosales> redeployed and got http://pastebin.ubuntu.com/5606848/
[04:15] <wallyworld__> i can't see that public bucket content
[04:15] <wallyworld__> well now it can see your tools etc, so that's a win
[04:16] <arosales> I am actually using your bucket
[04:16] <wallyworld__> um, you may have left over security groups from previous runs
[04:16] <wallyworld__> you can list those and delete
[04:16] <wallyworld__> nova secgroup-list i think
[04:17] <wallyworld__> maybe you can see them from hp cloud console, not sure
[04:17] <wallyworld__> yes, you can
[04:18] <wallyworld__> i had a look with my creds, and there's a few there, maybe 20
[04:18] <wallyworld__> not sure what the limit is
[04:19] <wallyworld__> did you have many groups created?
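For reference, a rough shell sketch of the cleanup being described with the nova CLI; the juju-* group names below are hypothetical examples of what juju typically creates:

    nova secgroup-list                   # list all security groups in the tenant
    nova secgroup-delete juju-hpcloud    # environment-wide group (hypothetical name)
    nova secgroup-delete juju-hpcloud-0  # per-machine group (hypothetical name)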
[04:21] <arosales> deleted all juju created security groups
[04:21] <arosales> re-bootstrapped
[04:22] <arosales> http://pastebin.ubuntu.com/5606856/
[04:22] <arosales> still getting a resource access error when using your control bucket. .  .
[04:23] <arosales> wallyworld__, perhaps I have to have my own control bucket?
[04:23] <wallyworld__> yes definitely
[04:23] <wallyworld__> you need credentials to read/write to it
[04:23] <wallyworld__> but my public bucket can be shared
[04:24] <arosales> oh no worries.
[04:24] <arosales> that was my mistake, I thought it was public. I should have checked via the browser first.
[04:24] <wallyworld__> your control bucket url looks wrong though
[04:24] <wallyworld__> it seems to be missing a bit
[04:24] <arosales> I was using the one you gave me via email.
[04:24] <wallyworld__> maybe you can paste your environments.yaml file
[04:25] <wallyworld__> i didn't give you a control bucket url
[04:25] <wallyworld__> the control bucket url is generated internally
[04:25] <wallyworld__> you got a public bucket url from me
[04:26] <arosales> sorry I thought step 3 was a public-bucket-url to try
[04:26] <wallyworld__> let me check my email
[04:26] <wallyworld__> yes, step 3 is the public bucket url
[04:26] <wallyworld__> but you said control bucket above
[04:26] <wallyworld__> i think
[04:27] <wallyworld__> in any case, the logged messages seem to show an incorrect control bucket url
[04:27] <wallyworld__> which seems strange
[04:27] <wallyworld__> if you paste your environments.yaml file, i can see if it looks ok (remove any passwords if they are in there)
[04:27] <arosales> wallyworld__, does that mean public control buckets can't be shared?
[04:28] <wallyworld__> there's no such thing as a public control bucket
[04:28] <wallyworld__> there's a public bucket url which can be shared
[04:28] <wallyworld__> and a control bucket which is private to your env
[04:28] <arosales> ah
[04:28] <wallyworld__> the control bucket url is generated internally from the bucket name in config
[04:29] <wallyworld__> the public bucket url comes from the published url of a public bucket
[04:30] <wallyworld__> so you create a public bucket (ie container) using the cloud console or equivalent
[04:30] <wallyworld__> and then put the tools there
[04:30] <wallyworld__> and then give people the url
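A rough sketch of those same steps with the swift CLI instead of the console; the container and tools tarball names are made up, and exact flags depend on your python-swiftclient version:

    swift post -r '.r:*' juju-dist                              # create the container and make it world-readable
    swift upload juju-dist tools/juju-1.10.0-precise-amd64.tgz  # upload a tools tarball (hypothetical name)
    swift stat -v juju-dist                                     # prints the container's public URL to hand out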
[04:30] <wallyworld__> for the control bucket, that's created by juju
[04:31] <wallyworld__> it uses the control bucket name in config and creates it under your own credentials
[04:31] <wallyworld__> ie private to you
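Putting that together, a hedged sketch of the relevant ~/.juju/environments.yaml fields; the public-bucket-url is the one shared earlier in the log, the control-bucket name is a placeholder, and the remaining openstack auth keys are omitted since exact names vary by juju version:

    cat > ~/.juju/environments.yaml <<'EOF'
    environments:
      hpcloud:
        type: openstack
        # private bucket; juju generates its url internally and creates it under your own credentials
        control-bucket: juju-some-unique-bucket-name
        # shared, read-only tools bucket published by someone else
        public-bucket-url: https://region-a.geo-1.objects.hpcloudsvc.com/v1/AUTH_365e2f1e-aea0-44c5-93f4-0fe2eb1f2bcf
        # ...plus the usual openstack auth settings (auth-url, region, username, password, tenant-name)
    EOF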
[04:44] <arosales> wallyworld__, finally success http://15.185.118.228/wp-admin/install.php
[04:44] <wallyworld__> \o/
[04:44] <wallyworld__> so you used my public bucket
[04:44] <wallyworld__> and your own control bucket name
[04:45] <arosales> sorry about the misstep, confusing the public-bucket-url with the control-bucket
[04:45] <arosales> wallyworld__, correct
[04:45] <wallyworld__> no problem at all, it can be and is confusing
[04:45] <wallyworld__> i'm really glad it worked
[04:45] <arosales> wallyworld__, one last question
[04:45] <wallyworld__> sure
[04:46] <arosales> I was going to send to the list on getting set up on hpcloud
[04:46] <arosales> do you have a preference on using your public bucket?
[04:46] <wallyworld__> that's the $64000000 question
[04:47] <wallyworld__> i set up my bucket using a shared account
[04:47] <arosales> I can also create one as we verified the steps here.
[04:47] <wallyworld__> i'm not sure what our policies around all this are
[04:47] <wallyworld__> the credentials i use were provided by someone else in canonical
[04:47] <arosales> ah
[04:48] <arosales> I'll go ahead and create a public bucket off my account (not shared)
[04:48] <wallyworld__> we sort of need a true public bucket, but who pays for it if you know what i mean
[04:48] <arosales> ya there is the cost part
[04:48] <arosales> I am fine with incurring that cost for now for any users that want to try out hpcloud
[04:49] <wallyworld__> is that something canonical would just do for the long term so people can use juju on hp cloud?
[04:49] <arosales> wallyworld__, many thanks
[04:49] <wallyworld__> np, really pleased you got it working
[04:49] <wallyworld__> for private clouds, the issue goes away
[04:49] <arosales> wallyworld__, possibly. I'll need to confirm long-term logistics.
[04:50] <wallyworld__> ok, thanks. if you find out something, can you let us know (us = blue squad and juju-core folks etc)
[04:51] <arosales> wallyworld__, will do. I'll need to investigate a bit, but I will definitely let you know.
[04:52] <wallyworld__> excellent, thanks. it's been an open question for us, but till now, only of theoretical value since it wasn't working yet :-)
[04:58] <arosales> wallyworld__, I guess this is treated a little differently in aws?
[04:59] <wallyworld__> on ec2, there's a public bucket http://juju-dist.s3.amazonaws.com/
[04:59] <wallyworld__> someone must pay for that i think?
[05:00] <wallyworld__> it has been there for ages, so i'm not sure who/how it was set up
[05:00] <arosales> wallyworld__, seems logical, but I am not sure what account it comes out of :-)
[05:00] <wallyworld__> me either
[05:00] <wallyworld__> maybe i was told at one point but cannot recall now
[05:00] <arosales> wallyworld__, I'll check with a few folks to see if I can find that info. Be good to know
[05:01] <wallyworld__> yeah, i'm sure the juju-core folks would know
[05:01] <wallyworld__> i'll ask them
[08:37] <TheMue> Morning
[10:15] <mgz> let's not do the juju team meeting at 7:00 GMT next week, as google calendar has today's (cancelled) one down for
[11:35] <jam> mgz, wallyworld_, if you guys want to say hello on mumble, we can, though I don't expect you to be working today yet :)
[11:36] <wallyworld_> jam: i've been working :-) i have a few questions, so let me grab my headphones
[11:42] <jam> wallyworld_: you scared mumble
[12:06]  * TheMue is at lunch
[13:09]  * TheMue is back again.
[18:02] <sidnei> hi folks, any chance that https://bugs.launchpad.net/juju/+bug/1097015 is fixed in juju-core?
[18:02] <_mup_> Bug #1097015: "juju status" slows down non-linearly with new units/services/etc <canonical-webops> <juju:Confirmed> < https://launchpad.net/bugs/1097015 >
[18:04] <mgz> good question, and one that we can hopefully answer for certain shortly when we do some scale testing
[18:06] <sidnei> doesn't need too much scale fwiw, it's taking me in excess of 60s to run juju status with about 12 units, one subordinate on each.
[18:06] <sidnei> with pyjuju still that is
[18:22] <mgz> sidnei: that much should be fixed
[18:23] <sidnei> i guess i should give it a try then
[18:30] <mgz> hmm, annoyingly the review for the fix doesn't include why kapil found it didn't work, and I don't recall
[18:32] <mgz> hazmat: what exactly was borked with lp:~hazmat/juju/big-oh-status-constant again?
[19:53] <sidnei> hazmat: ^?
[19:55] <hazmat> sidnei, openstack?
[19:56] <sidnei> hazmat: yup
[19:56] <hazmat> sidnei, i'll take a look.. long term solution is hiding this behind the api and divorcing provider queries from state queries.. ie answer from state, and behind the api can cache response.
[19:58] <mgz> hazmat: specifically, your branch that aimed to make the O() better for juju status ended up not being effective, but I can't remember why
[19:59] <hazmat> mgz, for openstack, it's because the state api is doing queries to resolve constraints
[19:59] <mgz> we still ended up doing network operations per-machine, but I don't remember the specifics
[19:59] <hazmat> mgz, the efficiency came from asking for instances collectively instead of one by one
[19:59] <thumper> morning folks
[19:59] <hazmat> mgz, constraints afaicr
[19:59]  * thumper is back and only minorly jet-lagged now
[19:59] <thumper> morning hazmat, mgz
[19:59] <hazmat> mgz, the constraints lookup in openstack isn't cached and ends up being quite overdone.. for flavors
[19:59] <mgz> we can, and do, cache flavor lookups for constraints, what else do we need?
[20:00] <hazmat> thumper, greetings
[20:00] <mgz> if the cache isn't working we should fix that.
[20:00] <hazmat> mgz, for the openstack query that should be the majority
[20:00] <hazmat> sidnei, can you do the status with -v and pastebinit
[20:00] <hazmat> mgz, the rest is fixed overhead around state (still O(n)) that needs an api to resolve
[20:01] <sidnei> hazmat: sure
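(i.e. something along the lines of the following; flag placement may differ between juju versions:)

    juju status -v 2>&1 | pastebinit    # capture verbose status output and post it to a pastebin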
[20:01] <mgz> I know we do zookeeper work per-machine, but I'm pretty certain you told me about something else after you retested when we landed that branch
[20:02] <mgz> (and I guess we do mongo work per-machine, so have traded one kind of performance characteristic for another in that regard currently on juju-core)
[20:02] <mgz> hey thumper
[20:02] <mgz> ...man, I wish #juju was logged, or that I didn't suck and had logs myself
[20:03] <mgz> memory not quite as grepable
[20:03] <hazmat> mgz, it is logged
[20:03] <hazmat> irclogs.ubuntu.com
[20:04] <mgz> sidnei: so, if it turns out to be something obvious with constraints, that's fixable
[20:04] <mgz> hazmat: #juju-dev #juju-gui but no #juju
[20:04] <hazmat> hmm
[20:05] <mgz> that discussion would have happened shortly after 2012-08-09, when it got landed
[20:05] <bac> mgz: just open an RT and IS will log it
[20:06] <bac> no good for retroactive, though.  :)
[20:06] <hazmat> mgz, that's so sad.. there was an rt for it
[20:06] <hazmat> when we switched from ensemble to juju
[20:06] <hazmat> looks like it was never acted upon
[20:06]  * hazmat files one
[20:07] <hazmat> mgz, cc'd you
[20:08] <mgz> ta
[20:09] <hazmat> mgz, incidentally i don't see juju-dev there either
[20:09] <hazmat> the logs that is
[20:11] <mgz> should be if you selected a new enough year
[20:14]  * thumper runs go test on trunk in raring to see where we are
[20:18]  * thumper added a card to kanban to fix the tests
[20:18]  * thumper fires up the vm again
[20:46] <thumper> so...
[20:46] <thumper> what is the damn param to only run a subset of the tests again?
[20:46] <thumper> some -gocheck flag
[20:46]  * thumper doesn't remember
[20:47] <mgz> run `go test -bogusflag` to get flag help
[20:48] <mgz> -gocheck.f takes a regexp of... something
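For the record, a quick sketch of how the gocheck filter is typically used; the suite and test names here are made up:

    go test -gocheck.f 'BootstrapSuite'          # run only suites/tests whose names match the regexp
    go test -gocheck.v -gocheck.f 'TestDestroy'  # add -gocheck.v for verbose output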
[20:55] <thumper> mgz: thanks
[20:55] <thumper> mgz: we should really write that into the damn help docs somewhere
[20:55] <thumper> hi mramm
[20:56] <mramm> thumper: hey!
[20:56] <thumper> mramm: hey, just sent you an email about the kanban board
[20:56] <mramm> I'm on my way to pgday at pycon
[20:56] <thumper> I kinda forgot that you were online...
[20:56] <thumper> oh, yeah
[20:56]  * thumper is jealous
[20:56] <thumper> one day I'll get to go to pycon
[20:57]  * thumper preps a talk for next year
[20:57] <mramm> thumper: we will figure out a way to get you there next year
[20:57] <thumper> "using juju to deploy your django app"
[20:57] <mramm> sure
[20:57] <thumper> now we just have to make it awesome
[20:57] <mramm> something like that would be nice
[20:57] <mramm> right
[21:02] <sidnei> hazmat: is this of any help? http://paste.ubuntu.com/5608923/
[21:04] <mgz> sidnei: that took less than 2 seconds
[21:04] <sidnei> mgz: you mean minutes right? :)
[21:05] <mgz> ha
[21:05] <mgz> so, no, we need the provider side log I guess
[21:11] <mgz> 4 seconds to auth and get bootstrap node details, 7 seconds to ssh there and connect to zookeeper, *76* seconds doing stuff remotely, 2 seconds listing all servers with openstack api, 4 seconds formatting and exiting
[21:12] <mramm> thumper: kanban board change looks great, I'm definitely +1 on that!
[21:13] <thumper> mramm: coolio
[21:17] <mramm> looks like I'm about to run out of power here on the plane -- see you all on the other side!
[22:14] <hazmat> mgz, its all client side
[22:15] <hazmat> mgz, that's pure zk overhead from the client
[22:15] <hazmat> sidnei, so this is basically "fetch raw data to the client" overhead .. this only gets better with the api
[22:16]  * hazmat heads out for a bit
[22:24] <m_3> davecheney: pounce
[22:36] <davecheney> m_3: booja!
[22:36] <m_3> hey man
[22:36] <davecheney> m_3: wazzup ?
[22:37] <m_3> so are you on all day today?
[22:37] <davecheney> how you doing ?
[22:37] <davecheney> imma here all week, try the fish
[22:37] <m_3> good, just munching on the queue
[22:37] <m_3> ha
[22:37] <davecheney> queue ? linkage ?
[22:37] <m_3> ok, so I wanted to do another round of debugging/testing after a bit, if you're up for it
[22:38] <davecheney> sure
[22:38] <davecheney> you were going to tell me how to find the jenkins instance after it gets torn down and rebuilt
[22:39] <davecheney> m_3: two secs, relocating upstairs
[22:39] <m_3> oh way
[22:39] <m_3> s/way/wait/
[22:39] <m_3> :)
[22:39] <m_3> bout to change locations
[22:40] <m_3> can't really work on it til after food... just wanted to check your schedule and plan accordingly
[22:40] <davecheney> m_3: no probs
[22:40] <davecheney> do you want to do a quick voice call (at your leisure) to sync up?
[22:40] <m_3> ok, so I'll ping you in an hour or so and we can get rolling on that
[22:40] <davecheney> m_3: kk
[22:40] <m_3> that sounds great
[22:40] <davecheney> works for me
[22:40] <m_3> danke sir
[22:40] <davecheney> thumper: morning sir
[22:41] <thumper> hi davecheney
[22:42] <davecheney> thumper: just checking out your smart bool branch
[22:42] <thumper> davecheney: cool
[22:42] <thumper> it isn't rocket science :)
[22:43] <davecheney> yeah
[22:43] <davecheney> i've never seen the X status on a card before
[22:46] <thumper> blocked :)
[22:46] <thumper> magic communication