[04:00] arosales: hi, did you try it again?
[04:03] wallyworld__, I am actually giving it a go :-) but my previous juju destroy-environment didn't exit properly. So bootstrapping to that environment is giving me issues.
[04:03] wallyworld__, thanks for the instructions, btw
[04:03] I could just create a new environment ...
[04:03] np. did you get an error code from the failed destroy?
[04:03] just manually delete your bucket etc using the hp cloud console
[04:04] or try destroy again, does it work?
[04:06] wallyworld__, http://pastebin.ubuntu.com/5606834/
[04:06] I am not sure if this is a result of a defunct previous environment or not ...
[04:06] * arosales will try with a different environment
[04:07] never seen that error before, sounds like provider-state is fooked. try deleting it and the container it lives in manually
[04:07] could very well be related to the previous env destroy
[04:08] wallyworld__, new environment also fails: http://pastebin.ubuntu.com/5606837/
[04:09] I'll delete them manually via the hp console.
[04:09] looks like the control bucket url is wrong
[04:10] yeah, just delete everything - compute nodes, control bucket - and try again
[04:10] I have public-bucket-url: https://region-a.geo-1.objects.hpcloudsvc.com/v1/AUTH_365e2f1e-aea0-44c5-93f4-0fe2eb1f2bcf
[04:15] wallyworld__, manually destroyed the environment
[04:15] redeployed and got http://pastebin.ubuntu.com/5606848/
[04:15] i can't see that public bucket content
[04:15] well now it can see your tools etc, so that's a win
[04:16] I am actually using your bucket
[04:16] um, you may have leftover security groups from previous runs
[04:16] you can list those and delete them
[04:16] nova secgroup-list i think
[04:17] maybe you can see them from the hp cloud console, not sure
[04:17] yes, you can
[04:18] i had a look with my creds, and there's a few there, maybe 20
[04:18] not sure what the limit is
[04:19] did you have many groups created?
[04:21] deleted all juju-created security groups
[04:21] re-bootstrapped
[04:22] http://pastebin.ubuntu.com/5606856/
[04:22] still getting a resource access error when using your control bucket ...
[04:23] wallyworld__, perhaps I have to have my own control bucket?
[04:23] yes definitely
[04:23] you need credentials to read/write to it
[04:23] but my public bucket can be shared
[04:24] oh no worries.
[04:24] that was my mistake; I thought it was public. I should have checked via the browser first.
[04:24] your control bucket url looks wrong though
[04:24] it seems to be missing a bit
[04:24] I was using the one you gave me via email.
[04:25] maybe you can paste your environments.yaml file
[04:25] i didn't give you a control bucket url
[04:25] the control bucket url is generated internally
[04:25] you got a public bucket url from me
[04:26] sorry I thought step 3 was a public-bucket-url to try
[04:26] let me check my email
[04:26] yes, step 3 is the public bucket url
[04:26] but you said control bucket above
[04:26] i think
[04:27] in any case, the logged messages seem to show an incorrect control bucket url
[04:27] which seems strange
[04:27] if you paste your environments.yaml file, i can see if it looks ok (remove any passwords if they are in there)
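For reference, a minimal sketch of what the environments.yaml under discussion might look like for HP Cloud. The control-bucket and public-bucket-url keys and the bucket URL are taken from this log; the environment name, bucket name, secret, and auth fields are placeholder assumptions, not a verified configuration.

    # sketch only -- environment name, bucket name and credentials are placeholders
    cat > ~/.juju/environments.yaml <<'EOF'
    environments:
      hpcloud:
        type: openstack
        # private to you; juju creates this container under your own credentials
        control-bucket: some-unique-bucket-name-you-own
        # shared read-only tools bucket (the url given out in this log)
        public-bucket-url: https://region-a.geo-1.objects.hpcloudsvc.com/v1/AUTH_365e2f1e-aea0-44c5-93f4-0fe2eb1f2bcf
        admin-secret: some-admin-secret
        # plus your HP Cloud auth settings (username/password/tenant or
        # access/secret keys, depending on the juju version in use)
    EOF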
[04:27] wallyworld__, does that mean public control buckets can't be shared?
[04:28] there's no such thing as a public control bucket
[04:28] there's a public bucket url which can be shared
[04:28] and a control bucket which is private to your env
[04:28] ah
[04:28] the control bucket url is generated internally from the bucket name in config
[04:29] the public bucket url comes from the published url of a public bucket
[04:30] so you create a public bucket (ie container) using the cloud console or equivalent
[04:30] and then put the tools there
[04:30] and then give people the url
[04:30] for the control bucket, that's created by juju
[04:31] it uses the control bucket name in config and creates it under your own credentials
[04:31] ie private to you
[04:44] wallyworld__, finally, success: http://15.185.118.228/wp-admin/install.php
[04:44] \o/
[04:44] so you used my public bucket
[04:44] and your own control bucket name
[04:45] sorry about the misstep of confusing the public-bucket-url with the control-bucket
[04:45] wallyworld__, correct
[04:45] no problem at all, it can be and is confusing
[04:45] i'm really glad it worked
[04:45] wallyworld__, one last question
[04:45] sure
[04:46] I was going to send a note to the list on getting set up on hpcloud
[04:46] do you have a preference on using your public bucket?
[04:46] that's the $64000000 question
[04:47] i set up my bucket using a shared account
[04:47] I can also create one, as we verified the steps here.
[04:47] i'm not sure what our policies around all this are
[04:47] the credentials i use were provided by someone else in canonical
[04:47] ah
[04:48] I'll go ahead and create a public bucket off my account (not shared)
[04:48] we sort of need a true public bucket, but who pays for it if you know what i mean
[04:48] yeah, there is the cost part
[04:48] I am fine with incurring that cost for now for any users that want to try out hpcloud
[04:49] is that something canonical would just do for the long term so people can use juju on hp cloud?
[04:49] wallyworld__, many thanks
[04:49] np, really pleased you got it working
[04:49] for private clouds, the issue goes away
[04:49] wallyworld__, possibly; I'll need to confirm long-term logistics.
[04:50] ok, thanks. if you find out something, can you let us know (us = blue squad and juju-core folks etc)
[04:51] wallyworld__, will do. I'll need to investigate a bit, but I will definitely let you know.
[04:52] excellent, thanks. it's been an open question for us, but until now only of theoretical value since it wasn't working yet :-)
[04:58] wallyworld__, I guess this is treated a little differently in aws?
[04:59] on ec2, there's a public bucket http://juju-dist.s3.amazonaws.com/
[04:59] someone must pay for that i think?
[05:00] it has been there for ages, so i'm not sure who/how it was set up
[05:00] wallyworld__, seems logical, but I am not sure what account it comes out of :-)
[05:00] me either
[05:00] maybe i was told at one point but cannot recall now
[05:00] wallyworld__, I'll check with a few folks to see if I can find that info.
Be good to know
[05:01] yeah, i'm sure the juju-core folks would know
[05:01] i'll ask them
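A rough sketch of the sequence that finally worked above, for anyone following along. The nova CLI commands stand in for the HP Cloud console steps used in the log, the security group name is a placeholder (the groups juju creates typically include the environment name), and the wordpress charm is inferred from the install URL pasted above; exact commands may differ between juju versions.

    # clear out leftovers from the earlier failed environment
    nova secgroup-list                    # look for the juju-created groups
    nova secgroup-delete juju-hpcloud-0   # placeholder name, repeat per group

    # then bootstrap and deploy again
    juju bootstrap
    juju deploy wordpress
    juju expose wordpress
    juju status                           # wait for the unit's public address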
[08:37] Morning
[10:15] let's not do the juju team meeting at 7:00 GMT next week, as google calendar has today's (cancelled) one down for
[11:35] mgz, wallyworld_, if you guys want to say hello on mumble, we can, though I don't expect you to be working today yet :)
[11:36] jam: i've been working :-) i have a few questions, so let me grab my headphones
[11:42] wallyworld_: you scared mumble
=== mthaddon` is now known as mthaddon
[12:06] * TheMue is at lunch
[13:09] * TheMue is back again.
=== alexlist` is now known as alexlist
=== deryck_ is now known as deryck
=== teknico_ is now known as teknico
=== deryck is now known as deryck[lunch]
=== BradCrittenden is now known as bac
[18:02] hi folks, any chance that https://bugs.launchpad.net/juju/+bug/1097015 is fixed in juju-core?
[18:02] <_mup_> Bug #1097015: "juju status" slows down non-linearly with new units/services/etc < https://launchpad.net/bugs/1097015 >
[18:04] good question, and one that we can hopefully answer for certain shortly when we do some scale testing
[18:06] doesn't need too much scale fwiw; it's taking me in excess of 60s to run juju status with about 12 units, one subordinate on each.
[18:06] with pyjuju still, that is
[18:22] sidnei: that much should be fixed
[18:23] i guess i should give it a try then
[18:30] hmm, annoyingly the review for the fix doesn't include why kapil found it didn't work, and I don't recall
[18:32] hazmat: what exactly was borked with lp:~hazmat/juju/big-oh-status-constant again?
=== deryck[lunch] is now known as deryck
[19:53] hazmat: ^?
[19:55] sidnei, openstack?
[19:56] hazmat: yup
[19:56] sidnei, i'll take a look.. the long-term solution is hiding this behind the api and divorcing provider queries from state queries.. ie answer from state, and behind the api we can cache the response.
[19:58] hazmat: specifically, your branch that aimed to make the O() better for juju status ended up not being effective, but I can't remember why
[19:59] mgz, for openstack, it's because the state api is doing queries to resolve constraints
[19:59] we still ended up doing network operations per-machine, but I don't remember the specifics
[19:59] mgz, the efficiency came from asking for instances collectively instead of one by one
[19:59] morning folks
[19:59] mgz, constraints afaicr
[19:59] * thumper is back and only minorly jet-lagged now
[19:59] morning hazmat, mgz
[19:59] mgz, the constraints lookup in openstack isn't cached and ends up being quite overdone.. for flavors
[19:59] we can, and do, cache flavor lookups for constraints, what else do we need?
[20:00] thumper, greetings
[20:00] if the cache isn't working we should fix that
[20:00] mgz, for the openstack query that should be the majority
[20:00] sidnei, can you do the status with -v and pastebinit
[20:00] mgz, the rest is fixed overhead around state (still O(n)) that needs an api to resolve
[20:01] hazmat: sure
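A small sketch of how a verbose status run like the one requested here can be captured for a pastebin, assuming -v sends the debug logging to stderr and pastebinit is installed:

    # time the run and keep all output (stdout plus the -v logging on stderr)
    time juju status -v 2>&1 | tee /tmp/juju-status.log
    pastebinit /tmp/juju-status.log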
[20:01] I know we do zookeeper work per-machine, but I'm pretty certain you told me about something else after you retested when we landed that branch
[20:02] (and I guess we do mongo work per-machine, so have traded one kind of performance characteristic for another in that regard currently on juju-core)
[20:02] hey thumper
[20:02] ...man, I wish #juju was logged, or that I didn't suck and had logs myself
[20:03] memory not quite as grepable
[20:03] mgz, it is logged
[20:03] irclogs.ubuntu.com
[20:04] sidnei: so, if it turns out to be something obvious with constraints, that's fixable
[20:04] hazmat: #juju-dev #juju-gui but no #juju
[20:04] hmm
[20:05] that discussion would have been shortly after 2012-08-09, when the branch landed
[20:05] mgz: just open an RT and IS will log it
[20:06] no good retroactively, though. :)
[20:06] mgz, that's so sad.. there was an rt for it
[20:06] when we switched from ensemble to juju
[20:06] looks like it was never acted upon
[20:06] * hazmat files one
[20:07] mgz, cc'd you
[20:08] ta
[20:09] mgz, incidentally i don't see juju-dev there either
[20:09] the logs that is
[20:11] should be there if you select a new enough year
[20:14] * thumper runs go test on trunk in raring to see where we are
[20:18] * thumper added a card to kanban to fix the tests
[20:18] * thumper fires up the vm again
[20:46] so...
[20:46] what is the damn param to only run a subset of the tests again?
[20:46] some -gocheck flag
[20:46] * thumper doesn't remember
[20:47] run `go test -bogusflag` to get flag help
[20:48] -gocheck.f takes a regexp of... something
[20:55] mgz: thanks
[20:55] mgz: we should really write that into the damn help docs somewhere
[20:55] hi mramm
[20:56] thumper: hey!
[20:56] mramm: hey, just sent you an email about the kanban board
[20:56] I'm on my way to pgday at pycon
[20:56] I kinda forgot that you were online...
[20:56] oh, yeah
[20:56] * thumper is jealous
[20:56] one day I'll get to go to pycon
[20:57] * thumper preps a talk for next year
[20:57] thumper: we will figure out a way to get you there next year
[20:57] "using juju to deploy your django app"
[20:57] sure
[20:57] now we just have to make it awesome
[20:57] something like that would be nice
[20:57] right
[21:02] hazmat: is this of any help? http://paste.ubuntu.com/5608923/
[21:04] sidnei: that took less than 2 seconds
[21:04] mgz: you mean minutes right? :)
[21:05] ha
[21:05] so, no, we need the provider-side log I guess
[21:11] 4 seconds to auth and get bootstrap node details, 7 seconds to ssh there and connect to zookeeper, *76* seconds doing stuff remotely, 2 seconds listing all servers with the openstack api, 4 seconds formatting and exiting
[21:12] thumper: kanban board change looks great, I'm definitely +1 on that!
[21:13] mramm: coolio
[21:17] looks like I'm about to run out of power here on the plane -- see you all on the other side!
[22:14] mgz, it's all client side
[22:15] mgz, that's pure zk overhead from the client
[22:15] sidnei, so this is basically fetch-raw-data-to-the-client overhead.. this only gets better with the api
[22:16] * hazmat heads out for a bit
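To pin down the flag question from earlier in this log ([20:46]-[20:48]): gocheck-based test suites take a filter regexp, roughly as below. The 'Status' pattern is only an illustration, not a test name from the source.

    # passing an unknown flag prints the available gocheck flags
    go test -bogusflag
    # run only the tests/suites whose names match the regexp
    go test -gocheck.f 'Status'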
[22:24] davecheney: pounce
[22:36] m_3: booja!
[22:36] hey man
[22:36] m_3: wazzup ?
[22:37] so are you on all day today?
[22:37] how you doing ?
[22:37] imma here all week, try the fish
[22:37] good, just munching on the queue
[22:37] ha
[22:37] queue ? linkage ?
[22:37] ok, so I wanted to do another round of debugging testing after a bit if you're up for it
[22:38] sure
[22:38] you were going to tell me how to find the jenkins instance after it gets torn down and rebuilt
[22:39] m_3: two secs, relocating upstairs
[22:39] oh way
[22:39] s/way/wait/
[22:39] :)
[22:39] bout to change locations
[22:40] can't really work on it til after food... just wanted to check your schedule and plan accordingly
[22:40] m_3: no probs
[22:40] do you want to do a quick voice call (at your leisure) to sync up
[22:40] ok, so I'll ping you in an hour or so and we can get rolling on that
[22:40] m_3: kk
[22:40] that sounds great
[22:40] works for me
[22:40] danke sir
[22:40] thumper: morning sir
[22:41] hi davecheney
[22:42] thumper: just checking out your smart bool branch
[22:42] davecheney: cool
[22:42] it isn't rocket science :)
[22:43] yeah
[22:43] i've never seen the X status on a card before
[22:46] blocked :)
[22:46] magic communication