/srv/irclogs.ubuntu.com/2013/03/12/#juju-dev.txt

[04:00] <wallyworld__> arosales: hi, did you try it again?
[04:03] <arosales> wallyworld__, I am actually giving it a go :-) but my previous juju destroy-environment didn't exit properly. So bootstrapping to that environment is giving me issues.
[04:03] <arosales> wallyworld__, thanks for the instructions, btw
[04:03] <arosales> I could just create a new environment . . .
[04:03] <wallyworld__> np, did you get an error code from the failed destroy?
[04:03] <wallyworld__> just manually delete your bucket etc using hp cloud console
[04:04] <wallyworld__> or try destroy again, does it work?
[04:06] <arosales> wallyworld__, http://pastebin.ubuntu.com/5606834/
[04:06] <arosales> I am not sure if this is a result of a defunct previous environment or not . . .
[04:06] * arosales will try with a different environment
[04:07] <wallyworld__> never seen that error before, sounds like provider-state is fooked. try deleting it and the container it lives in manually
[04:07] <wallyworld__> could very well be related to previous env destroy
[04:08] <arosales> wallyworld__, new environment also fails: http://pastebin.ubuntu.com/5606837/
[04:09] <arosales> I'll delete them manually via the hp console.
[04:09] <wallyworld__> looks like the control bucket url is wrong
[04:10] <wallyworld__> yeah, just delete everything - compute nodes, control bucket - and try again
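
[Aside: a minimal sketch of the manual cleanup suggested above, assuming the standard OpenStack command-line clients (nova, swift) are configured for the HP Cloud account; the instance ID and bucket name are placeholders, and exact juju flags vary between releases.]

    # Retry the destroy first; a second attempt sometimes succeeds.
    juju destroy-environment

    # If it still fails, remove the leftovers by hand:
    nova list                            # find any juju-started compute nodes
    nova delete <instance-id>            # delete each one (placeholder ID)
    swift delete <control-bucket-name>   # remove the control bucket (object container) and its contents
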
[04:10] <arosales> I have     public-bucket-url: https://region-a.geo-1.objects.hpcloudsvc.com/v1/AUTH_365e2f1e-aea0-44c5-93f4-0fe2eb1f2bcf
[04:15] <arosales> wallyworld__, manually destroyed environment
[04:15] <arosales> redeployed and got http://pastebin.ubuntu.com/5606848/
[04:15] <wallyworld__> i can't see that public bucket content
[04:15] <wallyworld__> well now it can see your tools etc, so that's a win
[04:16] <arosales> I am actually using your bucket
[04:16] <wallyworld__> um, you may have left over security groups from previous runs
[04:16] <wallyworld__> you can list those and delete
[04:16] <wallyworld__> nova secgroup-list i think
[04:17] <wallyworld__> maybe you can see them from hp cloud console, not sure
[04:17] <wallyworld__> yes, you can
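
[Aside: a sketch of the security-group cleanup described above, assuming python-novaclient is installed and the OS_* credentials are exported; juju's groups are usually named after the environment, but the group name shown is only an example.]

    nova secgroup-list                    # look for groups named like juju-<environment> and juju-<environment>-<machine>
    nova secgroup-delete juju-hpcloud-0   # delete each leftover juju group (example name)
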
[04:18] <wallyworld__> i had a look with my creds, and there's a few there, maybe 20
[04:18] <wallyworld__> not sure what the limit is
[04:19] <wallyworld__> did you have many groups created?
[04:21] <arosales> deleted all juju created security groups
[04:21] <arosales> re-bootstrapped
[04:22] <arosales> http://pastebin.ubuntu.com/5606856/
[04:22] <arosales> still getting a resource access error when using your control bucket . . .
[04:23] <arosales> wallyworld__, perhaps I have to have my own control bucket?
[04:23] <wallyworld__> yes definitely
[04:23] <wallyworld__> you need credentials to read/write to it
[04:23] <wallyworld__> but my public bucket can be shared
[04:24] <arosales> oh no worries.
[04:24] <arosales> that was my mistake, I thought it was public. I should have checked via the browser first.
[04:24] <wallyworld__> your control bucket url looks wrong though
[04:24] <wallyworld__> it seems to be missing a bit
[04:24] <arosales> I was using the one you gave me via email.
[04:24] <wallyworld__> maybe you can paste your environments.yaml file
[04:25] <wallyworld__> i didn't give you a control bucket url
[04:25] <wallyworld__> the control bucket url is generated internally
[04:25] <wallyworld__> you got a public bucket url from me
[04:26] <arosales> sorry I thought step 3 was a public-bucket-url to try
[04:26] <wallyworld__> let me check my email
[04:26] <wallyworld__> yes, step 3 is the public bucket url
[04:26] <wallyworld__> but you said control bucket above
[04:26] <wallyworld__> i think
[04:27] <wallyworld__> in any case, the logged messages seem to show an incorrect control bucket url
[04:27] <wallyworld__> which seems strange
[04:27] <wallyworld__> if you paste your environments.yaml file, i can see if it looks ok (remove any passwords if they are in there)
[04:27] <arosales> wallyworld__, does that mean public control buckets can't be shared?
[04:28] <wallyworld__> there's no such thing as a public control bucket
[04:28] <wallyworld__> there's a public bucket url which can be shared
[04:28] <wallyworld__> and a control bucket which is private to your env
[04:28] <arosales> ah
[04:28] <wallyworld__> the control bucket url is generated internally from the bucket name in config
[04:29] <wallyworld__> the public bucket url is taken from the published url of a public bucket
[04:30] <wallyworld__> so you create a public bucket (ie container) using the cloud console or equivalent
[04:30] <wallyworld__> and then put the tools there
[04:30] <wallyworld__> and then give people the url
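
[Aside: a rough sketch of the public-bucket setup described above (create a container, upload the tools, share the URL), assuming the swift client is configured for the account that will host the tools; the container and tarball names are examples only.]

    swift post -r '.r:*' juju-dist             # create the container and make it world-readable
    swift upload juju-dist tools/juju-1.9.13-precise-amd64.tgz    # upload the tools (example filename)
    swift stat -v juju-dist                    # StorageURL + container name is the public-bucket-url to hand out
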
[04:30] <wallyworld__> for the control bucket, that's created by juju
[04:31] <wallyworld__> it uses the control bucket name in config and creates it under your own credentials
[04:31] <wallyworld__> ie private to you
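
[Aside: a sketch of how the two settings discussed above might appear in the openstack/HP Cloud section of ~/.juju/environments.yaml; key names and required fields differ between juju releases, so treat this as illustrative rather than a verified config.]

    environments:
      hpcloud:
        type: openstack
        # private to this environment; juju creates the container itself under your credentials
        control-bucket: some-unique-name-you-choose
        # shared, read-only location of the published tools
        public-bucket-url: https://region-a.geo-1.objects.hpcloudsvc.com/v1/AUTH_365e2f1e-aea0-44c5-93f4-0fe2eb1f2bcf
        # plus your own HP Cloud auth settings (auth url, region, username/password or access keys)
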
[04:44] <arosales> wallyworld__, finally success http://15.185.118.228/wp-admin/install.php
[04:44] <wallyworld__> \o/
[04:44] <wallyworld__> so you used my public bucket
[04:44] <wallyworld__> and your own control bucket name
[04:45] <arosales> sorry about the misstep, confusing the public-bucket-url with the control-bucket
[04:45] <arosales> wallyworld__, correct
[04:45] <wallyworld__> no problem at all, it can be and is confusing
[04:45] <wallyworld__> i'm really glad it worked
[04:45] <arosales> wallyworld__, one last question
[04:45] <wallyworld__> sure
[04:46] <arosales> I was going to send a note to the list on getting set up on hpcloud
[04:46] <arosales> do you have a preference on using your public bucket?
[04:46] <wallyworld__> that's the $64000000 question
[04:47] <wallyworld__> i set up my bucket using a shared account
[04:47] <arosales> I can also create one as we verified the steps here.
[04:47] <wallyworld__> i'm not sure what our policies around all this are
[04:47] <wallyworld__> the credentials i use were provided by someone else in canonical
[04:47] <arosales> ah
[04:48] <arosales> I'll go ahead and create a public bucket off my account (not shared)
[04:48] <wallyworld__> we sort of need a true public bucket, but who pays for it, if you know what i mean
[04:48] <arosales> ya there is the cost part
[04:48] <arosales> I am fine with incurring that cost for now for any users that want to try out hpcloud
[04:49] <wallyworld__> is that something canonical would just do for the long term so people can use juju on hp cloud?
[04:49] <arosales> wallyworld__, many thanks
[04:49] <wallyworld__> np, really pleased you got it working
[04:49] <wallyworld__> for private clouds, the issue goes away
[04:49] <arosales> wallyworld__, possibly. I'll need to confirm long-term logistics.
[04:50] <wallyworld__> ok, thanks. if you find out something, can you let us know (us = blue squad and juju-core folks etc)
[04:51] <arosales> wallyworld__, will do. I'll need to investigate a bit, but I will definitely let you know.
[04:52] <wallyworld__> excellent, thanks. it's been an open question for us, but till now, only of theoretical value since it wasn't working yet :-)
[04:58] <arosales> wallyworld__, I guess this is treated a little differently in aws?
[04:59] <wallyworld__> on ec2, there's a public bucket http://juju-dist.s3.amazonaws.com/
[04:59] <wallyworld__> someone must pay for that i think?
[05:00] <wallyworld__> it has been there for ages, so i'm not sure who/how it was set up
[05:00] <arosales> wallyworld__, seems logical, but I am not sure what account it comes out of :-)
[05:00] <wallyworld__> me either
[05:00] <wallyworld__> maybe i was told at one point but cannot recall now
[05:00] <arosales> wallyworld__, I'll check with a few folks to see if I can find that info. Be good to know
[05:01] <wallyworld__> yeah, i'm sure the juju-core folks would know
[05:01] <wallyworld__> i'll ask them
[08:37] <TheMue> Morning
[10:15] <mgz> let's not do the juju team meeting at 7:00 GMT next week, as google calendar has today's (cancelled) one down for
[11:35] <jam> mgz, wallyworld_, if you guys want to say hello on mumble, we can, though I don't expect you to be working today yet :)
[11:36] <wallyworld_> jam: i've been working :-) i have a few questions, so let me grab my headphones
[11:42] <jam> wallyworld_: you scared mumble
=== mthaddon` is now known as mthaddon
[12:06] * TheMue is at lunch
[13:09] * TheMue is back again.
=== alexlist` is now known as alexlist
=== deryck_ is now known as deryck
=== teknico_ is now known as teknico
=== deryck is now known as deryck[lunch]
=== BradCrittenden is now known as bac
[18:02] <sidnei> hi folks, any chance that https://bugs.launchpad.net/juju/+bug/1097015 is fixed in juju-core?
[18:02] <_mup_> Bug #1097015: "juju status" slows down non-linearly with new units/services/etc <canonical-webops> <juju:Confirmed> < https://launchpad.net/bugs/1097015 >
[18:04] <mgz> good question, and one that we can hopefully answer for certain shortly when we do some scale testing
[18:06] <sidnei> doesn't need too much of a scale fwiw, it's taking me in excess of 60s to run juju status with about 12 units, one subordinate on each.
[18:06] <sidnei> with pyjuju still, that is
[18:22] <mgz> sidnei: that much should be fixed
[18:23] <sidnei> i guess i should give it a try then
[18:30] <mgz> hmm, annoyingly the review for the fix doesn't include why kapil found it didn't work, and I don't recall
[18:32] <mgz> hazmat: what exactly was borked with lp:~hazmat/juju/big-oh-status-constant again?
=== deryck[lunch] is now known as deryck
[19:53] <sidnei> hazmat: ^?
[19:55] <hazmat> sidnei, openstack?
[19:56] <sidnei> hazmat: yup
[19:56] <hazmat> sidnei, i'll take a look.. long term solution is hiding this behind the api and divorcing provider queries from state queries.. ie answer from state, and behind the api we can cache the response.
[19:58] <mgz> hazmat: specifically, your branch that aimed to make the O() better for juju status ended up not being effective, but I can't remember why
[19:59] <hazmat> mgz, for openstack, it's because the state api is doing queries to resolve constraints
[19:59] <mgz> we still ended up doing network operations per-machine, but I don't remember the specifics
[19:59] <hazmat> mgz, the efficiency came from asking for instances collectively instead of one by one
[19:59] <thumper> morning folks
[19:59] <hazmat> mgz, constraints afaicr
[19:59] * thumper is back and only minorly jet-lagged now
[19:59] <thumper> morning hazmat, mgz
[19:59] <hazmat> mgz, the constraints lookup in openstack isn't cached and ends up being quite overdone.. for flavors
[19:59] <mgz> we can, and do, cache flavor lookups for constraints, what else do we need?
[20:00] <hazmat> thumper, greetings
[20:00] <mgz> if the cache isn't working we should fix that
[20:00] <hazmat> mgz, for the openstack query that should be the majority
[20:00] <hazmat> sidnei, can you do the status with -v and pastebinit
[20:00] <hazmat> mgz, the rest is fixed overhead around state (still O(n)) that needs an api to resolve
[20:01] <sidnei> hazmat: sure
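
[Aside: what "the status with -v and pastebinit" might look like in practice, assuming the pastebinit package is installed and that juju's verbose logging goes to stderr, hence the redirect.]

    juju status -v 2>&1 | pastebinit   # publish the verbose status output, including timing/log lines
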
[20:01] <mgz> I know we do zookeeper work per-machine, but I'm pretty certain you told me about something else after you retested when we landed that branch
[20:02] <mgz> (and I guess we do mongo work per-machine, so have traded one kind of performance characteristic for another in that regard currently on juju-core)
[20:02] <mgz> hey thumper
[20:02] <mgz> ...man, I wish #juju was logged, or that I didn't suck and had logs myself
[20:03] <mgz> memory not quite as grepable
[20:03] <hazmat> mgz, it is logged
[20:03] <hazmat> irclogs.ubuntu.com
[20:04] <mgz> sidnei: so, if it turns out to be something obvious with constraints, that's fixable
[20:04] <mgz> hazmat: #juju-dev #juju-gui but no #juju
[20:04] <hazmat> hmm
[20:05] <mgz> that discussion would have happened shortly after 2012-08-09, when the branch landed
[20:05] <bac> mgz: just open an RT and IS will log it
[20:06] <bac> no good for retroactive, though. :)
[20:06] <hazmat> mgz, that's so sad.. there was an rt for it
[20:06] <hazmat> when we switched from ensemble to juju
[20:06] <hazmat> looks like it was never acted upon
[20:06] * hazmat files one
[20:07] <hazmat> mgz, cc'd you
[20:08] <mgz> ta
[20:09] <hazmat> mgz, incidentally i don't see juju-dev there either
[20:09] <hazmat> the logs, that is
[20:11] <mgz> should be there if you selected a new enough year
[20:14] * thumper runs go test on trunk in raring to see where we are
[20:18] * thumper added a card to kanban to fix the tests
[20:18] * thumper fires up the vm again
[20:46] <thumper> so...
[20:46] <thumper> what is the damn param to only run a subset of the tests again?
[20:46] <thumper> some -gocheck flag
[20:46] * thumper doesn't remember
[20:47] <mgz> run `go test -bogusflag` to get flag help
[20:48] <mgz> -gocheck.f takes a regexp of... something
[20:55] <thumper> mgz: thanks
[20:55] <thumper> mgz: we should really write that into the damn help docs somewhere
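
[Aside: a short illustration of the flags being discussed, assuming gocheck's -gocheck.f filter matches a regular expression against suite and test names; the names below are examples, not real suites.]

    go test -bogusflag                  # any unknown flag makes `go test` print the available flags, including gocheck's
    go test -gocheck.f 'StatusSuite'    # run only tests whose suite/test name matches the regexp (example name)
    go test -gocheck.f 'TestDeploy.*'   # or narrow to a single test-name pattern (example)
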
[20:55] <thumper> hi mramm
[20:56] <mramm> thumper: hey!
[20:56] <thumper> mramm: hey, just sent you an email about the kanban board
[20:56] <mramm> I'm on my way to pgday at pycon
[20:56] <thumper> I kinda forgot that you were online...
[20:56] <thumper> oh, yeah
[20:56] * thumper is jealous
[20:56] <thumper> one day I'll get to go to pycon
[20:57] * thumper preps a talk for next year
[20:57] <mramm> thumper: we will figure out a way to get you there next year
[20:57] <thumper> "using juju to deploy your django app"
[20:57] <mramm> sure
[20:57] <thumper> now we just have to make it awesome
[20:57] <mramm> something like that would be nice
[20:57] <mramm> right
[21:02] <sidnei> hazmat: is this of any help? http://paste.ubuntu.com/5608923/
[21:04] <mgz> sidnei: that took less than 2 seconds
[21:04] <sidnei> mgz: you mean minutes right? :)
[21:05] <mgz> ha
[21:05] <mgz> so, no, we need the provider side log I guess
[21:11] <mgz> 4 seconds to auth and get bootstrap node details, 7 seconds to ssh there and connect to zookeeper, *76* seconds doing stuff remotely, 2 seconds listing all servers with the openstack api, 4 seconds formatting and exiting
[21:12] <mramm> thumper: kanban board change looks great, I'm definitely +1 on that!
[21:13] <thumper> mramm: coolio
[21:17] <mramm> looks like I'm about to run out of power here on the plane -- see you all on the other side!
[22:14] <hazmat> mgz, it's all client side
[22:15] <hazmat> mgz, that's pure zk overhead from the client
[22:15] <hazmat> sidnei, so this is basically the overhead of fetching raw data to the client.. this only gets better with the api
[22:16] * hazmat heads out for a bit
[22:24] <m_3> davecheney: pounce
[22:36] <davecheney> m_3: booja!
[22:36] <m_3> hey man
[22:36] <davecheney> m_3: wazzup ?
[22:37] <m_3> so are you on all day today?
[22:37] <davecheney> how you doing ?
[22:37] <davecheney> imma here all week, try the fish
[22:37] <m_3> good, just munching on the queue
[22:37] <m_3> ha
[22:37] <davecheney> queue ? linkage ?
[22:37] <m_3> ok, so I wanted to do another round of debugging testing after a bit if you're up for it
[22:38] <davecheney> sure
[22:38] <davecheney> you were going to tell me how to find the jenkins instance after it gets torn down and rebuilt
[22:39] <davecheney> m_3: two secs, relocating upstairs
[22:39] <m_3> oh way
[22:39] <m_3> s/way/wait/
[22:39] <m_3> :)
[22:39] <m_3> bout to change locations
[22:40] <m_3> can't really work on it til after food... just wanted to check your schedule and plan accordingly
[22:40] <davecheney> m_3: no probs
[22:40] <davecheney> do you want to do a quick voice call (at your leisure) to sync up
[22:40] <m_3> ok, so I'll ping you in an hour or so and we can get rolling on that
[22:40] <davecheney> m_3: kk
[22:40] <m_3> that sounds great
[22:40] <davecheney> works for me
[22:40] <m_3> thanks sir
[22:40] <davecheney> thumper: morning sir
[22:41] <thumper> hi davecheney
[22:42] <davecheney> thumper: just checking out your smart bool branch
[22:42] <thumper> davecheney: cool
[22:42] <thumper> it isn't rocket science :)
[22:43] <davecheney> yeah
[22:43] <davecheney> i've never seen the X status on a card before
[22:46] <thumper> blocked :)
[22:46] <thumper> magic communication
