/srv/irclogs.ubuntu.com/2011/11/28/#juju.txt

rogmornin'08:46
TheMuehi rog09:16
niemeyerHello all!11:13
mainerrorHello.11:14
rogniemeyer: welcome home!11:20
niemeyerrog: Thanks! :)11:24
TheMueniemeyer: welcome back11:27
niemeyerTheMue: Thanks!11:27
niemeyerrog: lbox seems to be working well11:49
niemeyerrog: You're right about pre-req.. the diff should come from the pre-req branch11:49
niemeyerrog: Will address this later11:49
rogniemeyer: it's difficult though, because the prereq branch is a moving target. as is the target itself, i guess11:50
niemeyerrog: Yeah, that's probably not a big deal11:50
rogniemeyer: it's a pity the target has to be on disk.11:50
rogniemeyer: i've made a shell script that does quite a lot of the stuff i always need to do; i don't know how much would be generally applicable though.11:51
niemeyerrog: The main concern I have is that it feels easy to merge a branch without addressing the pre-req first, and as a consequence merging the pre-req without reviewing it11:51
niemeyerrog: I was going to address that in your email, but we can talk here as well11:51
roghttp://paste.ubuntu.com/752390/11:51
niemeyerrog: It's actually not a pity11:51
niemeyerrog: This is standard distributed revision controlling11:51
rogi just fetch the target every time now11:52
roginto /tmp or somewhere11:52
niemeyerrog: You're used to the Go process, but their process is well sub-standard in that regard11:52
niemeyerrog: This feels quite bad11:52
rogyeah, but it stopped me making the same mistake every time11:52
niemeyerrog: The usual way to develop software with any of the DVCS tools (bzr, git, hg, ..)11:53
rogniemeyer: which was that i'd specify, say, ../go-trunk as a target, but i might have a different push target inside go-trunk11:53
niemeyerrog: is to fetch the pristine code locally, and work with branches on top of it11:53
niemeyerrog: Hmmm.. how do you mean?11:54
niemeyerrog: There's only one go-trunk?11:54
rogi think i'd probably done: cd go-trunk; make-some-changes; push --remember lp:~rogpeppe/blah/foo11:55
niemeyerrog: Yeah, that's the issue11:55
rogwhich was probably a silly thing to do, but it caught me out twiece11:55
rogs/twie/twi11:55
niemeyerrog: yeah, don't do it.. you may end up even screwing up the real trunk by mistake11:55
niemeyerrog: branch locally11:55
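The local-branching workflow niemeyer describes might look like the following in bzr. The commands are printed rather than executed here, and the branch names and lp: push URL are made-up examples:

```shell
# Print the command sequence for the "branch locally, never edit the
# pristine copy" workflow; URLs and branch names are hypothetical.
steps=$(cat <<'EOF'
bzr branch lp:goamz go-trunk          # pristine copy, never edited directly
bzr branch go-trunk fix-ec2-auth      # feature branch on top of it
cd fix-ec2-auth
# ...edit, then:
bzr commit -m "ec2: fix auth handling"
bzr push lp:~user/goamz/fix-ec2-auth  # push target stays with the branch
EOF
)
echo "$steps"
```

Because the push target is remembered inside the feature branch, not inside the pristine trunk, the `push --remember` mistake rog describes above cannot happen.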
rogthe other thing is when i want to edit some code that i've goinstalled11:56
rogi want to do: cd $GOROOT/src/pkg/launchpad.net/goamz/ec2; edit; commit; propose11:56
niemeyerrog: We covered that at UDS11:57
niemeyerrog: Don't edit the pristine version11:57
niemeyerrog: Work in a real branch11:57
rogyeah, it's just much easier to get things wrong in that situation, because i'll be using two packages pushing to the same location in GOROOT11:59
rogand i have to remember where i put the darn branch11:59
niemeyerrog: Not really, that's the idea12:00
rogalso (and a different issue i think), i can't specify $GOROOT/src/pkg/launchpad.net/goamz/ec2 as a target12:00
niemeyerrog: Use different GOPATHs12:00
rogi can't use GOPATH because gotest doesn't work12:01
rogi'm still biting my lip on that one12:01
rogi really want to use GOPATH!12:01
niemeyerrog: The two things are unrelated12:01
niemeyerrog: Sorry, let me rephrase that12:01
niemeyerrog: You don't have to keep your source code at $GOROOT12:01
niemeyerrog: That's the real point12:01
niemeyerrog: You can have as many branches as you want installing onto GOROOT, or even a different GOPATH for that matter12:02
rogniemeyer: i think i do if i want to be able to use gotest on the source code12:02
rogniemeyer: unless it's been fixed recently12:02
niemeyerrog: I use that scheme, and I use gotest12:02
niemeyerrog: Nope, no recent changes to that12:02
niemeyerrog: make install always installs to $GOROOT12:03
rogniemeyer: hmm, so if i've got GOPATH set and i go into a source directory in GOPATH and gotest, it works?12:03
niemeyerrog: gotest doesn't use GOPATH.. you can use gotest locally irrespective of where your source lives12:03
rogbut i also like to be able to use goinstall12:04
niemeyerrog: Yeah, I like ponies too :)12:05
rogbecause it means that when i upgrade Go, i don't have to manually go through running make on every package i might have forked12:05
roggoinstall -a is really useful12:05
rog(and has saved me lots of time)12:05
roganyway, my current scheme works ok - i keep directory $HOME/tmp/targets with a set of pristine targets, then before proposing, i update the target and use that12:07
rogthat means that i don't have to worry about whether i might have accidentally locally corrupted a target12:08
rogniemeyer: here's an idea: in the file with the summary & description that's edited as part of the propose process, why not put a summary of all the details that propose has inferred?12:12
rogthen it's more obvious when you've got the wrong push target, missed the prereq, etc12:12
niemeyerrog: We can do that, but it doesn't solve the main problem, which is the danger of merging a branch that has a pre-req without addressing the pre-req first12:13
niemeyerrog: We can probably create some convention to avoid the problem.. I'll think about it some more12:13
niemeyerrog: I'll also tweak lbox to not force the description12:17
rogniemeyer: it would also be nice if you didn't have to give the target when re-running lbox propose. maybe "lbox update" would be a more appropriate name.12:18

niemeyerrog: The name is fine (you're still proposing), and in the setup I suggested you don't actually have to provide a target at all12:22
niemeyerrog: Because it's the parent branch12:22
rogniemeyer: hmm. what it's not the parent branch?12:22
niemeyerrog: Sorry, I don't get the question12:22
rogniemeyer: what if you're not pushing to the parent branch?12:23
rogniemeyer: that is, if you've branched from another local branch, but want to use the original parent as target?12:23
rogniemeyer: or perhaps i've misunderstood12:24
niemeyerrog: I see12:24
niemeyerrog: No, you got it12:24
niemeyerrog: Hmm.. this might help solve both issues perhaps..12:25
niemeyerrog: I'm wondering how the workflow would look like if it _was_ the actual target12:26
rogniemeyer: if what was the target?12:26
rogthe prereq?12:26
niemeyerrog: The awkwardness is that it'd prevent the base from being merged12:26
niemeyerrog: Right12:26
niemeyerrog: Since we'd want to merge the follow up on it12:27
niemeyerProbably a bad idea12:27
rogyeah, it doesn't sound quite right12:27
rogthe other issue i came up against is that you can't have two prereqs12:28
niemeyerrog: Indeed, I've missed that before too12:31
niemeyerrog: That said, at some point it's easier to just hold off the branch a bit12:31
niemeyerrog: Even for everyone's sanity12:31
rogniemeyer: maybe i should have just done that, and push each branch when you've LGTM'd the previous one12:33
rogBTW when i did lbox propose to upload a change to goyaml-error-fixes, i got this: http://paste.ubuntu.com/752421/12:34
rogi wonder why it didn't find the previous merge proposal12:34
niemeyerrog: This would work, but it's a nice feature to be able to push dependent branches..12:34
niemeyerrog: Maybe we can, by convention, not LGTM a branch that has pre-reqs before the pre-reqs themselves have been sorted12:35
niemeyerrog: This would solve the worries12:35
rogniemeyer: a related possibility is to use an lbox command to do the final push12:35
rogniemeyer: and that could check that the prereq had been pushed before doing it12:36
niemeyerrog: It found the branch itself as the landing target12:36
niemeyerrog: Can you please paste "bzr info" there?12:36
rogyeah, i just worked that out12:36
rogi know why it happened12:37
niemeyerrog: It sounds like your custom workflow is getting in the way there12:37
rogit's because i'd deleted my local copy of the branch12:37
rogthinking i could always re-get it from lp later12:37
rogbut by getting it again, i lost the original parent12:37
roghttp://paste.ubuntu.com/752425/12:38
niemeyerrog: Yeah, that's it12:38
niemeyerCool12:38
rogniemeyer: i don't think it was because of my custom workflow12:38
niemeyerrog: It was..12:39
rogniemeyer: because deleting a local copy of the branch isn't customary?12:39
niemeyerrog: In traditional DVCS workflows you don't really kill the local branch while you're working on it12:39
rogniemeyer: i actually didn't delete it - i carried on editing and committing towards a later merge request12:40
niemeyer<rog> it's because i'd deleted my local copy of the branch12:40
niemeyer<rog> thinking i could always re-get it from lp later12:40
niemeyer??12:40
rogniemeyer: well, in my head it was deleted...12:40
niemeyerrog: In Bazaar's head too, apparently, since you got the branch again from Launchpad (the parent says so)12:41
rogniemeyer: it had turned into a new branch, which i didn't want to interfere with12:41
rogniemeyer: maybe there was another way i could have got it from launchpad so that the original parent was preserved12:41
rogniemeyer: i didn't know that parentage was so important12:41
niemeyerrog: Just don't kill the branch while you're working on it12:41
niemeyerrog: The branch existence is important12:42
rogniemeyer: yeah, i should just re-branch every time i do a merge request12:42
rogniemeyer: rather than carrying on working in the same directory12:42
niemeyerrog: Yeah, every time you want to start a new line of development, re-branch12:42
rogniemeyer: it's a bit of a pain because i have to lose all my editing state.12:42
niemeyerrog: FWIW, that's a normal impedance mismatch when getting into DVCS12:43
niemeyerrog: That bit of it is due to Bazaar's way of working with multiple directories12:43
niemeyerrog: This is going to be addressed soon12:43
niemeyerrog: With a git-like workflow, you can have multiple branches in the same directory12:43
rogniemeyer: of course. other VCSs you'd still be in the same dir12:43
niemeyerrog: bzr is getting the same feature12:44
rogniemeyer: i think you *can* do that now - someone pointed me at a way of doing it12:44
rogniemeyer: but it's probably a hack12:44
niemeyerrog: http://doc.bazaar.canonical.com/developers/colocated-branches.html12:44
rogniemeyer: anyway, regardless of glitches, it's awesome that we've got proper code review working! nice one.12:47
niemeyerrog: Totally, I'm very happy about that12:47
rogniemeyer: one thing: i think lbox propose should complain if there are uncommitted changes. i often forget to commit!12:49
rogniemeyer: (just like bzr push complains)12:49
niemeyerrog: Agreed12:50
niemeyerrog: I was already planning something like that.. will tweak the Delta interface on goetveld12:51
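The pre-propose guard rog is asking for could be sketched as below; `check_clean` is a hypothetical helper that would be fed the output of `bzr status --short`, mirroring the complaint `bzr push` already makes:

```shell
# Hypothetical guard: refuse to propose when the working tree has
# uncommitted changes. check_clean takes "bzr status --short" output.
check_clean() {
    if [ -n "$1" ]; then
        echo "lbox: aborting, uncommitted changes in tree"
        return 1
    fi
    echo "lbox: tree is clean"
}
clean_msg=$(check_clean "")
dirty_msg=$(check_clean " M yaml.go") || true
echo "$clean_msg"
echo "$dirty_msg"
```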
rogniemeyer: BTW, i tried pushing to lp:goyaml and got "Transport operation not possible: readonly transport". is that because i'm not yet recognised as a member?12:52
rogniemeyer: or do i have to re-branch, now that i am?12:52
niemeyerrog: No, just because of the URL12:52
rogniemeyer: is it the wrong url?12:53
niemeyerrog: If you look at bzr info, you'll see the URL is a read-only one12:53
niemeyerrog: You can push explicitly to lp:~gophers/goyaml/trunk12:53
rogwhere in the info (http://paste.ubuntu.com/752435/) does it say read-only?12:54
rogah, you mean the url for lp:goyaml?12:54
rogso is lp:~gophers/goyaml/trunk an alias for lp:goyaml ?12:54
rogniemeyer: or is there something else subtle going on here?12:55
niemeyerrog: It is now, but it wasn't at the time you branched12:55
niemeyerrog: Oh, hold on12:55
niemeyerrog: It's my fault, actually12:55
niemeyerrog: I've changed the project maintainer, but not the branch12:56
niemeyerrog: It's still pointing to my personal branch12:56
rogniemeyer: ah, so bzr push lp:goyaml should work, in fact?12:56
niemeyerrog: So, let's do this.. push it to the ~gophers URL.. I'll tweak the official branch location after that12:56
niemeyerrog: It will, definitely12:56
rogcool12:56
rogniemeyer: pushed now12:57
niemeyerrog: Done, we have a new lp:goyaml12:57
rogniemeyer: cool.12:58
niemeyerrog: You're right, btw, we need an "lbox merge"12:59
niemeyerfwereade: Hey!13:00
fwereadeheya niemeyer!13:00
niemeyerrog: Or "lbox submit" perhaps.. :)13:00
fwereadegood holiday?13:00
niemeyerfwereade: Yeah, awesome13:00
fwereadeniemeyer, cool, where did you go?13:00
rogfwereade: afternoon guvnor13:00
fwereadeheya rog13:00
niemeyerfwereade: I went to João Pessoa, a very nice region in the northeast of Brazil13:01
hazmatg'morning13:01
niemeyerhazmat: morning!13:01
fwereadeheya hazmat :)13:01
hazmatniemeyer, welcome back, sounds like a nice vacation13:01
niemeyerhazmat: Thanks! Yeah, it was very relaxing13:01
niemeyerLeft the laptop at home for a change13:02
hazmatniemeyer, ah a disconnected holiday, even better.. pics of those clear blue seas from pessoa look amazing13:03
niemeyerhazmat: Not entirely disconnected.. still had a phone with me.. but at least severely restricted, let's say ;)13:03
niemeyerhazmat: Yeah, in the last day we went snorkeling here: http://perlbal.hi-pi.com/blog-images/536061/gd/1264122522/Picaozinho-Joao-Pessoa.jpg13:04
rogniemeyer: another occasion it didn't seem to find the original merge proposal: http://paste.ubuntu.com/752444/13:05
niemeyerIt's about 1km from the coast inwards13:05
rogniemeyer: looks lovely13:05
niemeyerrog: yeah, now the problem is a different one13:05
niemeyerrog: The merge proposal is off since we renamed the branch13:05
rogniemeyer: ah, because lp:goyaml is just an alias, right?13:06
niemeyerrog: That's right.. the merge proposal is still against lp:~niemeyer/...13:06
niemeyerrog: You can let it know by hand of the target, and it will work13:06
rogniemeyer: that would be better than making a new proposal?13:06
niemeyerrog: I guess.. it'd avoid having you jump back and forth removing the previous one in lp and cr13:07
rogniemeyer: we might have implemented lbox submit... which would submit to the wrong place :-)13:07
niemeyerrog: Indeed.. luckily we won't be renaming things like that very often13:08
rogtrue13:08
rogniemeyer: so the original target was lp:~niemeyer/goyaml/goyaml ?13:08
niemeyer /trunk at the end13:09
rogah, of course13:09
rogniemeyer: ok, i did the propose, but the codereview diffs seem unchanged13:11
rogniemeyer: https://codereview.appspot.com/5432068/13:12
rogahhh f*!#13:12
* niemeyer waits for the bomb13:12
rogniemeyer: i have to change the target... of course!13:12
niemeyerrog: Ah, indeed :-)13:12
rogniemeyer: otherwise i don't see the diffs against the target i've just pushed to13:13
rogdoh!13:13
rogniemeyer: that's better: https://codereview.appspot.com/5431087/13:16
niemeyerrog: WOohay!13:16
niemeyerrog: Done13:18
rogniemeyer: "branches have diverged". dammit!13:20
niemeyerrog: I don't know what you're doing, but the way this generally works is that we all have a local copy of trunk..13:23
niemeyerrog: Before merging a branch, pull from the remote to get the latest changes,13:23
niemeyerrog: Then merge and push13:23
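The pull-merge-push sequence niemeyer outlines, shown as the commands one would run (printed here, not executed; the feature branch name is illustrative):

```shell
# Print the land-a-branch sequence: update local trunk, merge, push.
land_steps=$(cat <<'EOF'
cd go-trunk            # local copy of the remote trunk
bzr pull               # bring it up to date first
bzr merge ../my-fix    # merge the reviewed feature branch
bzr commit -m "merge my-fix"
bzr push               # publish the merged trunk
EOF
)
echo "$land_steps"
```

Skipping the `bzr pull` or `bzr merge` step is what produces the "branches have diverged" error above.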
rogniemeyer: yes, that was silly - i forgot the merge step.13:24
rogniemeyer: i thought the problem was because the trunk had changed name13:25
rogniemeyer: submitted13:26
marcoceppiIs there anyway to catch if a relation has been broken?13:34
koolhead11hi all13:41
mainerrormarcoceppi: Can't you do that with juju status?13:42
koolhead11Is there a specific documentation i should look at for running juju in my existing openstack infrastructre?13:42
koolhead11the documentation i am looking at is https://juju.ubuntu.com/docs/getting-started.html13:44
niemeyermarcoceppi: You mean within a charm?13:44
marcoceppiniemeyer: Yes13:44
niemeyermarcoceppi: Yeah, there's both relation-departed and relation-broken13:45
niemeyermarcoceppi: https://juju.ubuntu.com/docs/charm.html#hooks13:45
niemeyermarcoceppi: departed is likely what you want13:45
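A relation-departed hook of the kind marcoceppi needs might be sketched as below. The config file, its one-host-per-line format, and the `remove_backend` helper are invented for illustration; a real hook would learn the departing host from juju's hook environment rather than a hard-coded name:

```shell
#!/bin/sh
# Sketch of a relation-departed hook: drop the departing database host
# from a local backend list. Paths and file format are illustrative.
CONFIG=$(mktemp)
printf 'db1.example.com\ndb2.example.com\n' > "$CONFIG"

remove_backend() {        # $1 = hostname of the departing unit
    grep -v "^$1\$" "$CONFIG" > "$CONFIG.new" && mv "$CONFIG.new" "$CONFIG"
}

remove_backend db1.example.com
cat "$CONFIG"
```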
roghow do we pass initialisation options to zookeeper?14:10
roglooking in juju/providers/common/cloudinit.py, i can see how zookeeper gets run (zookeeperd package, thanks hazmat), but i can't see how we configure the zookeeper address, for example.14:13
rogfwereade: ^14:13
hazmatrog, the zk address is the default address port 2181 on all ifaces14:13
fwereaderog, hazmat beat me to it :)14:14
roghazmat: so the answer is "we don't"?14:14
hazmatrog, its pretty common for daemons to bind to all available interfaces on their standard port14:14
hazmatrog, yup14:14
roghazmat: ok, that makes more sense now14:14
roghazmat, fwereade: i couldn't work out how ZOOKEEPER_ADDRESS was getting through, one way or the other. there's so much care taken to make it configurable...14:15
hazmatrog, ZOOKEEPER_ADDRESS is different.. its used to pass the ip:port info to agents, its passed on as an env variable to the agent process14:17
hazmatits not used to configure zookeeper but the things using zookeeper14:17
* rog nods14:17
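The way the address reaches agents can be sketched in a couple of lines: juju exports ZOOKEEPER_ADDRESS into the agent's environment, and consumers fall back to the local default when it is unset (the variable name and default mirror the juju code; the echo is illustrative):

```shell
# Agents read ZOOKEEPER_ADDRESS from the environment; the fallback
# mirrors the default in juju/control/initialize.py.
unset ZOOKEEPER_ADDRESS
ZK_ADDRESS=${ZOOKEEPER_ADDRESS:-127.0.0.1:2181}
echo "connecting to zookeeper at $ZK_ADDRESS"
```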
hazmatmarcoceppi, -departed is called when any unit of the related service is removed, -broken is called when the relation is removed14:21
hazmatkoolhead11, the ec2 provider is used for openstack, there's an example in the list and on askubuntu14:21
koolhead11hazmat: i googled and found one such thread started by jcastro14:23
koolhead11but not much info.14:23
roghazmat: juju/control/initialize.py was the confusing bit. ZOOKEEPER_ADDRESS will never be set there, right?14:23
* koolhead11 goes back to askubuntu14:23
roghazmat, fwereade: or is there another subtlety i'm missing?14:27
fwereaderog, sorry, let me check what's going on there14:28
fwereadeif it happens to be set it will be used14:28
fwereadebut:14:28
fwereade    zk_address = os.environ.get("ZOOKEEPER_ADDRESS", "127.0.0.1:2181")14:28
fwereadedoes that answer your question?14:29
fwereaderog: ^^14:34
rogfwereade: yes, i saw the default. i just wondered if there was ever an occasion when the default would not be used14:35
rogfwereade: and i *think* the answer is never.14:35
fwereaderog, offhand, I think so too14:35
rogfwereade: good. i'm not too far off base then.14:36
hazmatrog, the default is not used lots14:40
hazmatrog, the common way the value is passed is via an environment variable14:40
hazmatwell at least for non bootstrap nodes14:40
hazmatoh.. sorry14:40
roghazmat: that was my point - this *is* the bootstrap code14:41
niemeyerAnd that's lunch time..14:41
hazmatrog, indeed its always the default there14:41
roghazmat: cool. a comment (or even removing the env var reference) might help naive folks like me to understand things, i guess.14:41
marcoceppihazmat: So I've got a service that requires mysql. I want to capture when that interface goes away. So -broken if I were to remove the relations? Also, what information is passed to the hook?14:49
hazmatmarcoceppi, not much, the standard hook information + rel info (JUJU_RELATION), but afaicr its not possible to interrogate any of the remote settings via the hook cli api, because they're already gone, the unit can interrogate its own settings though14:52
marcoceppiI just need to find the hostname of the hook that broke off, is that still available in: relation-get private-address15:02
jcastromarcoceppi, bruno had ftp ready for review didn't he?15:11
marcoceppijcastro: Not that I'm aware of, last I heard he was starting on it but I've been in and out all weekend15:14
jcastrohe was ready for review as he was asking me how to tag it, but I can't find his branch. :-/15:14
marcoceppiNeither can I :\15:15
hazmatmarcoceppi, its not15:27
hazmatmarcoceppi, there could be a dozen remote hosts15:27
hazmatdepending on the type of relation and which endpoint it is15:28
jcastrohazmat, do you have irc topic powers here?15:28
hazmatjcastro, i do15:28
marcoceppihazmat: So I'm creating a phpMyAdmin charm and it's able to join to multiple mysql servers15:29
marcoceppiI'd like to capture when one goes away and remove it from the cfg15:29
hazmatjcastro, something in particular you'd like in it?15:30
hazmatmarcoceppi, are they separate relations or a single relation?15:31
jcastrohazmat, office hours please, something like: Office Hours (1600-1900UTC)15:31
jcastrojust tacked on at the end or whatever15:31
=== hazmat changed the topic of #juju to: http://j.mp/juju-florence http://j.mp/juju-docs http://afgen.com/juju.html http://j.mp/irclog Office Hours (1600-1900UTC)
marcoceppiWell, that's what I'm a little baffled about. Can you spin up multiples of the same service? Like have two separate MySQL services running independently of each other within the same bootstrap? Or would you simply add-unit15:33
hazmatmarcoceppi, you can have multiple mysql services in an environment, and if you do add-unit you'd be adding units to the existing service, something like mysql doesn't really support multiple units of the same service outside of a master slave setup, which i believe the charm models as two separate mysql services with a master/slave relation, and in that case you can add-unit for slaves15:44
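hazmat's description might translate into commands like these (printed, not executed; the service names and the master/slave relation endpoint names are assumptions about the mysql charm):

```shell
# Print an illustrative sequence: two independent mysql services,
# related master/slave, with add-unit growing the slave side.
mysql_steps=$(cat <<'EOF'
juju deploy mysql mysql-master
juju deploy mysql mysql-slave
juju add-relation mysql-master:master mysql-slave:slave
juju add-unit mysql-slave
EOF
)
echo "$mysql_steps"
```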
* hazmat grabs lunch15:48
jimbakerit's ironic that my UPS failed (which happened sometime last night) before i ever actually saw a power failure. in fact no power failures here in the 7 years i've lived here16:04
rogjimbaker: at least when your UPS fails you've still got mains power...16:08
jimbakerrog, indeed, i do have amazingly reliable and inexpensive power here. time to build a data center in the basement ;)16:13
jimbakerspeaking of which, there was an article i saw recently about using data centers to heat buildings16:13
rogin cloudinit.py, add_ssh_key states that at least one ssh key is required. why is that? wouldn't it be ok in theory to have a juju machine that wouldn't accept any incoming connections?16:14
rogjimbaker: i've heard that16:14
rogjimbaker: mind you, i'm not sure i'd situate a data centre next to peoples' house - they have a nasty habit of burning down.16:14
rogs/house/houses/16:14
jimbakerdata furnaces - http://www.nytimes.com/2011/11/27/business/data-furnaces-could-bring-heat-to-homes.html16:14
rogjimbaker: not gonna be very good in summer though...16:15
jimbakerapparently as long as the ambient air temp is < 95 deg F, it should work for the servers, just venting heat outside at that time, instead of inside16:16
jimbakerwith maybe the exception of one or two days a summer, that would be the case for my house16:16
rogjimbaker: it's a nice idea. i wonder how data protection laws would apply to the home owner...16:19
rogjimbaker: do you know the answer to the above question, BTW?16:19
rogjimbaker: i'm just wondering what would break if we allowed no ssh keys16:20
roghazmat?16:20
jimbakerrog, i don't know the answer to cloudinit above. maybe just fix cloudinit for this case?16:20
jimbakerworth taking a look at its codebase16:20
rogi'm not sure that cloudinit itself requires any ssh keys16:21
rogbut i may be mistaken16:21
* rog goes to check16:21
jimbakerinteresting related point about txaws not verifying ssl. didn't know this. sounds like another reason to move to boto, which does have such support16:21
rogboto?16:21
jimbakerit's pretty much the standard library for working with aws16:22
rogjimbaker: well, i'm using goamz :-)16:23
rogjimbaker: when you say "not verifying" do you mean it doesn't check the cert chain?16:23
jimbakerrog, unlike txaws, it has extensive support for nearly all of the aws api. its disadvantages are that it is blocking (but easy to workaround with deferToThread, just like what we do elsewhere in the python version of juju)16:23
jimbakerrog, that's what i understand from niemeyer's email16:23
niemeyerjimbaker: Have you checked if boto is testing for SSL certs?16:24
hazmatrog, the client needs ssh to work to connect to the bootstrap node16:24
niemeyerjimbaker: Also, it's not an easy transition16:25
jimbakerniemeyer, this is based on http://www.heikkitoivonen.net/blog/2009/10/12/using-m2crypto-with-boto-secure-access-to-amazon-web-services/16:25
roghazmat: so the bootstrap node needs to allow ssh. what about the other nodes?16:25
hazmatrog, they need it minimally to support ssh/scp/debug-hook commands16:26
roghazmat: ok, so nothing crucial relies on it. just wanted to check.16:26
niemeyerjimbaker: If you actually read the code there, you'll notice that it's not boto that is verifying the certificate16:26
niemeyerjimbaker: If you're going to fix it for boto, you can as well fix it for txaws16:27
hazmatrog, nothing internally no, except for client access to the bootstrap node16:27
jimbakerniemeyer, likely it's worth looking at both. again you raised a good point about txaws. at this point, i know that people use boto successfully in this way to verify ssl :)16:29
roghazmat: cool. so "You have to set at least one SSH key." should really be "The bootstrap node requires at least one SSH key. Without at least one SSH key, other nodes will not allow ssh access, e.g. ssh,scp, debug-hook" or something like that?16:29
jimbakertxaws has some users, but boto is used extensively16:29
jimbakerincluding with twisted as i understand it16:29
hazmatrog, those other subcommands are part of the interface juju exports16:30
hazmatjimbaker, boto is used by twisted?16:31
hazmater. with16:31
roghazmat: that's true, but they're not key to the infrastructure. i could imagine creating a high-security node that allowed no ingress. it wouldn't break anything to do that. thus "must" seems a bit strong.16:32
roghazmat: but YMMV of course16:32
jimbakerhazmat, i have seen boto + twisted in the openstack tests, for example16:32
hazmatrog, by that notion ssh keys wouldn't be required at all post a REST interface16:32
hazmatjimbaker, openstack doesn't use twisted anymore.. and its honestly its not a great example of proper usage of anything imo16:33
roghazmat: well, i guess REST implies some encryption 'cos we'd be using https, so yes, i'd agree.16:33
hazmatits getting better, but the codebase was originally a ball of mud with different styles and mix of sync/async  and library usage16:33
jimbakerhazmat, this was just in the tests. i know they are using gevent. probably want to find better examples before we decide anything :)16:33
hazmateventlet .. same difference though16:34
hazmatjimbaker, there are no twisted imports in nova16:34
roghazmat: i certainly think we should default to allowing ssh access, but i'm not sure it should be strictly required. and in fact, i don't see anything that checks the requirement now - it'll probably just work as is16:35
hazmatjimbaker, which tests?.. i don't know that we'll find many examples of twisted code bases using boto16:36
jimbakerhazmat, i will try to dig this up16:37
thervejimbaker, to throw some counter points 1) boto code base is relatively bad, and barely has any tests 2) most of the coverage of AWS API is useless to juju16:37
jimbakertherve, this is in fact a very good counterpoint16:37
jimbakerthe only thing we can say about boto is that it's heavily used across various impls of the EC2 api. whether or not that makes up for it, i don't know16:38
jimbakerthe best thing would be extensive usage in the wild + extensive unit testing16:38
jimbakeranyway, just bringing up as a possibility16:39
thervejimbaker, in the mean time, I'd be happy to help if there are problems with txaws :)16:40
hazmatjimbaker, i think i missed the context here, we're just talking about ssl cert checking16:40
roghazmat: no, it is checked.16:40
jimbakeri believe this in ref to https://bugs.launchpad.net/txaws/+bug/78194916:41
_mup_Bug #781949: Must check certificates for validity <txAWS:New> < https://launchpad.net/bugs/781949 >16:41
jimbakerwhich was filed by niemeyer16:41
therveI'll assign that to me16:41
jimbakertherve, thanks16:42
hazmatjimbaker, we have to be careful there, or at least only do it optionally, openstack setups don't necessarily have valid certs16:42
niemeyerjimbaker: Time machine..16:43
jimbakerhazmat, sounds good16:43
jimbakerniemeyer, what do you mean?16:43
niemeyer<jimbaker> which was filed by niemeyer16:44
niemeyer... back in May.16:44
jimbakerniemeyer, yes, so that's good right? i'm just pointing this out to ensure it's the right bug in question16:45
niemeyerjimbaker: Yeah, I'm not saying it's bad in any way.. it was a just a bad joke16:45
jimbakerniemeyer, cool16:46
jimbakerhazmat, standup today?17:02
SpamapSwe don't verify the amazon cert right now?17:10
jimbakerSpamapS, as i understand it, yes17:12
SpamapSthats quite serious IMO17:12
SpamapStheft of AWS credentials could be *VERY* costly17:12
jimbakerSpamapS, indeed. think of the possible botnets17:12
SpamapSjust thinking of the possible bill17:13
marcoceppiSo, could I simply symlink upgrade-charm hook to the install hook?17:20
m_3marcoceppi: don't see why not... as long as you change idempotency guards to handle any re-install logic needed17:24
marcoceppim_3: Right, I just realized my install hook is idempotent and since the upgrade never runs the install again it would pretty much be exactly what the install does with the exception of a few things17:24
m_3marcoceppi: might have to add a couple of things like an initial "apt-get update" that you wouldn't normally need17:25
marcoceppiGood point17:25
m_3marcoceppi: lp:charm/ceph is an example of this17:30
m_3w/o the additional apt-get update :)17:31
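An install hook written so that upgrade-charm can be a symlink to it might look like this sketch. `aptget` is a shell-function stub standing in for apt-get so the example runs anywhere, and the guard file and package name are invented:

```shell
#!/bin/sh
# Idempotent install hook sketch: safe to re-run from an upgrade-charm
# symlink. aptget stubs apt-get for illustration; GUARD is invented.
set -e
GUARD=$(mktemp -u)                 # e.g. /var/lib/myapp/.installed in a real hook
aptget() { echo "apt-get $*"; }

aptget update                      # cheap to repeat; wanted on upgrade
if [ ! -f "$GUARD" ]; then
    aptget install -y teamspeak-server   # hypothetical package
    touch "$GUARD"
fi
echo "install hook done"
# in the charm directory: ln -s install hooks/upgrade-charm
```

Re-running the hook skips the guarded install step, which is exactly what makes the symlink safe.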
marcoceppiCool I think this charm is about ready. Need to test it a bit more17:35
hazmatjcastro, this askubuntu thing is great ;-)17:35
hazmathmm.. i'm not sure if the unzip will respect symlinks or not17:36
m_3marcoceppi: awesome man!17:39
hazmatkoolhead17, you mentioned on askubuntu that you were able to bootstrap without a local ssh key.. afaik this isn't possible17:40
hazmatie. it should always error out if that's the case17:40
koolhead17hazmat: am trying to run juju on my existing openstack infra17:41
* hazmat nods17:41
koolhead17juju bootstrap gave no error17:41
koolhead17it was juju status which failed17:41
koolhead17with some ssh related error17:41
hazmathmm.. that's a regression17:41
koolhead17and i can see an instance running too17:41
koolhead17juju-default17:41
koolhead17i was wondering when the ssh access juju tries to acquire17:42
koolhead17verbose says with user ubuntu17:42
koolhead17what about the password?17:42
hazmatkoolhead17, it uses ssh keys everywhere, bootstrap should fail if there are no keys17:43
hazmatbefore launching an instance17:43
koolhead17by default images get password as "password" for user ubuntu and user root17:43
koolhead17on the ineiric cloud image17:43
koolhead17Oneiric17:43
koolhead17hazmat: i created a keypair and then i was able to pass juju bootstrap part17:43
koolhead17:D17:43
hazmatkoolhead17, juju doesn't reference api key pairs17:44
koolhead17hazmat: so what is the best way to get juju running inside openstack?17:44
koolhead17i had few other issues too.17:44
koolhead17like the instance gets acquired with internal IP17:44
hazmatkoolhead17, one moment, just trying to verify the key thing against an openstack install17:45
jcastrohi koolhead1717:45
koolhead17jcastro: hello sir. :)17:45
koolhead17i was looking for you17:45
jcastroawesome, I was looking for you, you go first!17:45
koolhead17about config related issue while running juju using openstack17:46
koolhead17i got it figured17:46
koolhead17:D17:46
marcoceppiApparently source isn't built into sh?17:46
koolhead17jcastro: like last release i would like to work with ubuntu server guide, let me know how can i help. last time i did final revision part17:47
jcastrokoolhead17, we could always use help writing charms17:48
jcastroand iirc at some point soonish hazmat will split the docs from juju itself17:49
koolhead17jcastro: hmm. i already have a few assigned but i had to work on something else17:49
jcastrothe docs could really use a review17:49
hazmatkoolhead17, just tried it without a key.. it does fail.. http://pastebin.ubuntu.com/752729/17:49
hazmatjcastro, i've gotten some push back from that.. it should probably get brought up on list17:49
jcastrook17:49
jcastroI'll bring it up17:49
hazmatjcastro, cool17:50
koolhead17hazmat: i had similar error and what i did was ssh-keygen  to generate key for my local user17:50
jcastromarcoceppi, m_3, SpamapS: incoming new charm, teamspeak.17:50
m_3marcoceppi: dash...  Use '. file.sh' instead of 'source file.sh'17:50
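A minimal, self-contained illustration of m_3's point: Ubuntu's /bin/sh is dash, which only implements the POSIX `.` builtin, while `source` is a bash/zsh extension. The helper filename here is made up for the example.

```shell
#!/bin/sh
# Hypothetical helper file, created just for this demonstration.
cat > /tmp/helper_example.sh <<'EOF'
GREETING="hello from helper"
EOF

# '.' works in dash, bash, and any POSIX shell; 'source' would fail in dash.
. /tmp/helper_example.sh
echo "$GREETING"
rm -f /tmp/helper_example.sh
```

Running this under `sh` (dash) prints the greeting; replacing `.` with `source` is exactly the failure marcoceppi hit.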
koolhead17and when i executed ensemble-bootstrap it worked17:50
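A sketch of the fix koolhead17 describes, generating a passphrase-less keypair so bootstrap finds an ssh key. The exact path juju searches is not shown in the log; the `id_rsa_example` filename is ours, chosen so the example does not clobber a real key.

```shell
# KEYDIR stands in for ~/.ssh on the machine running juju bootstrap.
KEYDIR="${KEYDIR:-$HOME/.ssh}"
mkdir -p "$KEYDIR"
chmod 700 "$KEYDIR"

# -N "" means no passphrase; -q suppresses the banner.
ssh-keygen -t rsa -b 2048 -N "" -f "$KEYDIR/id_rsa_example" -q

ls "$KEYDIR/id_rsa_example" "$KEYDIR/id_rsa_example.pub"
```

With a key in place, bootstrap can install the public half on new instances instead of failing as in hazmat's pastebin.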
koolhead17it's ensemble-status where am failing17:51
jcastromarcoceppi, oh hey was it you that was going to add new bling to the IRC bot? newly tagged "new-charm" announced here for review would be awesome.17:51
koolhead17and i know the reason17:51
hazmatkoolhead17, right but that error is that bootstrap won't work without a key.. which is different than status not working.. the latter is more that the key didn't get installed onto the instance17:51
m_3jcastro: just saw it pop up on the review queue17:51
jcastrorock and roll17:51
marcoceppijcastro: Yeah, and feed Ask Ubuntu questions into here17:51
jcastrom_3,  he's on a roll, he'll probably submit FTP tonight as well17:51
koolhead17hazmat: yes the latter part <ensemble status> where am stuck currently17:51
koolhead17does by default juju use user "ubuntu" and passwd "ubuntu"17:52
koolhead17to connect to instance17:52
koolhead17?17:52
hazmatkoolhead17, can you pastebin the euca-get-console-output for that instance17:52
hazmatkoolhead17, there are no passwords just ssh keys17:52
koolhead17hazmat: am home now. i will do that once in office17:52
hazmatkoolhead17, which release is the image btw?17:52
koolhead17hazmat: oneiric cloud 64 bit17:53
koolhead17tar.gz file17:53
hazmatkoolhead17, k, i'm trying a full run on openstack now17:53
koolhead17i use cloud-publish-tarball17:53
m_3jcastro marcoceppi: autofeeding askubuntu questions here would rock!17:55
m_3of course, answering them here would be pretty cool too, but that might be... tough17:55
jcastroyou can always take a good question and answer that you answer here and post it there17:56
jcastroas a self-documenting thing17:56
* m_3 is more geeking out about about mup-style integration17:57
mplhmm, isn't there a way to have your authentified irc nick be linked to your askubuntu account? some sort of openid mechanism. then there could be a bot here to which we could feed questions and answers.17:57
hazmatkoolhead17, reproduced17:57
koolhead17hazmat: cool and?17:58
hazmatkoolhead17, still debuggin17:58
koolhead17am running it on the same nova system17:58
koolhead17and added all openstack related credentials in config.yaml file17:58
marcoceppimpl: No write access on the Ask Ubuntu API yet, so unless you emulated a web browser, logged in and maintained a session for each user, etc (it would become messy quick)17:58
mpltoo bad.18:00
koolhead17hazmat: also is there a way to add cloud-init kind stuff in config18:01
koolhead17am saying this because when am running juju behind a proxy it will fail18:01
hazmatkoolhead17, short answer, no, you can put a bug in for http proxy support though18:02
hazmatnot sure how that's related to cloud-init re proxy18:02
koolhead17hazmat: i was giving an example :P18:02
hazmatso on openstack cloud-init finishes fine, but it doesn't seem to install the key18:02
koolhead17hazmat: i have not tried cloud-init18:03
koolhead17its the juju status18:03
koolhead17where am stuck18:03
koolhead17with ssh connection error18:03
koolhead17i will paste exact error once am in office tommorow18:03
koolhead17hazmat: juju starts an instance with juju-default but it's not able to ssh to it18:04
koolhead17after i execute juju status18:04
hazmatkoolhead17, yes, i understand the problem18:06
hazmatand i'm able to reproduce it18:06
koolhead17hazmat: cool. :)18:06
koolhead17i have 2 more issues which i was concerned about18:06
koolhead17by default when an instance starts it acquires a private IP18:06
koolhead17say 192.168.1.118:06
koolhead17we attach the instance to public IP18:07
koolhead17for communicating with outside world18:07
* hazmat nods18:07
hazmatkoolhead17, i do the same for interacting with our openstack cluster18:08
koolhead17now if i will use juju it means i need to have my internal IP connected to internet in order for juju to fetch pkgs18:08
koolhead17hazmat: but when i will say juju deploy mysql18:08
koolhead17it will go in background and do apt-get install mysqlserver18:08
koolhead17but since its not connected to internet it will fail18:08
marcoceppi_mup_18:08
koolhead17what is the way out18:09
hazmatkoolhead17, yes, but basic NAT traversal should allow for that, are you saying that the cluster has zero connectivity to the internet18:09
koolhead17hazmat: yes18:09
koolhead17the VM in internal nw18:09
koolhead17:-(18:09
hazmator that the openstack internal network isn't bridged? .. the fact that you can communicate at all with the bootstrap node suggests otherwise18:09
hazmatkoolhead17, being in an internal network is fine if it has outbound access18:10
* m_3 needs charm review snippets :)18:10
koolhead17hazmat: which means it has to have internet access ?18:10
koolhead17:D18:10
hazmatkoolhead17, that or an internal apt proxy and image customization18:10
hazmats/proxy/cache18:11
hazmatsame difference18:11
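A sketch of the apt cache setup hazmat is pointing at. Assumptions: an apt-cacher-ng style cache on its default port 3142; the hostname is invented, and the file is written to /tmp here rather than the real /etc/apt/apt.conf.d path.

```shell
# PROXY_HOST is hypothetical; on a real deployment this resolves to the
# internal apt cache reachable from the instances.
PROXY_HOST="${PROXY_HOST:-apt-cache.internal}"

# On an instance this would be /etc/apt/apt.conf.d/01proxy.
CONF="${CONF:-/tmp/01proxy}"
cat > "$CONF" <<EOF
Acquire::http::Proxy "http://${PROXY_HOST}:3142";
EOF
cat "$CONF"
```

With this in place, `apt-get install` on an instance with no outbound internet pulls packages through the internal cache instead.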
marcoceppiWhich bot is _mup_?18:11
koolhead17hazmat: well am talking about 2 things differently18:12
hazmatmarcoceppi, launchpad.net/mup18:12
koolhead171. my cluster should know about proxy-server when am trying to orchestrate via Juju18:13
koolhead172. about this private IP and its necessity to have internet connection.18:13
hazmatkoolhead17, so it took some time but the key was eventually installed on the bootstrap instance18:13
koolhead17hazmat: it failed in my case. :)18:14
hazmatkoolhead17, i wonder if it just takes some additional time18:14
koolhead17hazmat: you will have to share your yaml file with me then. :P18:14
koolhead17juju bootstrap does work without error18:15
hazmatkoolhead17, sure, and that means the instance is started, and that an ssh key was found18:15
koolhead17its juju status where i fail because of ssh credentials18:15
koolhead17hazmat: +118:15
hazmatkoolhead17, right, i had the same problem, but it did work after i waited a few minutes, i'm still not clear why since cloud-init had finished18:16
hazmatkoolhead17, which version of openstack are you using?18:16
koolhead17diablo18:16
koolhead17from ubuntu repository18:16
hazmatkoolhead17, i think it has something to do with a delay in associating the public-address to the instance18:16
hazmatkoolhead17, because the ssh host fingerprint changed18:16
koolhead17hazmat: the "-v" showed its associated with ip and instance ID18:17
hazmatkoolhead17, i know.. this is internal to openstack18:17
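A side effect of the changed host fingerprint hazmat mentions: a reused floating IP leaves a stale entry in known_hosts that blocks ssh. A sketch of clearing it, using a scratch known_hosts file and a documentation IP so the example is self-contained.

```shell
# Scratch known_hosts standing in for ~/.ssh/known_hosts.
KH="${KH:-/tmp/known_hosts_example}"
printf '198.51.100.7 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDstale\n' > "$KH"

# -R removes all keys for the named host from the given file.
ssh-keygen -R 198.51.100.7 -f "$KH" >/dev/null 2>&1

# The stale entry is gone; grep finds nothing (exit 1, swallowed here).
grep -c '198.51.100.7' "$KH" || true
```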
koolhead17hazmat: what would you suggest and how should i proceed18:18
hazmatit shows the address correctly on the metadata, but the action of connecting the instance to the ip address is derived off an async activity18:18
hazmatkoolhead17, offhand i'm not sure outside of waiting, the question is verifying that the ip address is connected to the right instance, which means verifying openstack internal state18:20
hazmatasking on #openstack18:21
koolhead17hazmat: can i trouble you tomorrow once am in office18:21
koolhead17hazmat: so you're saying you are able to run/execute juju status without failing from that ssh related issue18:21
hazmatkoolhead17, yes18:22
hazmatkoolhead17, i had the issue, i waited, it went away18:22
jcastromarcoceppi, I forgot a totally obvious service but added it to the spreadsheet, status.net aka. run your own twitter18:22
koolhead17hazmat: one more thing18:22
jcastrothat would be useful for organizations that want the benefits of microblogging but for internal reasons18:22
koolhead17does juju status result in connecting and getting some info from an external metadata server18:22
marcoceppijcastro: oh, duh! good call18:23
hazmatkoolhead17, it talks to the nova api endpoint not the instance metadata server18:24
hazmatkoolhead17, cloud-init does talk to the instance metadata server18:24
hazmatbut that's independent of status18:24
koolhead17hazmat: my internal nw has no internet in it.  :)18:25
koolhead17can that be the reason for the fail18:25
hazmatkoolhead17, possibly.. i find that likelihood very strange though..  inbound access and outbound access via a NAT are different concerns18:26
koolhead17hazmat: i have to read much further and find some more things :)18:27
negronjlm_3: ping18:56
m_3negronjl: yo18:56
m_3ssup?18:56
negronjlm_3:  I just updated and tested the new mongodb charm with replicaset in oneiric but, I need another set of eyes to ensure I am not crazier than usual.  When you get a chance ( no rush ) can you test it.18:57
negronjl?18:57
negronjlm_3: In the meantime, can you tell me ( again ) where your hadoop charm for oneiric is ?  I am going to consolidate them all into one charm .18:58
m_3negronjl: sure thing on reviewing the mongodb charm18:58
m_3negronjl: lemme look to see that the latest is in trunk on the hadoop charms18:58
negronjlm_3:  ok on the hadoop thing ...18:59
negronjlm_3:  let me know if you need help testing the mongodb thing ( commands and such )18:59
SpamapSspeaking of oneiric ...18:59
SpamapSseems like we missed a huge set of work items at UDS18:59
negronjlSpamapS: do tel18:59
SpamapSwhich is.. develop or coopt tools to do releases18:59
negronjlSpamapS: do tell18:59
negronjlSpamapS: tools like charm create you mean ??19:00
SpamapSLike, we need to copy all of the branches from oneiric -> precise19:00
SpamapSand automatically backport new stuff from precise -> oneiric19:00
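A dry-run sketch of the branch copy SpamapS describes. The charm names and the exact lp: target layout are illustrative, not the real set; nothing is actually pushed here, the commands are only printed.

```shell
# Print the bzr commands that would copy each charm's oneiric branch
# into a new precise series branch.
CMDS=$(for charm in mysql wordpress hadoop-master; do
  echo "bzr branch lp:charm/oneiric/$charm $charm"
  echo "bzr push -d $charm lp:~charmers/charm/precise/$charm/trunk"
done)
echo "$CMDS"
```

The backport direction (precise -> oneiric) is the harder half, since, as SpamapS notes below, it needs a way to stop auto-backporting when compatibility breaks.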
m_3negronjl: lp:charm/hadoop-{master,slave} have everything for oneiric19:01
negronjlm_3: thx19:01
m_3negronjl: the hard part is making sure we have a single ppa or other repo that is consistent across natty and oneiric (we don't currently)19:01
m_3negronjl: ppa:canonical-sig works for natty... ppa:mark-mims/hadoop works for oneiric19:02
negronjlm_3:  To test the sanity of the "one hadoop charm to rule them all" theory, I'll start with an if/then on the release to figure out which ppa to use... I'll work on the one ppa later.19:02
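A sketch of the series-conditional PPA selection negronjl proposes. The mapping mirrors the PPAs m_3 names in this exchange; the function name is ours.

```shell
# Map an Ubuntu series to the hadoop PPA that works for it.
hadoop_ppa() {
  case "$1" in
    natty)   echo "ppa:canonical-sig/thirdparty" ;;
    oneiric) echo "ppa:mark-mims/hadoop" ;;
    *)       echo "unsupported series: $1" >&2; return 1 ;;
  esac
}

# On a real unit the series would come from: lsb_release -cs
hadoop_ppa natty
hadoop_ppa oneiric
```

In the charm's install hook the result would feed `apt-add-repository` before `apt-get update`.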
negronjlSpamapS: We can still do this but, I think it'll have to be done manually at first19:02
m_3negronjl: I was planning on building all the dep projects into ppa:mark-mims/hadoop for natty,19:02
negronjlm_3:  The natty one is already in LP ... let me get it for ya19:03
m_3negronjl: but we should get the right "partner" repo working correctly19:03
m_3for both!19:03
SpamapSnegronjl: ugh. I really don't want to get into a situation where we are manually doing anything other than merges of conflicting changes19:04
negronjlm_3: that shouldn't be hard as long as both packages work the same19:04
hazmatnegronjl, did you publish the new charm to lp?19:04
m_3negronjl: there's a blueprint for packaging the whole bigtop stack BTW...19:04
hazmatnegronjl, re mongodb19:04
negronjlhazmat: i did19:04
negronjlhazmat: lp:charm/mongodb19:04
hazmatnegronjl, where ? ah.. it needs to be under charm/oneiric/mongodb for the store or charm browser to find it19:04
hazmatie. its missing the series19:04
hazmater.. actually... charm/oneiric/mongodb19:05
negronjlhazmat: I just updated the lp:charm/mongodb so, if you saw it an hour ago, you should see it just fine now.19:05
negronjlhazmat: I'll double-check the placement in LP ... please hold19:05
hazmatnegronjl,  maybe i'm missing something.. i see charm/thinkup19:05
hazmat and oneiric/postgresql19:05
negronjlhazmat: https://code.launchpad.net/~charmers/charm/oneiric/mongodb/trunk19:05
m_3SpamapS: is the backport process anything more than commit hooks atm?19:06
jcastronegronjl, can we catch up this week wrt. the charms you are working on? I need a sync up. (Not high priority)19:06
hazmatnegronjl, maybe this was just very recent (less than 15m) ?19:06
negronjlhazmat: more like 3mins ago :P19:06
hazmatnegronjl, ah.. that would explain it19:06
negronjlhazmat: you're so 3 minutes ago :D19:06
SpamapSm_3: commit hooks would be too automatic.. we need a way to say "I just broke backward compatibility" and stop auto-backporting of a branch.19:06
negronjlSpamapS: I understand that we want to do this automatically but, I for one, have no idea of all the implications of this and, by doing it manually once, we should be able to learn from the mistakes that we make, issues, etc. and create a "sane" automated process for this.19:08
m_3negronjl: so I still have a build VM set up... can dput the packages to a new repo pretty easily I think... just lemme know19:08
negronjlm_3:  Let me add you to the existing repo ... let see if it works that way ... hold on19:08
m_3no merge conflicts => "it's golden, ship it" ;)19:09
SpamapSnegronjl: maybe the answer is to open the precise series, without actually making it the dev focus.. so lp:charm would still be oneiric until precise releases....19:09
SpamapSmeh19:10
SpamapSsee we need a face to face for this19:10
negronjlSpamapS: G+ ?19:10
m_3negronjl: it might be best to _start_ with your plan of conditional repos based on series... get one charm to rule them all... then get one repo to rule them all as a next step.  gets us to working state soonest I think19:10
SpamapSMaybe we should drop the notion of the series altogether. If the charm is in lp:charm, it needs to work on all supported Ubuntu releases...19:11
negronjlm_3:  I'll take a look at it either way and see where the path of least resistance is19:12
SpamapSand if we just can't do that for some release of Ubuntu.. then we can create a series-specific branch for that charm and that release.19:12
negronjlSpamapS: I think that's the ultimate goal but I also think that the lp:charm charms will need to be heavily reviewed ( modified ? ) by us to ensure that we work all of the kinks out19:12
SpamapSnegronjl: s/reviewed/tested/19:13
SpamapSThats where the automated testing bits come in19:13
marcoceppiDoes the db-admin relation work?19:13
marcoceppion the MySQL charm19:13
negronjlmarcoceppi: It does ... I've used it19:14
* m_3 fantasizes about automated testing...19:14
* SpamapS tosses a bucket of cold water on m_319:15
SpamapSfocus!19:15
m_3marcoceppi: haven't reviewed/tested your changes yet if that's what you mean19:15
* m_3 didn't say "humps the leg of automated testing..."19:16
negronjlmarcoceppi: ahh ... didn't know if you were asking about changes that you may have made to the mysql charm ... I haven't tested any changes but, in the not so distant past, I have used the mysql-admin interface.19:17
negronjlmarcoceppi: ... and that worked.19:17
marcoceppiI wasn't necessarily, all I did was change the metadata.yaml file to give db-admin a mysql-root interface19:17
marcoceppisince db-admin and db are the same hook file19:17
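A self-contained illustration of the shared-hook pattern marcoceppi describes: two relation hooks are one script, symlinked, that branches on the name it was invoked as. Hook names follow the juju convention; the echoed actions are placeholders.

```shell
HOOKDIR=$(mktemp -d)

cat > "$HOOKDIR/db-relation-joined" <<'EOF'
#!/bin/sh
# $0 tells the shared script which relation fired it.
case "$(basename "$0")" in
  db-admin-relation-joined) echo "grant root-ish access" ;;
  db-relation-joined)       echo "grant per-database access" ;;
esac
EOF
chmod +x "$HOOKDIR/db-relation-joined"

# db-admin-relation-joined is just a symlink to the same file.
ln -s db-relation-joined "$HOOKDIR/db-admin-relation-joined"

"$HOOKDIR/db-relation-joined"
"$HOOKDIR/db-admin-relation-joined"
```

Running each name takes a different branch, which is why changing only metadata.yaml (as marcoceppi did) is enough to expose the second relation.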
negronjlmarcoceppi: I'll be working on that charm soon ( to use it for CloudFoundry as opposed to a cf only one ) so, if I find any weirdness, I'll let you know19:18
marcoceppiI just got a state error when adding a relation to db-admin, was looking for an excuse to not be my fault19:18
negronjlmarcoceppi: lol ..... aren't you the new guy ??? then it's always your fault :)19:20
marcoceppihum, does db-admin need to create a database?19:21
negronjlm_3:  https://launchpad.net/~canonical-sig/+archive/thirdparty  <---- natty hadoop packages19:21
negronjlmarcoceppi: i don't think so,  it should be for you to get root to a db19:21
marcoceppiCool, I'll update that as well19:21
negronjlmarcoceppi: ahh ... I now remember why I made another charm for mysql19:21
negronjlmarcoceppi: mysql-admin gives you root to a db only ... CloudFoundry needs root to it all19:22
marcoceppiwhat. what is the point of that?19:22
marcoceppiI guess there were more changes needed than I thought19:23
negronjlmarcoceppi: the point of what ?19:24
marcoceppihaving a db-admin if you don't get an admin account?19:24
m_3negronjl: thanks... I'll try a dput and let you know19:25
negronjlmarcoceppi: you get admin of a particular DB19:25
m_3negronjl: so there were lots of other repos I had to add b/c of build deps (listed http://goo.gl/n5T2i)19:29
m_3negronjl: those'll have to be added as well for oneiric19:29
negronjlm_3:  ok ... hit it19:29
* negronjl crosses fingers and prays that it works 19:29
* m_3 does too19:30
_mup_Bug #897360 was filed: Separate docs from source <juju:Confirmed> < https://launchpad.net/bugs/897360 >19:36
hazmatniemeyer, any thoughts re my comments on GC19:37
niemeyerhazmat: haven't read them yet19:37
m_3negronjl: dputs rejected: "Signer has no upload rights to this PPA."19:42
negronjlm_3: what's your lp username ?19:42
m_3mark-mims19:42
negronjlm_3:  the group belongs to zaid_h and I don't have access either ... I'm trying to get you access ....19:45
negronjlm_3:  if that doesn't work, maybe we can do it the other way around and I'll dput the packages from canonical-sig into your ppa19:46
niemeyerhazmat: Didn't get anything in my inbox regarding the issue?19:47
hazmatniemeyer, interesting i don't see it on the mp19:48
hazmatoh.. i have to setup my postfix post ssd install19:49
hazmatdoh19:49
m_3negronjl: I'll have to clone up a new natty instance and rebuild the packages there...19:49
hazmati was wondering why  i haven't gotten any responses to emails recently19:49
niemeyerhazmat: Cool, np19:50
niemeyerhazmat: I'm stepping out for some exercising, but will check when I'm back19:50
hazmatniemeyer, cheers19:50
m_3negronjl: lemme do other charm stuff while waiting for Zaid...if he doesn't get back later today I'll spin up the natty builds.  rather not have to go down that rabbit-hole if we don't have to.  Do you need this now for anything?19:52
negronjlm_3: not at all .. no rush19:52
m_3cool man... thanks19:53
m_3hazmat: your mail from last week is being delivered19:55
hazmatm_3, yeah.. just flushed the system19:55
hazmatused a clean install on my new ssd, forgot to set up my postfix properly.. but its pretty rockin, the ssd that is19:56
negronjlm_3: no rush ... you got access:  https://launchpad.net/~canonical-sig/+members#active19:56
m_3negronjl: thnks19:57
* negronjl is out to lunch19:58
_mup_juju/expose-refactor r421 committed by jim.baker@canonical.com20:49
_mup_Merged trunk20:49
robbiewSpamapS: call time?21:00
jcastrom_3, hey so, no worries21:06
jcastrobut another incoming charm. :)21:06
marcoceppiIf someone has time to review https://code.launchpad.net/~marcoceppi/charm/oneiric/mysql/db-admin-relation-fix/+merge/83690 I'd be greatful21:20
m_3jcastro: cool... on it21:31
marcoceppiWas there ever a decision on what to do about cryptographic/source checks when being checkedout from a remote repository (git, svn, etc)?21:35
hazmatmarcoceppi, i think if its via a secure channel it was fine21:50
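A sketch of the "hashing them ourselves" idea that comes up again later in this log: an install hook verifying a source tarball against a pinned checksum. A local file stands in for the download, and EXPECTED is computed in place only to keep the example self-contained; in a real charm it would be a literal baked into the hook at review time.

```shell
TARBALL="${TARBALL:-/tmp/upstream-1.0.tar.gz}"
printf 'pretend tarball contents\n' > "$TARBALL"

# Normally a hard-coded hex string pinned in the install hook.
EXPECTED=$(sha256sum "$TARBALL" | awk '{print $1}')

ACTUAL=$(sha256sum "$TARBALL" | awk '{print $1}')
if [ "$ACTUAL" = "$EXPECTED" ]; then
  echo "checksum ok"
else
  echo "checksum mismatch, refusing to install" >&2
  exit 1
fi
```

This covers the insecure-transport case; hazmat's point stands that a secure channel (https, ssh) already gives you most of this.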
hazmatniemeyer, can lbox setup the reitveld for an existing branch?21:51
* negronjl is back21:52
m_3negronjl: I think that the ppa:canonical-sig/thirdparty is going to work21:53
m_3negronjl: it's still building, but it looks like it'll succeed21:53
negronjlm_3: cool ...  it should make things simpler :)21:53
m_3much21:53
m_3we'll need to test it tho21:53
m_3negronjl: ok, it finished21:57
negronjlm_3:  I'll start testing a bit later21:57
m_3cool... me too21:58
negronjlm_3:  I'm changing the hadoop charms so you can specify your own MapReduce job ... I am building on top of yours.  I'll share later when I am done testing but, it should make the charm usable with any MapReduce job that can be downloaded21:59
m_3negronjl: yeah, I was going to talk to you about that... it was built for the demo but should totally be generalized22:04
m_3negronjl: config has lots of specific stuff too22:05
negronjlm_3: I think I am going to follow your logic of having a script .... Maybe in the config.yaml file, I'll just have a var that is a script that will do everything that is needed.  Each job is pretty unique and I don't think I'll be able to generalize enough to make it work for a big chunk of jobs out there.22:06
m_3negronjl: perhaps a url for pulling the MR jar even?22:06
negronjlm_3: I'll have an example and we can all work on it from there ....22:06
m_3negronjl: we can also remove the job-related scripts too... whatever makes sense22:06
negronjlm_3: removing the job-related stuff ... sure ... I am also toying with the idea of creating a new interface that will provide the job ... not sure how yet but, it would give us the hadoop charm and then we can have something separate ( a MapReduce charm ?? ) that will actually execute the job ( or provide the necessary stuff so the master can execute the job ).22:07
negronjlm_3: many ideas to play with ... I'll do some stuff and we'll see where I end up.22:08
negronjlm_3: if you have ideas, let me know too :)22:08
m_3cool... happy to help, just lemme know22:08
m_3maybe g+ sometime after the first pass22:08
negronjlm_3:  will do22:09
SpamapSWhat does one do if they don't have an automated install/config of HDFS/Hadoop ?22:12
SpamapSssh in and scp your .jar's manually?22:12
negronjlSpamapS: yes22:12
SpamapSinteresting22:12
negronjlSpamapS: but, the majority of people that use MapReduce often have scripts that do this stuff22:12
negronjlSpamapS: My idea is to provide a mechanism by which people can put their scripts in a charm ( interface or otherwise ) so, they can use the charm to deploy hadoop but still have their scripts to run their jobs22:13
negronjlSpamapS: IMO it's a more realistic use of the charm22:13
m_3negronjl: very similar to a capistrano integration with a charm22:13
negronjlm_3: not sure about that ... I have two options that I am about to investigate :22:14
negronjlm_3: 1. In the charm, provide a variable where the charm will download and execute a script ( dangerous ?? )22:15
negronjlm_3: 2.  Provide an interface where, upon relation, will do the same22:15
negronjlm_3: two sides of the same coin ... they both will end up executing arbitrary code22:15
m_3I had always imagined option 1, but that doesn't really mean anything... just haven't thought too much about it22:15
m_3can totally see long-running hadoop services22:16
negronjlm_3: Option 1 is probably the easiest to implement but, what I really want to do is completely decouple the charm from the job22:16
m_3and short-running job services that attach, run, then detach22:16
negronjlm_3:  Ideally in option 2.  I can clean up the job left-overs upon breaking the relationship22:17
m_3that's a new angle too btw... good to blog about22:17
negronjlm_3: this way the same hadoop cluster can be used for multiple jobs22:17
m_3negronjl: yup22:17
negronjlm_3:  It all sounds cool  .... IF WE CAN GET IT TO WORK :P22:17
* m_3 grins22:17
m_3marcoceppi: yo22:28
marcoceppim_3: hey22:28
brunopereira81hey22:28
m_3marcoceppi: I assume the phpmyadmin package is broken?22:29
marcoceppiI guess?22:29
m_3brunopereira81: hi22:29
marcoceppim_3: I opted to use source, because source is cooler than apt. And it appeared there was an issue with deb-conf22:30
m_3marcoceppi: ah, yeah that's what I was wondering22:30
marcoceppiIs that okay? I thought charms kind of favored source over debs since the repo tends to lag behind a little22:31
m_3it's okay for a charm22:32
m_3charms should prefer packages over source22:32
m_3for all sorts of reasons22:32
m_3I'd say if the package works, use it... otherwise pull from source if the package is broken22:32
m_3depends on what you want with the charm though22:33
m_3for charms in general, anything goes22:33
m_3for charms in lp:charm that we're committing to maintain, etc, etc...22:33
marcoceppigotchya22:34
m_3SpamapS: you have an opinion on this?  i.e., pulling from source when a package is available22:34
marcoceppiI've been setting up RSS feeds for each of the upstreams that I've made charms for. So I knew when to update the charm22:34
SpamapSsorry I was making a red bull run to 7-11 .. reading22:34
m_3marcoceppi: cool, yeah, that's important... especially if they're not publishing hashes and we're hashing them ourselves22:35
marcoceppiyeah22:35
* marcoceppi shakes fist at mojang22:35
SpamapSin general, repo > ppa > source because you have better integration and testing22:35
m_3condition for acceptance into lp:charm?22:36
SpamapSrepeatability and providing updates is a tough one there22:36
SpamapSif your charm does a source install, it needs to make it easy to update the software.22:37
m_3marcoceppi: does the package work in this case or is it really broken?22:37
SpamapSI was thinking actually that we could do that with a convention of having source-version as a config key22:37
marcoceppiSpamapS: Should that be implemented in the charm-update hook?22:37
SpamapSupgrade-charm, imo, should almost always just call install ;)22:37
marcoceppim_3: I haven't tested personally for the deb-conf errors. The package installs for local mysql22:38
SpamapSso, default: would be the version you tested it with..22:38
m_3wow, source-version config key implies a lot of boilerplate work in every charm22:38
brunopereira81m_3: teamspeak3 installs and runs on local repo, have some time to debug that part? Rest of the suggestions will be implemented and updated asap (thx for the review) but at the moment the "not spinning up" is my main prio I would say.22:38
SpamapSor if there are security updates upstream, the version with the updates.22:38
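A sketch tying together the two conventions under discussion: a pinned `source-version` config default, and an upgrade-charm hook that just re-runs an idempotent install. Everything here (version number, echoed message, temp layout) is illustrative.

```shell
CHARM=$(mktemp -d)
mkdir -p "$CHARM/hooks"

# The default is the version the charm was actually tested against.
cat > "$CHARM/config.yaml" <<'EOF'
options:
  source-version:
    type: string
    default: "3.4.9"
    description: Upstream release to download and install.
EOF

cat > "$CHARM/hooks/install" <<'EOF'
#!/bin/sh
echo "installing (would fetch the configured source-version here)"
EOF
chmod +x "$CHARM/hooks/install"

# upgrade-charm just defers to install, per SpamapS's suggestion.
cat > "$CHARM/hooks/upgrade-charm" <<'EOF'
#!/bin/sh
exec "$(dirname "$0")/install"
EOF
chmod +x "$CHARM/hooks/upgrade-charm"

"$CHARM/hooks/upgrade-charm"
```

Bumping `default:` (or having an admin override it) becomes the single prescribed way to update the software, which is the repeatability property SpamapS asks for.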
SpamapSm_3: charm-helper :)22:38
m_3SpamapS: right22:39
m_3brunopereira81: sure... sounds good22:39
SpamapS. /usr/share/charm-helper/sh/updates.sh22:39
SpamapSactually22:39
SpamapS. /usr/share/charm-helper/sh/config-source-version.sh22:40
m_3brunopereira81: gimme a sec to finish up this review22:40
SpamapSthis is why I'd say just use packages22:40
marcoceppiSpamapS: makes sense22:40
SpamapSif the phpmyadmin package isn't behaving properly, then its buggy and we should fix it22:40
marcoceppiI22:40
marcoceppiI'll take a look at it and update the charm if needed22:40
SpamapSI recall there being issues with dbconfig-common and the way it works...22:41
SpamapSbecause you have to have a valid db connection before it finished configuring..22:41
SpamapSmarcoceppi: the charm iself doesn't have to do anything to update the software.. but there needs to be a prescribed, single way to update/patch the software22:44
SpamapSAnd since packages are *built* to do that, using a PPA (owned by charmers) seems the logical way to get that done.22:45
marcoceppiSpamapS: Well if the install hook always has the method of getting the latest version (whether via update/upgrade or by source/compile) having it be idempotent and having upgrade-charm call it makes sense22:45
marcoceppiOkay, so that statement really is more on the side of source/compile22:45
SpamapSmarcoceppi: I don't like the idea of it always getting the current version22:46
SpamapSmarcoceppi: that means you change behaviors whenever upstream releases.22:46
SpamapSand its entirely possible the PHP or something else that you have stops working.22:46
SpamapSmarcoceppi: so, a default version, with a way to override it, makes sense.22:47
marcoceppiAs in a config hook? use_upstream22:47
marcoceppiare config hooks made available in the install hook?22:47
m_3perhaps separate charms?  phpmyadmin-latest -vs- phpmyadmin22:47
SpamapSmarcoceppi: config-get is always available, so you can just call config-changed22:48
marcoceppiWell, when I say current (latest) I mean the latest available in the install hook. so not latest.tar.gz, but x.y.z.tar.gz. If a new release is available then the maintainer of the charm should update the install hook, test, then upload22:49
marcoceppithen upgrade-charm hook should be written in a fashion that either creates an upgrade path, etc22:50
marcoceppiI just talked myself out of an argument. Because this is starting to sound like a PITA22:50
marcoceppiSo, in this case, phpMyAdmin package is broken. I should repackage in a PPA owned by charmers?22:51
SpamapSmarcoceppi: ;)22:51
SpamapSmarcoceppi: you just reinvented dpkg!!22:51
SpamapSwell done22:51
* SpamapS is always reinventing things that have existed for years.. its like a game22:52
marcoceppiheh, I'm really good at it :)22:52
SpamapSmarcoceppi: I don't think the packaging is broken.. we just need to put some thought into it.22:53
marcoceppiI'll take a look at it again when I get home22:53
* marcoceppi heads home22:53
SpamapSmarcoceppi: the long standing bug with dbconfig-common is that you must have mysql running before you can enter config details about your mysql connection to phpmyadmin..22:53
marcoceppiCouldn't you just setup a local MySQL instance, fake a complete setup, then manually inject settings to config.inc.php?22:54
SpamapSmarcoceppi: it fails configuration otherwise. So.. the answer to that is, don't let dpkg configure phpmyadmin until you've related to a mysql server.22:54
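A sketch of "don't let dpkg configure phpmyadmin": preseed debconf so dbconfig-common is skipped, then let the relation hooks write config.inc.php themselves. The preseed is written to a temp file here; the two debconf keys are the standard dbconfig-common and phpmyadmin template names, but treat the exact values as an assumption to verify against the package.

```shell
PRESEED="${PRESEED:-/tmp/phpmyadmin.preseed}"
cat > "$PRESEED" <<'EOF'
phpmyadmin phpmyadmin/dbconfig-install boolean false
phpmyadmin phpmyadmin/reconfigure-webserver multiselect apache2
EOF

# On the instance (not run in this sketch):
#   debconf-set-selections < "$PRESEED"
#   DEBIAN_FRONTEND=noninteractive apt-get install -y phpmyadmin
cat "$PRESEED"
```

This sidesteps the chicken-and-egg problem: no live mysql connection is needed at package-configure time.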
marcoceppiAs it stands now, the charm can accept multiple db-admin relations if you have multiple mysql charms running in an environment22:54
marcoceppiI'm not sure how dbconfig-common will handle that22:54
SpamapSphpmyadmin only allows one database right?22:55
marcoceppinope22:55
marcoceppiyou can setup multiple server relations22:55
SpamapSwell then dbconfig-common is probably the wrong solution to configuring phpmyadmin22:55
SpamapSmarcoceppi: head home, I'm going to look at the package now.. it sounds like we may be able to just ignore dbconfig-common entirely.22:55
marcoceppiSpamapS: \o/22:56
SpamapSas much as I despise phpmyadmin.. its a godsend for some people. :)22:56
marcoceppiyeah22:56
marcoceppiI think it'll be nice for juju, if you want to spin up really quickly to investigate mysql :)22:56
SpamapSwe should support the shared-db relation in the charm too22:57
* m_3 likes that it can be spun down just as quickly :)22:57
SpamapSso you can give people a web instance to inspect a single database22:57
m_3yeah, there's real use for that case22:58
marcoceppiSpamapS: Good point, I'll take a look at that too, tonight22:58
m_3with a read-only config option?22:58
* marcoceppi heads to the metro22:58
m_3marcoceppi: later man... charm looks great in general22:59
SpamapSm_3: that would be cool23:00
brunopereira81m_3: after deploy state: started and I can connect to the server with the client and use the pre-set admin token, restart service is implemented and metadata is fixed on next push23:03
brunopereira81m_3: any idea where it might have gotten stuck at?23:03
m_3marcoceppi: tried to capture most of these suggestions into the review23:09
m_3brunopereira81: ok, lemme back up and recycle my env23:10
brunopereira81;)23:10
* m_3 needs more than 2 ec2 accounts23:10
negronjlm_3:  We're gonna have a hadoop-mapreduce charm23:26
negronjlm_3:  This charm will be in charge of the job ( setup, execution, reporting, cleanup, whatever ).23:27
negronjlm_3: After this, we can then deploy another mapreduce charm to the same hadoop cluster and have it work just fine as the previous mapreduce charm should have cleaned everything up.23:27
m_3negronjl: I like it23:28
negronjlm_3: After a while, I'll be working up multiple mapreduce jobs per cluster23:28
m_3negronjl: multiple same time?23:28
m_3ha23:28
negronjlm_3:  Not that I've ever done that but, I figure I would try anyway :)23:28
m_3really fits in nicely with the whole HaaS thing23:28
negronjlHaaS ??23:28
m_3at hadoopworld, there were plenty of big folks23:28
m_3like jpmorgan23:28
negronjlHadoop as a Service ?23:28
m_3who have a hadoop services group23:28
m_3yeah23:28
negronjlm_3: cool.... we'll be able to easily provide that :)23:29
m_3they provide hadoop services to different business units23:29
negronjlm_3: I just need to make a new mapreduce job ... terasort is getting kind of old23:29
m_3drove hdfs security patches and similar23:29
negronjlm_3:  any ideas ?23:29
m_3I bet!23:29
m_3the cisco talk had a bunch of cool benchmarks23:30
m_3lemme dig it up23:30
m_3brunopereira81: ping23:30
negronjlm_3:  any idea where I can get the text for the bible ?23:30
m_3hmmmm... nope23:30
brunopereira81m_3 found it?23:30
negronjlm_3:   maybe I can use that as in_dir and mapreduce something23:30
m_3probably part of the gutenberg project :)23:30
m_3brunopereira81: was PMing you23:31
_mup_juju/expose-refactor r423 committed by jim.baker@canonical.com23:31
_mup_Renamed juju.state.expose to juju.state.firewall23:31
m_3brunopereira81: so my ts3 daemons aren't starting23:31
m_3you have a 'service stop' at the end of the install hook23:31
m_3but a 'service start' in start hook23:32
m_3what should be the initial state of the service?23:32
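A self-contained sketch of the hook lifecycle m_3 is probing: install leaves the daemon stopped, and it only comes up when the start hook fires. The `service` function and "ts3server" name are stand-ins for the real teamspeak bits.

```shell
STATE=$(mktemp)

# Fake 'service' command: records the last action instead of touching init.
service() {
  echo "$2" > "$STATE"
}

install_hook() {
  echo "unpacking ts3server..."   # placeholder for the real install work
  service ts3server stop          # don't serve until juju says start
}
start_hook() {
  service ts3server start
}

install_hook >/dev/null
start_hook
cat "$STATE"
```

If the start hook never runs (or fails silently), the service stays in the stopped state install left it in, which matches the "daemons aren't starting" symptom.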
negronjlm_3: thx got it.23:32
m_3negronjl: figured it'd be kinda funny if the gutenberg proj didn't have a bible23:33
SpamapScrap reading about terasort reminds me that I 'm still supposed to record a screen cast of the hadoop thing23:35
* SpamapS *HAAAAAAATES* screen casting. :-P23:35
negronjlSpamapS: What is your deadline on that ?23:36
negronjlSpamapS: ahh .. does it has to be on the ODS thing ?23:37
SpamapSyeah it needs to be the same format23:42
SpamapSactually its a lot harder to do with EC2 or even canonistack23:42
SpamapShaving a local openstack was nice. ;)23:42
SpamapSlocal provider works, but looks weird because there's only 1 machine23:42
* SpamapS ponders setting up an openstack on his laptop to make it look more interesting.23:43
negronjlSpamapS: ahh ... I'm working on the hadoop-mapreduce charm ... thought you could use it but it'll be done soon enough and I'm sure you'll be dying to make another screencast :P23:43
_mup_juju/trunk r418 committed by jim.baker@canonical.com23:58
_mup_merge expose-refactor [r=bcsaller,fwereade][f=873108]23:58
_mup_Refactors the firewall mgmt in the provisioning agent into a separate class.23:58

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!