[08:46] mornin' [09:16] hi rog [11:13] Hello all! [11:14] Hello. [11:20] niemeyer: welcome home! [11:24] rog: Thanks! :) [11:27] niemeyer: welcome back [11:27] TheMue: Thanks! [11:49] rog: lbox seems to be working well [11:49] rog: You're right about pre-req.. the diff should come from the pre-req branch [11:49] rog: Will address this later [11:50] niemeyer: it's difficult though, because the prereq branch is a moving target. as is the target itself, i guess [11:50] rog: Yeah, that's probably not a big deal [11:50] niemeyer: it's a pity the target has to be on disk. [11:51] niemeyer: i've made a shell script that does quite a lot of the stuff i always need to do; i don't know how much would be generally applicable though. [11:51] rog: The main concern I have is that it feels easy to merge a branch without addressing the pre-req first, and as a consequence merging the pre-req without reviewing it [11:51] rog: I was going to address that in your email, but we can talk here as well [11:51] http://paste.ubuntu.com/752390/ [11:51] rog: It's actually not a pity [11:51] rog: This is standard distributed revision controlling [11:52] i just fetch the target every time now [11:52] into /tmp or somewhere [11:52] rog: You're used to the Go process, but their process is well sub-standard in that regard [11:52] rog: This feels quite bad [11:52] yeah, but it stopped me making the same mistake every time [11:53] rog: The usual way to develop software with any of the DVCS tools (bzr, git, hg, ..) [11:53] niemeyer: which was that i'd specify, say, ../go-trunk as a target, but i might have a different push target inside go-trunk [11:53] rog: is to fetch the pristine code locally, and work with branches on top of it [11:54] rog: Hmmm.. how do you mean? [11:54] rog: There's only one go-trunk? [11:55] i think i'd probably done: cd go-trunk; make-some-changes; push --remember lp:~rogpeppe/blah/foo [11:55] rog: Yeah, that's the issue [11:55] which was probably a silly thing to do, but it caught me out twiece [11:55] s/twie/twi [11:55] rog: yeah, don't do it.. you may end up even screwing up the real trunk by mistake [11:55] rog: branch locally [11:56] the other thing is when i want to edit some code that i've goinstalled [11:56] i want to do: cd $GOROOT/src/pkg/launchpad.net/goamz/ec2; edit; commit; propose [11:57] rog: We covered that at UDS [11:57] rog: Don't edit the pristine version [11:57] rog: Work in a real branch [11:59] yeah, it's just much easier to get things wrong in that situation, because i'll be using two packages pushing to the same location in GOROOT [11:59] and i have to remember where i put the darn branch [12:00] rog: Not really, that's the idea [12:00] also (and a different issue i think), i can't specify $GOROOT/src/pkg/launchpad.net/goamz/ec2 as a target [12:00] rog: Use different GOPATHs [12:01] i can't use GOPATH because gotest doesn't work [12:01] i'm still biting my lip on that one [12:01] i really want to use GOPATH! 
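A minimal sketch of the branch-locally workflow niemeyer describes above, with an assumed lp:myproject and illustrative branch names; only the bzr commands themselves are standard:

    bzr branch lp:myproject trunk          # pristine local copy; never edit it directly
    bzr branch trunk my-feature            # one branch per line of development
    cd my-feature
    # edit, then commit and push to your own namespace:
    bzr commit -m "describe the change"
    bzr push lp:~user/myproject/my-feature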
[12:01] rog: The two things are unrelated [12:01] rog: Sorry, let me rephrase that [12:01] rog: You don't have to keep your source code at $GOROOT [12:01] rog: That's the real point [12:02] rog: You can have as many branches as you want installing onto GOROOT, or even a different GOPATH for that matter [12:02] niemeyer: i think i do if i want to be able to use gotest on the source code [12:02] niemeyer: unless it's been fixed recently [12:02] rog: I use that scheme, and I use gotest [12:02] rog: Nope, no recent changes to that [12:03] rog: make install always installs to $GOROOT [12:03] niemeyer: hmm, so if i've got GOPATH set and i go into a source directory in GOPATH and gotest, it works? [12:03] rog: gotest doesn't use GOPATH.. you can use gotest locally irrespective of where your source lives [12:04] but i also like to be able to use goinstall [12:05] rog: Yeah, I like ponies too :) [12:05] because it means that when i upgrade Go, i don't have to manually go through running make on every package i might have forked [12:05] goinstall -a is really useful [12:05] (and has saved me lots of time) [12:07] anyway, my current scheme works ok - i keep directory $HOME/tmp/targets with a set of pristine targets, then before proposing, i update the target and use that [12:08] that means that i don't have to worry about whether i might have accidentally locally corrupted a target [12:12] niemeyer: here's an idea: in the file with the summary & description that's edited as part of the propose process, why not put a summary of all the details that propose has inferred? [12:12] then it's more obvious when you've got the wrong push target, missed the prereq, etc [12:13] rog: We can do that, but it doesn't solve the main problem, which is the danger of merging a branch that has a pre-req without addressing the pre-req first [12:13] rog: We can probably create some convention to avoid the problem.. I'll think about it some more [12:17] rog: I'll also tweak lbox to not force the description [12:18] rog: it would also be nice if you didn't have to give the target when re-running lbox propose. maybe "lbox update" would be a more appropriate name. [12:22] rog: The name is fine (you're still proposing), and in the setup I suggested you don't actually have to provide a target at all [12:22] rog: Because it's the parent branch [12:22] niemeyer: hmm. what it's not the parent branch? [12:22] rog: Sorry, I don't get the question [12:23] niemeyer: what if you're not pushing to the parent branch? [12:23] niemeyer: that is, if you've branched from another local branch, but want to use the original parent as target? [12:24] niemeyer: or perhaps i've misunderstood [12:24] rog: I see [12:24] rog: No, you got it [12:25] rog: Hmm.. this might help solve both issues perhaps.. [12:26] rog: I'm wondering how the workflow would look like if it _was_ the actual target [12:26] niemeyer: if what was the target? [12:26] the prereq? 
[12:26] rog: The awkwardness is that it'd prevent the base from being merged [12:26] rog: Right [12:27] rog: Since we'd want to merge the follow up on it [12:27] Probably a bad idea [12:27] yeah, it doesn't sound quite right [12:28] the other issue i came up against is that you can't have two prereqs [12:31] rog: Indeed, I've missed that before too [12:31] rog: That said, at some point it's easier to just hold off the branch a bit [12:31] rog: Even for everyone's sanity [12:33] niemeyer: maybe i should have just done that, and push each branch when you've LGTM'd the previous one [12:34] BTW when i did lbox propose to upload a change to goyaml-error-fixes, i got this: http://paste.ubuntu.com/752421/ [12:34] i wonder why it didn't find the previous merge proposal [12:34] rog: This would work, but it's a nice feature to be able to push dependent branches.. [12:35] rog: Maybe we can, by convention, not LGTM a branch that has pre-reqs before the pre-reqs themselves have been sorted [12:35] rog: This would solve the worries [12:35] niemeyer: a related possibility is to use an lbox command to do the final push [12:36] niemeyer: and that could check that the prereq had been pushed before doing it [12:36] rog: It found the branch itself as the landing target [12:36] rog: Can you please paste "bzr info" there? [12:36] yeah, i just worked that out [12:37] i know why it happened [12:37] rog: It sounds like your custom workflow is getting in the way there [12:37] it's because i'd deleted my local copy of the branch [12:37] thinking i could always re-get it from lp later [12:37] but by getting it again, i lost the original parent [12:38] http://paste.ubuntu.com/752425/ [12:38] rog: Yeah, that's it [12:38] Cool [12:38] niemeyer: i don't think it was because of my custom workflow [12:39] rog: It was.. [12:39] niemeyer: because deleting a local copy of the branch isn't customary? [12:39] rog: In traditional DVCS workflows you don't really kill the local branch while you're working on it [12:40] niemeyer: i actually didn't delete it - i carried on editing and committing towards a later merge request [12:40] it's because i'd deleted my local copy of the branch [12:40] thinking i could always re-get it from lp later [12:40] ?? [12:40] niemeyer: well, in my head it was deleted... [12:41] rog: In Bazaar's head too, apparently, since you got the branch again from Launchpad (the parent says so) [12:41] niemeyer: it had turned into a new branch, which i didn't want to interfere with [12:41] niemeyer: maybe there was another way i could have got it from launchpad so that the original parent was preserved [12:41] niemeyer: i didn't know that parentage was so important [12:41] rog: Just don't kill the branch while you're working on it [12:42] rog: The branch existence is important [12:42] niemeyer: yeah, i should just re-branch every time i do a merge request [12:42] niemeyer: rather than carrying on working in the same directory [12:42] rog: Yeah, every time you want to start a new line of development, re-branch [12:42] niemeyer: it's a bit of a pain because i have to lose all my editing state. [12:43] rog: FWIW, that's a normal impedance mismatch when getting into DVCS [12:43] rog: That bit of it is due to Bazaar's way of working with multiple directories [12:43] rog: This is going to be addressed soon [12:43] rog: With a git-like workflow, you can have multiple branches in the same directory [12:43] niemeyer: of course. 
other VCSs you'd still be in the same dir [12:44] rog: bzr is getting the same feature [12:44] niemeyer: i think you *can* do that now - someone pointed me at a way of doing it [12:44] niemeyer: but it's probably a hack [12:44] rog: http://doc.bazaar.canonical.com/developers/colocated-branches.html [12:47] niemeyer: anyway, regardless of glitches, it's awesome that we've got proper code review working! nice one. [12:47] rog: Totally, I'm very happy about that [12:49] niemeyer: one thing: i think lbox propose should complain if there are uncommitted changes. i often forget to commit! [12:49] niemeyer: (just like bzr push complains) [12:50] rog: Agreed [12:51] rog: I was already planning something like that.. will tweak the Delta interface on goetveld [12:52] niemeyer: BTW, i tried pushing to lp:goyaml and got "Transport operation not possible: readonly transport". is that because i'm not yet recognised as a member? [12:52] niemeyer: or do i have to re-branch, now that i am? [12:52] rog: No, just because of the URL [12:53] niemeyer: is it the wrong url? [12:53] rog: If you look at bzr info, you'll see the URL is a read-only one [12:53] rog: You can push explicitly to lp:~gophers/goyaml/trunk [12:54] where in the info (http://paste.ubuntu.com/752435/) does it say read-only? [12:54] ah, you mean the url for lp:goyaml? [12:54] so is lp:~gophers/goyaml/trunk an alias for lp:goyaml ? [12:55] niemeyer: or is there something else subtle going on here? [12:55] rog: It is now, but it wasn't at the time you branched [12:55] rog: Oh, hold on [12:55] rog: It's my fault, actually [12:56] rog: I've changed the project maintainer, but not the branch [12:56] rog: It's still pointing to my personal branch [12:56] niemeyer: ah, so bzr push lp:goyaml should work, in fact? [12:56] rog: So, let's do this.. push it to the ~gophers URL.. I'll tweak the official branch location after that [12:56] rog: It will, definitely [12:56] cool [12:57] niemeyer: pushed now [12:57] rog: Done, we have a new lp:goyaml [12:58] niemeyer: cool. [12:59] rog: You're right, btw, we need an "lbox merge" [13:00] fwereade: Hey! [13:00] heya niemeyer! [13:00] rog: Or "lbox submit" perhaps.. :) [13:00] good holiday? [13:00] fwereade: Yeah, awesome [13:00] niemeyer, cool, where did you go? [13:00] fwereade: afternoon guvnor [13:00] heya rog [13:01] fwereade: I went to João Pessoa, a very nice region in the northeast of Brazil [13:01] g'morning [13:01] fwereade: I went to João Pessoa, a very nice region in the northeast of Brazil [13:01] hazmat: morning! [13:01] heya hazmat :) [13:01] niemeyer, welcome back, sounds like a nice vacation [13:01] hazmat: Thanks! Yeah, it was very relaxing [13:02] Left the laptop at home for a change [13:03] niemeyer, ah a disconnected holiday, even better.. pics of those clear blue seas from pessoa look amazing [13:03] hazmat: Not entirely disconnected.. still had a phone with me.. but at least severely restricted, let's say ;) [13:04] hazmat: Yeah, in the last day we went snorkeling here: http://perlbal.hi-pi.com/blog-images/536061/gd/1264122522/Picaozinho-Joao-Pessoa.jpg [13:05] niemeyer: another occasion it didn't seem to find the original merge proposal: http://paste.ubuntu.com/752444/ [13:05] It's about 1km from the coast inwards [13:05] niemeyer: looks lovely [13:05] rog: yeah, now the problem is a different one [13:05] rog: The merge proposal is off since we renamed the branch [13:06] niemeyer: ah, because lp:goyaml is just an alias, right? [13:06] rog: That's right.. 
the merge proposal is still against lp:~niemeyer/... [13:06] rog: You can let it know by hand of the target, and it will work [13:06] niemeyer: that would be better than making a new proposal? [13:07] rog: I guess.. it'd avoid having you jump back and forth removing the previous one in lp and cr [13:07] niemeyer: we might have implemented lbox submit... which would submit to the wrong place :-) [13:08] rog: Indeed.. luckily we won't be renaming things like that very often [13:08] true [13:08] niemeyer: so the original target was lp:~niemeyer/goyaml/goyaml ? [13:09] /trunk at the end [13:09] ah, of course [13:11] niemeyer: ok, i did the propose, but the codereview diffs seem unchanged [13:12] niemeyer: https://codereview.appspot.com/5432068/ [13:12] ahhh f*!# [13:12] * niemeyer waits for the bomb [13:12] niemeyer: i have to change the target... of course! [13:12] rog: Ah, indeed :-) [13:13] niemeyer: otherwise i don't see the diffs against the target i've just pushed to [13:13] doh! [13:16] niemeyer: that's better: https://codereview.appspot.com/5431087/ [13:16] rog: Woohay! [13:18] rog: Done [13:20] niemeyer: "branches have diverged". dammit! [13:23] rog: I don't know what you're doing, but the way this generally works is that we all have a local copy of trunk.. [13:23] rog: Before merging a branch, pull from the remote to get the latest changes, [13:23] rog: Then merge and push [13:24] niemeyer: yes, that was silly - i forgot the merge step. [13:25] niemeyer: i thought the problem was because the trunk had changed name [13:26] niemeyer: submitted [13:34] Is there any way to catch if a relation has been broken? [13:41] hi all [13:42] marcoceppi: Can't you do that with juju status? [13:42] Is there specific documentation i should look at for running juju in my existing openstack infrastructure? [13:44] the documentation i am looking at is https://juju.ubuntu.com/docs/getting-started.html [13:44] marcoceppi: You mean within a charm? [13:44] niemeyer: Yes [13:45] marcoceppi: Yeah, there's both relation-departed and relation-broken [13:45] marcoceppi: https://juju.ubuntu.com/docs/charm.html#hooks [13:45] marcoceppi: departed is likely what you want [14:10] how do we pass initialisation options to zookeeper? [14:13] looking in juju/providers/common/cloudinit.py, i can see how zookeeper gets run (zookeeperd package, thanks hazmat), but i can't see how we configure the zookeeper address, for example. [14:13] fwereade: ^ [14:13] rog, the zk address is the default address port 2181 on all ifaces [14:14] rog, hazmat beat me to it :) [14:14] hazmat: so the answer is "we don't"? [14:14] rog, its pretty common for daemons to bind to all available interfaces on their standard port [14:14] rog, yup [14:14] hazmat: ok, that makes more sense now [14:15] hazmat, fwereade: i couldn't work out how ZOOKEEPER_ADDRESS was getting through, one way or the other. there's so much care taken to make it configurable... [14:17] rog, ZOOKEEPER_ADDRESS is different.. its used to pass the ip:port info to agents, its passed on as an env variable to the agent process [14:17] its not used to configure zookeeper but the things using zookeeper [14:17] * rog nods [14:21] marcoceppi, -departed is called when any unit of the related service is removed, -broken is called when the relation is removed [14:21] koolhead11, the ec2 provider is used for openstack, there's an example in the list and on askubuntu [14:23] hazmat: i googled and found one such thread started by jcastro [14:23] but not much info.
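An aside on the relation-departed / relation-broken hooks discussed above: a minimal sketch of a -broken hook, written as a charm shell script. The hook name follows the docs linked above; the config file it removes is a hypothetical example, not from any real charm:

    #!/bin/sh
    # hooks/db-relation-broken -- runs when the relation itself is removed.
    # As hazmat notes, the remote unit's settings are already gone at this
    # point, so only this unit's own settings are still readable.
    juju-log "db relation removed; cleaning up local configuration"
    rm -f /etc/myapp/db.conf    # hypothetical file a -changed hook wrote earlier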
[14:23] hazmat: juju/control/initialize.py was the confusing bit. ZOOKEEPER_ADDRESS will never be set there, right? [14:23] * koolhead11 goes back to askubuntu [14:27] hazmat, fwereade: or is there another subtlety i'm missing? [14:28] rog, sorry, let me check what's going on there [14:28] if it happens to be set it will be used [14:28] but: [14:28] zk_address = os.environ.get("ZOOKEEPER_ADDRESS", "127.0.0.1:2181") [14:29] does that answer your question? [14:34] rog: ^^ [14:35] fwereade: yes, i saw the default. i just wondered if there was ever an occasion when the default would not be used [14:35] fwereade: and i *think* the answer is never. [14:35] rog, offhand, I think so too [14:36] fwereade: good. i'm not too far off base then. [14:40] rog, the default is not used lots [14:40] rog, the common way the value is passed is via an environment variable [14:40] well at least for non bootstrap nodes [14:40] oh.. sorry [14:41] hazmat: that was my point - this *is* the bootstrap code [14:41] And that's lunch time.. [14:41] rog, indeed its always the default there [14:41] hazmat: cool. a comment (or even removing the env var reference) might help naive folks like me to understand things, i guess. [14:49] hazmat: So I've got a service that requires mysql. I want to capture when that interface goes away. So -broken if I were to remove the relations? Also, what information is passed to the hook? [14:52] marcoceppi, not much, the standard hook information + rel info (JUJU_RELATION), but afaicr its not possible to interrogate any of the remote settings via the hook cli api, because their already gone, the unit can interrogate its own settings though [15:02] I just need to find the hostname of the hook that broke off, is that still available in: relation-get private-address [15:11] marcoceppi, bruno had ftp ready for review didn't he? [15:14] jcastro: Not that I'm aware of, last I heard he was starting on it but I've been in and out all weekend [15:14] he was ready for review as he was asking me how to tag it, but I can't find his branch. :-/ [15:15] Neither can I :\ [15:27] marcoceppi, its not [15:27] marcoceppi, there could be a dozen remote hosts [15:28] depending on the type of relation and which endpoint it is [15:28] hazmat, do you have irc topic powers here? [15:28] jcastro, i do [15:29] hazmat: So I'm creating a phpMyAdmin charm and it's able to join to multiple mysql servers [15:29] I'd like to capture when one goes away and remove it from the cfg [15:30] jcastro, something in particular you'd like in it? [15:31] marcoceppi, are they separate relations or a single relation? [15:31] hazmat, office hours please, something like: Office Hours (1600-1900UTC) [15:31] just tacked on at the end or whatever === hazmat changed the topic of #juju to: http://j.mp/juju-florence http://j.mp/juju-docs http://afgen.com/juju.html http://j.mp/irclog Office Hours (1600-1900UTC) [15:33] Well, that's what I'm a little baffled about. Can you spin up multiples of the same service. Like have two separate MySQL services running independently of eachother within the same bootstrap? 
Or would you simply add-unit [15:44] marcoceppi, you can have multiple mysql services in an environment, and if you do add-unit you'd be adding units to the existing service, something like mysql doesn't really support multiple units of the same service outside of a master slave setup, which i believe the charm models as two separate mysql services with a master/slave relation, and in that case you can add-unit for slaves [15:48] * hazmat grabs lunch [16:04] it's ironic that the observed time to failure for my UPS (it died sometime last night) was shorter than the time to an actual power failure. in fact no power failures here in the 7 years i've lived here [16:08] jimbaker: at least when your UPS fails you've still got mains power... [16:13] rog, indeed, i do have amazingly reliable and inexpensive power here. time to build a data center in the basement ;) [16:13] speaking of which, there was an article i saw recently about using data centers to heat buildings [16:14] in cloudinit.py, add_ssh_key states that at least one ssh key is required. why is that? wouldn't it be ok in theory to have a juju machine that wouldn't accept any incoming connections? [16:14] jimbaker: i've heard that [16:14] jimbaker: mind you, i'm not sure i'd situate a data centre next to people's houses - they have a nasty habit of burning down. [16:14] data furnaces - http://www.nytimes.com/2011/11/27/business/data-furnaces-could-bring-heat-to-homes.html [16:15] jimbaker: not gonna be very good in summer though... [16:16] apparently as long as the ambient air temp is < 95 deg F, it should work for the servers, just venting heat outside at that time, instead of inside [16:16] with maybe the exception of one or two days a summer, that would be the case for my house [16:19] jimbaker: it's a nice idea. i wonder how data protection laws would apply to the home owner... [16:19] jimbaker: do you know the answer to the above question, BTW? [16:20] jimbaker: i'm just wondering what would break if we allowed no ssh keys [16:20] hazmat? [16:20] rog, i don't know the answer to cloudinit above. maybe just fix cloudinit for this case? [16:20] worth taking a look at its codebase [16:21] i'm not sure that cloudinit itself requires any ssh keys [16:21] but i may be mistaken [16:21] * rog goes to check [16:21] interesting related point about txaws not verifying ssl. didn't know this. sounds like another reason to move to boto, which does have such support [16:21] boto? [16:22] it's pretty much the standard library for working with aws [16:23] jimbaker: well, i'm using goamz :-) [16:23] jimbaker: when you say "not verifying" do you mean it doesn't check the cert chain? [16:23] rog, unlike txaws, it has extensive support for nearly all of the aws api. its disadvantages are that it is blocking (but easy to work around with deferToThread, just like what we do elsewhere in the python version of juju) [16:23] rog, that's what i understand from niemeyer's email [16:24] jimbaker: Have you checked if boto is testing for SSL certs? [16:24] rog, the client needs ssh to work to connect the bootstrap node [16:25] jimbaker: Also, it's not an easy transition [16:25] niemeyer, this is based on http://www.heikkitoivonen.net/blog/2009/10/12/using-m2crypto-with-boto-secure-access-to-amazon-web-services/ [16:25] hazmat: so the bootstrap node needs to allow ssh. what about the other nodes? [16:26] rog, they need it minimally to support ssh/scp/debug-hook commands [16:26] hazmat: ok, so nothing crucial relies on it.
just wanted to check. [16:26] jimbaker: If you actually read the code there, you'll notice that it's not boto that is verifying the certificate [16:27] jimbaker: If you're going to fix it for boto, you can as well fix it for txaws [16:27] rog, nothing internally no, except for client access to the bootstrap node [16:29] niemeyer, likely it's worth looking at both. again you raised a good point about txaws. at this point, i know that people use boto successfully in this way to verify ssl :) [16:29] hazmat: cool. so "You have to set at least one SSH key." should really be "The bootstrap node requires at least one SSH key. Without at least one SSH key, other nodes will not allow ssh access, e.g. ssh,scp, debug-hook" or something like that? [16:29] txaws has some users, but boto is used extensively [16:29] including with twisted as i understand it [16:30] rog, those other subcommands are part of the interface juju exports [16:31] jimbaker, boto is used by twisted? [16:31] er. with [16:32] hazmat: that's true, but they're not key to the infrastructure. i could imagine creating a high-security node that allowed no ingress. it wouldn't break anything to do that. thus "must" seems a bit strong. [16:32] hazmat: but YMMV of course [16:32] hazmat, i have seen boto + twisted in the openstack tests, for example [16:32] rog, by that notion ssh keys wouldn't be required at all post a REST interface [16:33] jimbaker, openstack doesn't use twisted anymore.. and its honestly its not a great example of proper usage of anything imo [16:33] hazmat: well, i guess REST implies some encryption 'cos we'd be using https, so yes, i'd agree. [16:33] its getting better, but the codebase was originally a ball of mud with different styles and mix of sync/async and library usage [16:33] hazmat, this was just in the tests. i know they are using gevent. probably want to find better examples before we decide anything :) [16:34] eventlet .. same difference though [16:34] jimbaker, there are no twisted imports in nova [16:35] hazmat: i certainly think we should default to allowing ssh access, but i'm not sure it should be strictly required. and in fact, i don't see anything that checks the requirement now - it'll probably just work as is [16:36] jimbaker, which tests?.. i don't know that we'll find many examples of twisted code bases using boto [16:37] hazmat, i will try to dig this up [16:37] jimbaker, to throw some counter points 1) boto code base is relatively bad, and barely has any tests 2) most of the coverage of AWS API is useless to juju [16:37] therve, this is in fact a very good counterpoint [16:38] the only thing we can say about boto is that it's heavily used across various impls of the EC2 api. whether or not that makes up, i don't know [16:38] the best thing would be extensive usage in the wild + extensive unit testing [16:39] anyway, just bringing up as a possibility [16:40] jimbaker, in the mean time, I'd be happy to help if there are problems with txaws :) [16:40] jimbaker, i think i missed the context here, we're just talking about ssl cert checking [16:40] hazmat: no, it is checked. [16:41] i believe this in ref to https://bugs.launchpad.net/txaws/+bug/781949 [16:41] <_mup_> Bug #781949: Must check certificates for validity < https://launchpad.net/bugs/781949 > [16:41] which was filed by niemeyer [16:41] I'll assign that to me [16:42] therve, thanks [16:42] jimbaker, we have to be careful there, or at least only do it optionally, openstack setups don't nesc have valid certs [16:43] jimbaker: Time machine.. 
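For context on the verification question above, a generic illustration (not txaws or boto code) of what checking the EC2 endpoint's certificate chain means, using the standard CA bundle on an Ubuntu host:

    echo | openssl s_client -connect ec2.amazonaws.com:443 \
        -CAfile /etc/ssl/certs/ca-certificates.crt 2>/dev/null | grep "Verify return code"
    # "Verify return code: 0 (ok)" means the chain validates against the CAs;
    # a client that skips this check will talk to a man-in-the-middle just as happily.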
[16:43] hazmat, sounds good [16:43] niemeyer, what do you mean? [16:44] which was filed by niemeyer [16:44] ... back in May. [16:45] niemeyer, yes, so that's good right? i'm just pointing this out to ensure it's the right bug in question [16:45] jimbaker: Yeah, I'm not saying it's bad in any way.. it was a just a bad joke [16:46] niemeyer, cool [17:02] hazmat, standup today? [17:10] we don't verify the amazon cert right now? [17:12] SpamapS, as i understand it, yes [17:12] thats quite serious IMO [17:12] theft of AWS credentials could be *VERY* costly [17:12] SpamapS, indeed. think of the possible botnets [17:13] just thinking of the possible bill [17:20] So, could I simply symlink upgrade-charm hook to the install hook? [17:24] marcoceppi: don't see why not... as long as you change idempotency guards to handle any re-install logic needed [17:24] m_3: Right, I just realized my install hook is idempotent and since the upgrade never runs the install again it would pretty much be exactly what the install does with the exception of a few things [17:25] marcoceppi: might have to add a couple of things like an initial "apt-get update" that you wouldn't normally need [17:25] Good point [17:30] marcoceppi: lp:charm/ceph is an example of this [17:31] w/o the additional apt-get update :) [17:35] Cool I think this charm is about ready. Need to test it a bit more [17:35] jcastro, this askubuntu thing is great ;-) [17:36] hmm.. i'm not sure if the unzip will respect symlinks or not [17:39] marcoceppi: awesome man! [17:40] koolhead17, you mentioned on askubuntu that you where able to bootstrap without a local ssh key.. afaik this isn't possible [17:40] ie. it should always error out if that's the case [17:41] hazmat: am trying to run juju on my existing openstack infra [17:41] * hazmat nods [17:41] juju bootstrap gave no error [17:41] it was juju status which failed [17:41] with some ssh related error [17:41] hmm.. that's a regression [17:41] and i can see an instance running too [17:41] juju-default [17:42] i was wondering when the ssh access juju tries to acquire [17:42] verbose says with user ubuntu [17:42] what about the password? [17:43] koolhead17, it uses ssh keys everywhere, bootstrap should fail if there are no keys [17:43] before launching an instance [17:43] by default images get password as "password" for user ubuntu and user root [17:43] on the ineiric cloud image [17:43] Oneiric [17:43] hazmat: i created a keypair and then i was able to pass juju bootstrap part [17:43] :D [17:44] koolhead17, juju doesn't reference api key pairs [17:44] hazmat: so what is the best way to get juju running inside openstack? [17:44] i had few other issues too. [17:44] like the instance gets acquired with internal IP [17:45] koolhead17, one moment, just trying to verify the key thing against an openstack install [17:45] hi koolhead17 [17:45] jcastro: hello sir. :) [17:45] i was looking for you [17:45] awesome, I was looking for you, you go first! [17:46] about config related issue while running juju using openstack [17:46] i got it figured [17:46] :D [17:46] Apparently source isn't built into sh? [17:47] jcastro: like last release i would like to work with ubuntu server guide, let me know how can i help. last time i did final revision part [17:48] koolhead17, we could always use help writing charms [17:49] and iirc at some point soonish hazmat will split the docs from juju itself [17:49] jcastro: hmm. 
i already have a few assigned but i had to work on something else [17:49] the docs could really use a review [17:49] koolhead17, just tried it without a key.. it does fail.. http://pastebin.ubuntu.com/752729/ [17:49] jcastro, i've gotten some push back from that.. it should probably get brought up on list [17:49] ok [17:49] I'll bring it up [17:50] jcastro, cool [17:50] hazmat: i had similar error and what i did was ssh-keygen to generate key for my local user [17:50] marcoceppi, m_3, SpamapS: incoming new charm, teamspeak. [17:50] marcoceppi: dash... Use '. file.sh' instead of 'source file.sh' [17:50] and when i executed ensemble-bootstrap it worked [17:51] it's ensemble-status where am failing [17:51] marcoceppi, oh hey was it you that was going to add new bling to the IRC bot? newly tagged "new-charm" announced here for review would be awesome. [17:51] and i know the reason [17:51] koolhead17, right but that error is that bootstrap won't work without a key.. which is different than status not working.. the latter is more that the key didn't get installed onto the instance [17:51] jcastro: just saw it pop up on the review queue [17:51] rock and roll [17:51] jcastro: Yeah, and feed Ask Ubuntu questions into here [17:51] m_3, he's on a roll, he'll probably submit FTP tonight as well [17:51] hazmat: yes the latter part where am stuck currently [17:52] does juju by default use user "ubuntu" and passwd "ubuntu" [17:52] to connect to the instance [17:52] ? [17:52] koolhead17, can you pastebin the euca-get-console-output for that instance [17:52] koolhead17, there are no passwords just ssh keys [17:52] hazmat: am home now. i will do that once in office [17:52] koolhead17, which release is the image btw? [17:53] hazmat: oneiric cloud 64 bit [17:53] tar.gz file [17:53] koolhead17, k, i'm trying a full run on openstack now [17:53] i use cloud-publish-tarball [17:55] jcastro marcoceppi: autofeeding askubuntu questions here would rock! [17:55] of course, answering them here would be pretty cool too, but that might be... tough [17:56] you can always take a good question and answer that you answer here and post it there [17:56] as a self-documenting thing [17:57] * m_3 is more geeking out about mup-style integration [17:57] hmm, isn't there a way to have your authenticated irc nick be linked to your askubuntu account? some sort of openid mechanism. then there could be a bot here to which we could feed questions and answers. [17:57] koolhead17, reproduced [17:58] hazmat: cool and? [17:58] koolhead17, still debuggin [17:58] am running it on the same nova system [17:58] and added all openstack related credentials in config.yaml file [17:58] mpl: No write access on the Ask Ubuntu API yet, so unless you emulated a web browser, logged in and maintained session for each user, etc (it would become messy quick) [18:00] too bad.
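The fix koolhead17 describes above, spelled out: bootstrap needs a local ssh key so it can install the public half on the instance. A minimal sketch (the key path is just ssh-keygen's default):

    ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ""   # generate a key if you have none
    juju bootstrap                             # puts the public key on the new instance
    juju status                                # then connects over ssh as user "ubuntu"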
[18:01] hazmat: also is there a way to add cloud-init kind stuff in config [18:01] am saying this because when am running juju behind proxy it will fail [18:02] koolhead17, short answer, no, you can put a bug in for http proxy support though [18:02] not sure how that's related to cloud-init re proxy [18:02] hazmat: i was giving an example :P [18:02] so on openstack cloud-init finishes fine, but it doesn't seem to install the key [18:03] hazmat: i have not tried cloud-init [18:03] it's the juju status [18:03] where am stuck [18:03] with ssh connection error [18:03] i will paste exact error once am in office tomorrow [18:04] hazmat: juju starts an instance with juju-default but it's not able to ssh to it [18:04] after i execute juju status [18:06] koolhead17, yes, i understand the problem [18:06] and i'm able to reproduce it [18:06] hazmat: cool. :) [18:06] i have 2 more issues which i was concerned about [18:06] by default when an instance starts it acquires a private IP [18:06] say 192.168.1.1 [18:07] we attach the instance to public IP [18:07] for communicating with outside world [18:07] * hazmat nods [18:08] koolhead17, i do the same for interacting with our openstack cluster [18:08] now if i will use juju it means i need to have my internal IP connected to internet in order 4 juju to fetch pkgs [18:08] hazmat: but when i will say juju deploy mysql [18:08] it will go in the background and do apt-get install mysqlserver [18:08] but since its not connected to internet it will fail [18:08] _mup_ [18:09] what is the way out [18:09] koolhead17, yes, but basic NAT traversal should allow for that, are you saying that the cluster has zero connectivity to the internet [18:09] hazmat: yes [18:09] the VM in internal nw [18:09] :-( [18:09] or that the openstack internal network isn't bridged? .. the fact that you can communicate at all with the bootstrap node suggests otherwise [18:10] koolhead17, being in an internal network is fine if it has outbound access [18:10] * m_3 needs charm review snippets :) [18:10] hazmat: which means it has to have internet access ? [18:10] :D [18:10] koolhead17, that or an internal apt proxy and image customization [18:11] s/proxy/cache [18:11] same difference [18:11] Which bot is _mup_? [18:12] hazmat: well am talking about 2 things differently [18:12] marcoceppi, launchpad.net/mup [18:13] 1. my cluster should know about proxy-server when am trying to orchestrate via Juju [18:13] 2. about this private IP and its necessity to have internet connection. [18:13] koolhead17, so it took some time but the key was eventually installed on the bootstrap instance [18:14] hazmat: it failed in my case. :) [18:14] koolhead17, i wonder if it just takes some additional time [18:14] hazmat: you will have to share your yaml file with me then. :P [18:15] juju bootstrap does work without error [18:15] koolhead17, sure, and that means the instance is started, and that an ssh key was found [18:15] it's juju status where i fail because of ssh credentials [18:15] hazmat: +1 [18:16] koolhead17, right, i had the same problem, but it did work after i waited a few minutes, i'm still not clear why since cloud-init had finished [18:16] koolhead17, which version of openstack are you using? [18:16] diablo [18:16] from ubuntu repository [18:16] koolhead17, i think it has something to do with a delay in associating the public-address to the instance [18:16] koolhead17, because the ssh host fingerprint changed [18:17] hazmat: the "-v" showed its associated with ip and instance ID [18:17] koolhead17, i know.. this is internal to openstack
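A sketch of verifying what hazmat describes above: whether the floating IP really landed on the right instance, and whether cloud-init finished and installed the key. The euca2ools commands are the ones mentioned in the discussion; the instance id is illustrative:

    euca-describe-instances               # is the public IP associated with the right instance?
    euca-get-console-output i-00000001    # did cloud-init complete and install the ssh key?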
[18:18] hazmat: what would you suggest and how should i proceed [18:18] it shows the address correctly on the metadata, but the action of connecting the instance to the ip address is derived off an async activity [18:20] koolhead17, offhand i'm not sure outside of waiting, the question is verifying that the ip address is connected to the right instance, which means verifying openstack internal state [18:21] asking on #openstack [18:21] hazmat: can i trouble you tomorrow once am in office [18:21] hazmat: so you're saying you are able to run/execute juju status without failing from that ssh related issue [18:22] koolhead17, yes [18:22] koolhead17, i had the issue, i waited, it went away [18:22] marcoceppi, I forgot a totally obvious service but added it to the spreadsheet, status.net aka. run your own twitter [18:22] hazmat: one more thing [18:22] that would be useful for organizations that want the benefits of microblogging but for internal reasons [18:22] does juju status result in connecting and getting some info from external metadata server [18:23] jcastro: oh, duh! good call [18:24] koolhead17, it talks to the nova api endpoint not the instance metadata server [18:24] koolhead17, cloud-init does talk to the instance metadata server [18:24] but that's independent of status [18:25] hazmat: my internal nw has no internet in it. :) [18:25] can that be the reason 4 fail [18:26] koolhead17, possibly.. i find that likelihood very strange though.. inbound access and outbound access via a NAT are different concerns [18:27] hazmat: i have to read much further and find some more things :) [18:56] m_3: ping [18:56] negronjl: yo [18:56] ssup? [18:57] m_3: I just updated and tested the new mongodb charm with replicaset in oneiric but, I need another set of eyes to ensure I am not crazier than usual. When you get a chance ( no rush ) can you test it. [18:57] ? [18:58] m_3: In the meantime, can you tell me ( again ) where your hadoop charm for oneiric is ? I am going to consolidate them all into one charm . [18:58] negronjl: sure thing on reviewing the mongodb charm [18:58] negronjl: lemme look to see that the latest is in trunk on the hadoop charms [18:59] m_3: ok on the hadoop thing ... [18:59] m_3: let me know if you need help testing the mongodb thing ( commands and such ) [18:59] speaking of oneiric ... [18:59] seems like we missed a huge set of work items at UDS [18:59] SpamapS: do tel [18:59] which is.. develop or coopt tools to do releases [18:59] SpamapS: do tell [19:00] SpamapS: tools like charm create you mean ?? [19:00] Like, we need to copy all of the branches from oneiric -> precise [19:00] and automatically backport new stuff from precise -> oneiric [19:01] negronjl: lp:charm/hadoop-{master,slave} have everything for oneiric [19:01] m_3: thx [19:01] negronjl: the hard part is making sure we have a single ppa or other repo that is consistent across natty and oneiric (we don't currently) [19:02] negronjl: ppa:canonical-sig works for natty... ppa:mark-mims/hadoop works for oneiric [19:02] m_3: To test the sanity of the "one hadoop charm to rule them all" theory, I'll start by if,then on the release to figure out which ppa to use... I'll work on the one ppa later. [19:02] SpamapS: We can still do this but, I think it'll have to be done manually at first [19:02] negronjl: I was planning on building all the dep projects into ppa:mark-mims/hadoop for natty, [19:03] m_3: The natty one is already in LP ...
let me get it for ya [19:03] negronjl: but we should get the right "partner" repo working correcly [19:03] for both! [19:04] negronjl: ugh. I really don't want to get into a situation where we are manually doing anything other than merges of conflicting changes [19:04] m_3: that shouldn't be hard as long as both packages work the same [19:04] negronjl, did you publish the new charm to lp? [19:04] negronjl: there's a blueprint for packaging the whole bigtop stack BTW... [19:04] negronjl, re mongodb [19:04] hazmat: i did [19:04] hazmat: lp:charm/mongodb [19:04] negronjl, where ? ah.. it needs to be under charm/oneiric/mongodb for the store or charm browser to find it [19:04] ie. its missing the series [19:05] er.. actually... charm/oneiric/mongodb [19:05] hazmat: I just updated the lp:charm/mongodb so, if you saw it an hour ago, you should see it just fine now. [19:05] hazmat: I'll double-check the placement in LP ... please hold [19:05] negronjl, maybe i'm missing something.. i see charm/thinkup [19:05] and oneiric/postgresql [19:05] hazmat: https://code.launchpad.net/~charmers/charm/oneiric/mongodb/trunk [19:06] SpamapS: is the backport process anything more than commit hooks atm? [19:06] negronjl, can we catch up this week wrt. the charms you are working on? I need a sync up. (Not high priority) [19:06] negronjl, maybe this was just very recent (less than 15m) ? [19:06] hazmat: more like 3mins ago :P [19:06] negronjl, ah.. that would explain it [19:06] hazmat: you're so 3 minutes ago :D [19:06] m_3: commit hooks would be too automatic.. we need a way to say "I just broke backward compatibility" and stop auto-backporting of a branch. [19:08] SpamapS: I understand that we want to do this automatically but, I for one, have no idea on the all of the implications of this and, by doing it manually once, we should be able to learn from the mistakes that we make, issues, etc. and create a "sane" automated process for this. [19:08] negronjl: so I still have a build VM set up... can dput the packages to a new repo pretty easily I think... just lemme know [19:08] m_3: Let me add you to the existing repo ... let see if it works that way ... hold on [19:09] no merge conflicts => "it's golden, ship it" ;) [19:09] negronjl: maybe the answer is to open the precise series, without actually making it the dev focus.. so lp:charm would still be oneiric until precise releases.... [19:10] meh [19:10] see we need a face to face for this [19:10] SpamapS: G+ ? [19:10] negronjl: it might be best to _start_ with your plan of conditional repos based on series... get one charm to rule them all... then get one repo to rule them all as a next step. gets us to working state soonest I think [19:11] Maybe we should drop the notion of the series altogether. If the charm is in lp:charm, it needs to work on all supported Ubuntu releases... [19:12] m_3: I'll take a look at it either way and see where the path of least resistance is [19:12] and if we just can't do that for some release of Ubuntu.. then we can create a series-specific branch for that charm and that release. [19:12] SpamapS: I think that's the ultimate goal but I also think that the lp:charm charms will need to be heavily reviewed ( modified ? ) by us to ensure that we work all of the kinks out [19:13] negronjl: s/reviewed/tested/ [19:13] Thats where the automated testing bits come in [19:13] Does the db-admin relation work? [19:13] on the MySQL charm [19:14] marcoceppi: It does ... I've used it [19:14] * m_3 fantasizes about automated testing... 
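A quick sketch of exercising the db-admin relation just asked about, with marcoceppi's phpmyadmin charm as the consumer; service names are illustrative, and per negronjl later in the discussion this grants admin on a particular database rather than the whole server:

    juju deploy mysql
    juju deploy phpmyadmin
    juju add-relation phpmyadmin mysql:db-admin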
[19:15] * SpamapS tosses a bucket of cold water on m_3 [19:15] focus! [19:15] marcoceppi: haven't reviewed/tested your changes yet if that's what you mean [19:16] * m_3 didn't say "humps the leg of automated testing..." [19:17] marcoceppi: ahh ... didn't know if you were asking about changes that you may have made to the mysql charm ... I haven't tested any changes but, in the not so distant past, I have used the mysql-admin interface. [19:17] marcoceppi: ... and that worked. [19:17] I wasn't nessisarily, all I did was change the metadata.yaml file to give db-admin a mysql-root interface [19:17] since db-admin and db are the same hook file [19:18] marcoceppi: I'll be working on that charm soon ( to use it for CloudFoundry as opposed to a cf only one ) so, if I find any weirdness, I'll let you know [19:18] I just got a state error when adding a relation to db-admin, was looking for an excuse to not be my fault [19:20] marcoceppi: lol ..... aren't you the new guy ??? then it's always your fault :) [19:21] hum, does db-admin need to create a database? [19:21] m_3: https://launchpad.net/~canonical-sig/+archive/thirdparty <---- natty hadoop packages [19:21] marcoceppi: i don't think so, it should be for you to get root to a db [19:21] Cool, I'll update that as well [19:21] marcoceppi: ahh ... I now remember why I made another charm for mysql [19:22] marcoceppi: mysql-admin gives you root to a db only ... CloudFoundry needs root to it all [19:22] what. what is the point of that? [19:23] I guess there were more changes needed than I though [19:23] t [19:24] marcoceppi: the point of what ? [19:24] having a db-admin if you don't get an admin account? [19:25] negronjl: thanks... I'll try a dput and let you know [19:25] marcoceppi: you get admin of a particular DB [19:29] negronjl: so there were lots of other repos I had to add b/c of build deps (listed http://goo.gl/n5T2i) [19:29] negronjl: those'll have to be added as well for oneiric [19:29] m_3: ok ... hit it [19:29] * negronjl crosses fingers and prays that it works [19:30] * m_3 does too [19:36] <_mup_> Bug #897360 was filed: Separate docs from source < https://launchpad.net/bugs/897360 > [19:37] niemeyer, any thoughts re my comments on GC [19:37] hazmat: haven't read them yet [19:42] negronjl: dputs rejected: "Signer has no upload rights to this PPA." [19:42] m_3: what's you lp username ? [19:42] mark-mims [19:45] m_3: the group belongs to zaid_h and I don't have access either ... I'm trying to get you access .... [19:46] m_3: if that doesn't work, maybe we can do it the other way around and I'll dput the packages from canonical-sig into your ppa [19:47] hazmat: Didn't get anything in my inbox regarding the issue? [19:48] niemeyer, interesting i don't see it on the mp [19:49] oh.. i have to setup my postfix post ssd install [19:49] doh [19:49] negronjl: I'll have to clone up a new natty instance and rebuild the packages there... [19:49] i was wondering why i haven't gotten any responses to emails recently [19:50] hazmat: Cool, np [19:50] hazmat: I'm stepping out for some exercising, but will check when I'm back [19:50] niemeyer, cheers [19:52] negronjl: lemme do other charm stuff while waiting for Zaid...if he doesn't get back later today I'll spin up the natty builds. rather not have to go down that rabbit-hole if we don't have to. Do you need this now for anything? [19:52] m_3: not at all .. no rush [19:53] cool man... thanks [19:55] hazmat: your mail from last week is being delivered [19:55] m_3, yeah.. 
just flushed the system [19:56] used a clean install on my new ssd and forgot to set up my postfix properly.. but its pretty rockin, the ssd that is [19:56] m_3: no rush ... you got access: https://launchpad.net/~canonical-sig/+members#active [19:57] negronjl: thnks [19:58] * negronjl is out to lunch [20:49] <_mup_> juju/expose-refactor r421 committed by jim.baker@canonical.com [20:49] <_mup_> Merged trunk [21:00] SpamapS: call time? [21:06] m_3, hey so, no worries [21:06] but another incoming charm. :) [21:20] If someone has time to review https://code.launchpad.net/~marcoceppi/charm/oneiric/mysql/db-admin-relation-fix/+merge/83690 I'd be grateful [21:31] jcastro: cool... on it [21:35] Was there ever a decision on what to do about cryptographic/source checks when being checked out from a remote repository (git, svn, etc)? [21:50] marcoceppi, i think if its via a secure channel it was fine [21:51] niemeyer, can lbox set up the rietveld for an existing branch? [21:52] * negronjl is back [21:53] negronjl: I think that the ppa:canonical-sig/thirdparty is going to work [21:53] negronjl: it's still building, but it looks like it'll succeed [21:53] m_3: cool ... it should make things simpler :) [21:53] much [21:53] we'll need to test it tho [21:57] negronjl: ok, it finished [21:57] m_3: I'll start testing a bit later [21:58] cool... me too [21:59] m_3: I'm changing the hadoop charms so you can specify your own MapReduce job ... I am building on top of yours. I'll share later when I am done testing but, it should make the charm usable with any MapReduce job that can be downloaded [22:04] negronjl: yeah, I was going to talk to you about that... it was built for the demo but should totally be generalized [22:05] negronjl: config has lots of specific stuff too [22:06] m_3: I think I am going to follow your logic of having a script .... Maybe in the config.yaml file, I'll just have a var that is a script that will do everything that is needed. Each job is pretty unique and I don't think I'll be able to generalize enough to make it work for a big chunk of jobs out there. [22:06] negronjl: perhaps a url for pulling the MR jar even? [22:06] m_3: I'll have an example and we can all work on it from there .... [22:06] negronjl: we can also remove the job-related scripts too... whatever makes sense [22:07] m_3: removing the job-related stuff ... sure ... I am also toying with the idea of creating a new interface that will provide the job ... not sure how yet but, it would give us the hadoop charm and then we can have something separate ( a MapReduce charm ?? ) that will actually execute the job ( or provide the necessary stuff so the master can execute the job ). [22:08] m_3: many ideas to play with ... I'll do some stuff and we'll see where I end up. [22:08] m_3: if you have ideas, let me know too :) [22:08] cool... happy to help, just lemme know [22:08] maybe g+ sometime after the first pass [22:09] m_3: will do [22:12] What does one do if they don't have an automated install/config of HDFS/Hadoop ? [22:12] ssh in and scp your .jar's manually?
[22:12] SpamapS: yes [22:12] interesting [22:12] SpamapS: but, the majority of people that use MapReduce often have scripts that do this stuff [22:13] SpamapS: My idea is to provide a mechanism by which people can put their scripts in a charm ( interface or otherwise ) so, they can use the charm to deploy hadoop but still have their scripts to run their jobs [22:13] SpamapS: IMO it's a more realistic use of the charm [22:13] negronjl: very similar to a capistrano integration with a charm [22:14] m_3: not sure about that ... I have two options that I am about to investigate : [22:15] m_3: 1. In the charm, provide a variable where the charm will download and execute a script ( dangerous ?? ) [22:15] m_3: 2. Provide an interface which, upon relation, will do the same [22:15] m_3: two sides of the same coin ... they both will end up executing arbitrary code [22:15] I had always imagined option 1, but that doesn't really mean anything... just haven't thought too much about it [22:16] can totally see long-running hadoop services [22:16] m_3: Option 1 is probably the easiest to implement but, what I really want to do is completely decouple the charm from the job [22:16] and short-running job services that attach, run, then detach [22:17] m_3: Ideally in option 2. I can clean up the job left-overs upon breaking the relationship [22:17] that's a new angle too btw... good to blog about [22:17] m_3: this way the same hadoop cluster can be used for multiple jobs [22:17] negronjl: yup [22:17] m_3: It all sounds cool .... IF WE CAN GET IT TO WORK :P [22:17] * m_3 grins [22:28] marcoceppi: yo [22:28] m_3: hey [22:28] hey [22:29] marcoceppi: I assume the phpmyadmin package is broken? [22:29] I guess? [22:29] brunopereira81: hi [22:30] m_3: I opted to use source, because source is cooler than apt. And it appeared there was an issue with deb-conf [22:31] marcoceppi: ah, yeah that's what I was wondering [22:31] Is that okay? I thought charms kind of favored source over debs since the repo tends to lag behind a little [22:32] it's okay for a charm [22:32] charms should prefer packages over source [22:32] for all sorts of reasons [22:32] I'd say if the package works, use it... otherwise pull from source if the package is broken [22:33] depends on what you want with the charm though [22:33] for charms in general, anything goes [22:33] for charms in lp:charm that we're committing to maintain, etc, etc... [22:34] gotchya [22:34] SpamapS: you have an opinion on this? i.e., pulling from source when a package is available [22:34] I've been setting up RSS feeds for each of the upstreams that I've made charms for. So I knew when to update the charm [22:34] sorry I was making a red bull run to 7-11 .. reading [22:35] marcoceppi: cool, yeah, that's important... especially if they're not publishing hashes and we're hashing them ourselves [22:35] yeah [22:35] * marcoceppi shakes fist at mojang [22:35] in general, repo > ppa > source because you have better integration and testing [22:36] condition for acceptance into lp:charm? [22:36] repeatability and providing updates is a tough one there [22:37] if your charm does a source install, it needs to make it easy to update the software. [22:37] marcoceppi: does the package work in this case or is it really broken? [22:37] I was thinking actually that we could do that with a convention of having source-version as a config key [22:37] SpamapS: Should that be implemented in the charm-update hook?
[22:37] upgrade-charm, imo, should almost always just call install ;) [22:38] m_3: I haven't tested personally for the deb-conf errors. The package installs for local mysql [22:38] so, default: would be the version you tested it with.. [22:38] wow, source-version config key implies a lot of boilerplate work in every charm [22:38] m_3: teamspeak3 installs and runs on local repo, have some time to debug that part? Rest of sugestions will be implemented and updated asap (thx for the review) but at the moment the "not spinning up" is my main prio I would say. [22:38] or if there are security updates upstream, the version with the updates. [22:38] m_3: charm-helper :) [22:39] SpamapS: right [22:39] brunopereira81: sure... sounds good [22:39] . /usr/share/charm-helper/sh/updates.sh [22:39] actually [22:40] . /usr/share/charm-helper/sh/config-source-version.sh [22:40] brunopereira81: gimme a sec to finish up this review [22:40] this is why I'd say just use packages [22:40] SpamapS: makes sense [22:40] if the phpmyadmin package isn't behaving properly, then its buggy and we should fix it [22:40] I [22:40] I'll take a look at it and update the charm if needed [22:41] I recall there being issues with dbconfig-common and the way it works... [22:41] because you have to have a valid db connection before it finished configuring.. [22:44] marcoceppi: the charm iself doesn't have to do anything to update the software.. but there needs to be a prescribed, single way to update/patch the software [22:45] And since packages are *built* to do that, using a PPA (owned by charmers) seems the logical way to get that done. [22:45] SpamapS: Well if install hook always has the the method of getting the latest version (whether via update/upgrade or by source/compile) having it be idempotent and having upgrade-charm call it makes sense [22:45] Okay, so that statement really is more on the side of source/compile [22:46] marcoceppi: I don't like the idea of it always getting the current version [22:46] marcoceppi: that means you change behaviors whenever upstream releases. [22:46] and its entirely possible the PHP or something else that you have stops working. [22:47] marcoceppi: so, a default version, with a way to override it, makes sense. [22:47] As in a config hook? use_upstream [22:47] are config hooks made available in the install hook? [22:47] perhaps separate charms? phpmyadmin-latest -vs- phpmyadmin [22:48] marcoceppi: config-get is always available, so you can just call config-changed [22:49] Well, when I say current (latest) I mean the latest available in the install hook. so not latest.tar.gz, but x.y.z.tar.gz If a new release is available then the maintainer of the charm should update the install hook, test, then upload [22:50] then upgrade-charm hook should be written in a fashion that either creates an upgrade path, etc [22:50] I just talked myself out of an argument. Because this is starting to sound like a PITA [22:51] So, in this case, phpMyAdmin package is broken. I should repackage in a PPA owned by charmers? [22:51] marcoceppi: ;) [22:51] marcoceppi: you just reinvented dpkg!! [22:51] well done [22:52] * SpamapS is always reinventing things that have existed for years.. its like a game [22:52] heh, I'm really good at it :) [22:53] marcoceppi: I don't think the packaging is broken.. we just need to put some thought into it. 
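A hedged sketch of one way around the dbconfig-common issue SpamapS recalls above: preseed debconf so phpmyadmin installs without dbconfig-common prompting for a database connection, leaving the charm's relation hook to write config.inc.php once mysql actually exists. The key name follows dbconfig-common's usual <package>/dbconfig-install convention and is an assumption to verify against the real package:

    echo "phpmyadmin phpmyadmin/dbconfig-install boolean false" | debconf-set-selections
    DEBIAN_FRONTEND=noninteractive apt-get install -y phpmyadmin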
I'll take a look at it again when I get home [22:53] * marcoceppi heads home [22:53] marcoceppi: the long standing bug with dbconfig-common is that you must have mysql running before you can enter config details about your mysql connection to phpmyadmin.. [22:54] Couldn't you just set up a local MySQL instance, fake a complete setup, then manually inject settings to config.inc.php? [22:54] marcoceppi: it fails configuration otherwise. So.. the answer to that is, don't let dpkg configure phpmyadmin until you've related to a mysql server. [22:54] As it stands now, the charm can accept multiple db-admin relations if you have multiple mysql charms running in an environment [22:54] I'm not sure how dbconfig-common will handle that [22:55] phpmyadmin only allows one database right? [22:55] nope [22:55] you can set up multiple server relations [22:55] well then dbconfig-common is probably the wrong solution to configuring phpmyadmin [22:55] marcoceppi: head home, I'm going to look at the package now.. it sounds like we may be able to just ignore dbconfig-common entirely. [22:56] SpamapS: \o/ [22:56] as much as I despise phpmyadmin.. its a godsend for some people. :) [22:56] yeah [22:56] I think it'll be nice for juju, if you want to spin up really quickly to investigate mysql :) [22:57] we should support the shared-db relation in the charm too [22:57] * m_3 likes that it can be spun down just as quickly :) [22:57] so you can give people a web instance to inspect a single database [22:58] yeah, there's real use for that case [22:58] SpamapS: Good point, I'll take a look at that too, tonight [22:58] with a read-only config option? [22:58] * marcoceppi heads to the metro [22:59] marcoceppi: later man... charm looks great in general [23:00] m_3: that would be cool [23:03] m_3: after deploy state: started and I can connect to the server with the client and use the pre-set admin token, restart service is implemented and metadata is fixed on next push [23:03] m_3: any idea where it might have gotten stuck at? [23:09] marcoceppi: tried to capture most of these suggestions into the review [23:10] brunopereira81: ok, lemme back up and recycle my env [23:10] ;) [23:10] * m_3 needs more than 2 ec2 accounts [23:26] m_3: We're gonna have a hadoop-mapreduce charm [23:27] m_3: This charm will be in charge of the job ( setup, execution, reporting, cleanup, whatever ). [23:27] m_3: After this, we can then deploy another mapreduce charm to the same hadoop cluster and have it work just fine as the previous mapreduce charm should have cleaned everything up. [23:28] negronjl: I like it [23:28] m_3: After a while, I'll be working up multiple mapreduce jobs per cluster [23:28] negronjl: multiple same time? [23:28] ha [23:28] m_3: Not that I've ever done that but, I figure I would try anyway :) [23:28] really fits in nicely with the whole HaaS thing [23:28] HaaS ?? [23:28] at hadoopworld, there were plenty of big folks [23:28] like jpmorgan [23:28] Hadoop as a Service ? [23:28] who have a hadoop services group [23:28] yeah [23:29] m_3: cool.... we'll be able to easily provide that :) [23:29] they provide hadoop services to different business units [23:29] m_3: I just need to make a new mapreduce job ... terasort is getting kind of old [23:29] drove hdfs security patches and similar [23:29] m_3: any ideas ? [23:29] I bet! [23:30] the cisco talk had a bunch of cool benchmarks [23:30] lemme dig it up [23:30] brunopereira81: ping [23:30] m_3: any idea where I can get the text for the bible ? [23:30] hmmmm...
nope [23:30] m_3 found it? [23:30] m_3: maybe I can use that as in_dir and mapreduce something [23:30] probably part of the gutenberg project :) [23:31] brunopereira81: was PMing you [23:31] <_mup_> juju/expose-refactor r423 committed by jim.baker@canonical.com [23:31] <_mup_> Renamed juju.state.expose to juju.state.firewall [23:31] brunopereira81: so my ts3 daemons aren't starting [23:31] you have a 'service stop' at the end of the install hook [23:32] but a 'service start' in start hook [23:32] what should be the initial state of the service? [23:32] m_3: thx got it. [23:33] negronjl: figured it'd be kinda funny if the gutenberg proj didn't have a bible [23:35] crap reading about terasort reminds me that I 'm still supposed to record a screen cast of the hadoop thing [23:35] * SpamapS *HAAAAAAATES* screen casting. :-P [23:36] SpamapS: What is your deadline on that ? [23:37] SpamapS: ahh .. does it has to be on the ODS thing ? [23:42] yeah it needs to be the same format [23:42] actually its a lot harder to do with EC2 or even canonistack [23:42] having a local openstack was nice. ;) [23:42] local provider works, but looks weird because there's only 1 machine [23:43] * SpamapS ponders setting up an openstack on his laptop to make it look more interesting. [23:43] SpamapS: ahh ... I'm working on the hadoop-mapreduce charm ... thought you could use it but, It'll be done soon enough and I'm sure you'll be dying to make another screencast :P [23:58] <_mup_> juju/trunk r418 committed by jim.baker@canonical.com [23:58] <_mup_> merge expose-refactor [r=bcsaller,fwereade][f=873108] [23:58] <_mup_> Refactors the firewall mgmt in the provisioning agent into a separate class.
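To close the loop on the hadoop-mapreduce discussion above, a rough sketch of negronjl's "option 1": a config key pointing at a job script the charm downloads and runs. The charm name, key name, and URL are assumptions, not an actual charm's interface:

    juju deploy hadoop-mapreduce
    juju set hadoop-mapreduce job-script-url=http://example.com/run-my-job.sh
    # the charm's config-changed hook would fetch and execute the script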