[00:00] which ran fine
[00:07] SpamapS, are we still marking bugs "fix released" when merged into trunk?
[00:07] or should that be "fix committed"?
[00:08] jimbaker: I think merged => fix is released
[00:08] but I don't know if we have anything special for the juju release process :)
[00:10] m_3, sounds good, i'll just wait on SpamapS for the final say here :)
[00:29] jimbaker, fix released is what we've been doing
[00:29] jimbaker, a separate distro task tracks the other
[00:35] bcsaller, others for?
[00:36] the lxc stuff and the --test option support
[00:36] bcsaller, i'll probably leave the others for review another day, but i can do the subordinates now
[02:26] lunch
[03:30] ohhh whatcha fixin m_3 ?
[03:30] * imbrandon wants extra bacon :)
[05:21] I like to buy the world a home, and furnish it with love ... grow apple trees and honey bees, and snow white turtle doves. ./~
[05:23] * imbrandon has no idea why that song, from a commercial that aired before he was born, is stuck in his head.
[05:30] is juju comparable to salt stack ?
[05:33] misto: kinda, saltstack is more infrastructure management, juju orchestrates services and how they interact
[05:33] imbrandon: had noodles :)
[05:33] so you run juju from your dev box and orchestrate your ec2 ?
[05:33] juju is an event engine that lets you do different things for events, including managing infrastructure but also a lot more
[05:34] m_3: :)
[05:34] misto: thats part of it yes
[05:34] but not really ec2
[05:35] more about the services you're running ON ec2 or RAX etc
[05:35] the services are king, not the infrastructure, you don't have to care about that
[05:38] I am trying to understand which solution is best to manage an entire services stack on amazon web services
[05:38] saltstack, juju, or bare cloud formation init scripts
[05:39] misto: think about this example: "juju deploy myweb" , with one command i don't have to care that it set up a new user on the db with correct permissions, made sure the db server was tuned for high load, that the webserver was configured correctly, or that it can scale to 1000 req a second
[05:39] misto: well the answer is ... yes
[05:39] misto: because all of those tools do different things that may somewhat overlap
[05:40] misto: but imho to do EVERYTHING as you state, juju will be what you may be looking for
[05:40] as salt won't do service orchestration and cloud init is too bare
[05:41] but like i said it's kinda an apples to oranges comparison
[05:43] misto: in reality you may end up with something like saltstack or puppet manifests inside juju charms
[05:43] :)
[05:43] and that is the part that confuses me
[05:43] does juju monitor the health of the instances, kinda like cloud watch? and then spawn new instances?
[05:44] or does it have a recipe that it follows to spawn an instance, and then puppet/saltstack goes from there ?
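The one-command story being described maps to the pyjuju CLI roughly like this (an illustrative session, not run here since it needs a bootstrapped environment; the service names are examples):

```shell
juju bootstrap                        # start the environment's bootstrap node
juju deploy mysql                     # charm hooks handle install and tuning
juju deploy wordpress
juju add-relation wordpress mysql     # db user/permissions set up by the relation hooks
juju add-unit -n 5 wordpress          # scale the web tier with one command
juju expose wordpress                 # open it to the world
```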
[05:44] it can, it doesn't tho
[05:44] think of it like init.d for the cloud
[05:45] it's an event engine, you can make it do anything really
[05:45] I have to see a charm
[05:45] check out a few, jujucharms.com has links to all of them
[05:46] tnx
[05:46] np, yea it's a lot to wrap your head around but there really isn't much out there that compares
[05:47] so it's hard to explain sometimes :)
[05:48] the part that is appealing is that it is company-backed
[05:52] :)
[05:54] yup juju is backed by canonical and the community both ( in fact I'm community and m_3 is company as far as active on IRC the last hour or so heheh ) although that distinction is rarely needed, we all work to the same end for the most part
[05:56] I came across juju from go
[05:56] ahh cool
[05:57] yea there was a presentation by gustavo iirc at google io yesterday or the day before
[05:59] yep, friday
[05:59] m_3: i think that's a good comparison, you? juju is a bit like init.d/upstart for the cloud, heh
[06:00] is ensemble part of juju ? or is it another thing ?
[06:00] imbrandon: dunno
[06:00] ensemble became juju
[06:00] misto: ensemble is the old name of the project.. been renamed to juju
[06:00] it was renamed
[06:00] gotcha
[06:01] m_3: the thinking ( in my head ) is when different events happen, like the network coming online, then upstart fires a script or hook
[06:02] heh
[06:02] but i guess it's more than that, as there is the analog to dbus talking for relations
[06:02] hrm
[06:02] heh
[06:03] imbrandon: yeah, the key is the interdependency... which I guess upstart has with particular events... never made that connection though
[06:03] yea it's a bit of a stretch, but kinda
[06:03] little more of a notion of handshaking and conversation with relations... not just waiting on them
[06:03] yea
[06:04] i.e., juju has a little more...
[06:04] right, kinda what i meant about the dbus backtalk
[06:04] but yea then it's another piece
[06:07] m_3: i don't think bacon and noodles would mesh well :) /me had McDonalds for lunch, gonna regret that one later
[06:08] from the faqs: It is not yet ready to be used in production.
[06:09] misto: depends on your value of production, Mark prob said it best in his last email ( that I should add to the faq ), but for example www.omgubuntu.co.uk is run by juju on EC2 and enjoys 7 million+ pageviews a month
[06:10] let me grab a link to how he put it ...
[06:11] misto: https://lists.ubuntu.com/archives/juju/2012-June/001722.html
[06:14] misto: basically, before using it in production, read all of these bugs: https://bugs.launchpad.net/juju/+bugs?field.tag=security and https://bugs.launchpad.net/juju/+bugs?field.tag=production
[06:14] misto: you need to consider them, and work around all of them before using it
[06:15] and omg is a case where the sysops are a team of 1
[06:15] :)
[06:21] the bootstrap node needs high availability :D
[06:21] can you have more than one bootstrap node in different regions ?
[06:25] misto: well for EC2, you should have two bootstrap nodes anyway (one in each region)
[06:26] misto: it's actually not that hard to get two regions talking to one another. Just that there's nothing built in, so you'll have to write your own custom charm to do it.
[06:36] SpamapS: i think charm getall should dump the charms into a series subdir; my mode of deployment recently has been "charm getall /var/lib/charms && mkdir /var/lib/charms/precise && mv /var/lib/charms/* /var/lib/charms/precise && export JUJU_REPOSITORY=/var/lib/charms && juju deploy local:nginx"
[06:36] that would remove all that moving crap
[06:38] imbrandon: err, charm getall will put them wherever you tell it to.. so...
[06:39] mkdir -p /var/lib/charms/precise && charm getall /var/lib/charms/precise ?
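The repository layout being discussed can be sketched as a short script; the path is an example (the chat used /var/lib/charms), and the `charm`/`juju` invocations are commented out since they need charm-tools and a bootstrapped environment:

```shell
#!/bin/sh
# Sketch of the local-repository layout discussed above.
# JUJU_REPOSITORY points at a directory containing one subdir per series.
REPO=/tmp/charm-repo-demo          # example path; the chat used /var/lib/charms
mkdir -p "$REPO/precise"

# With charm-tools installed, fetch everything straight into the series dir,
# avoiding the mkdir+mv shuffle from the chat:
#   charm getall "$REPO/precise"

export JUJU_REPOSITORY="$REPO"
# juju deploy local:nginx          # resolves against $JUJU_REPOSITORY/precise
ls "$REPO"                         # -> precise
```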
[06:39] right but i am thinking when there is more than one
[06:39] like precise and quantal
[06:39] then i can still say getall and it actually gets all
[06:39] not just the current series :)
[06:41] that and if the .mrconfig is in the precise dir
[06:41] then juju complains that it can't make sense of it
[06:41] oh thats
[06:41] crazy
[06:41] get a series at a time please :)
[06:41] heh
[06:41] imbrandon: the .mrconfig will be ignored by trunk
[06:41] k
[06:42] that was merged today IIRC
[06:42] rockin
[06:42] tho frankly mr is crap
[06:42] yea i tried to make it get my git repos too
[06:42] before mbp left Canonical he offered to hack up a bzrlib thing that used 1 SSH connection to do them all
[06:42] it barfed
[06:43] i added [something else] git clone https://sdfsdfsdf.git at the end
[06:43] it did not like :)
[06:44] * imbrandon didn't read the mr docs tho, for full disclosure, just "tried" it
[06:45] i was like ohhh cool .... damn :(
[06:46] SpamapS: http://paste.ubuntu.com/1072617/
[06:46] note machine 0
[06:46] does that happen to you as well ?
[06:48] imbrandon: yes
[06:48] same problem unfortunately
[06:48] kk
[06:48] imbrandon: hp or rax?
[06:48] I really want to try rax
[06:48] hp
[06:48] since theirs is essex
[06:48] yea i need to dig out my rax credentials
[06:48] and make sure they are still good
[06:48] i haven't used them in weeks
[06:49] SpamapS: is the os provider in trunk now ?
[06:52] let's see if I am able to set up tomcat on my localhost with juju
[06:53] imbrandon: no, I believe it has some rough edges in testing to get right
[06:55] k
[07:35] * SpamapS wonders what level of hell the demon that designed nagios's config structure came from
[07:43] SpamapS: http://paste.ubuntu.com/1072692/
[07:44] catches 503, 404, etc etc etc
[07:44] nice
[07:45] you can even specify specific ones like
[07:45] error_page 404 = @404fallback
[07:45] etc
[07:56] alright.. hmm..
beginnings of a generic monitoring interface taking shape
[07:57] just need a second monitoring implementation to see if it's feasible.. hm
[07:57] later.. time for sleep
[07:58] SpamapS: would the newrelic ones work ? or too centered on an external svc
[07:59] ttyl
=== zyga_ is now known as zyga
[11:31] SpamapS, just got activated by the rax ostack beta
[11:31] took about a week
[13:07] Does anyone know about stackops for deploying openstack, compared with ubuntu juju charms?
[13:09] sanderj_: without putting words in his mouth, I think adam_g may have the most insight into that from what i've noticed just hanging round
[13:10] imbrandon, seems like he is away.
[13:11] could be so, i haven't seen him active today, just know he works with juju and openstack both pretty closely
[13:11] not that he is the only one, just the first that came to mind
[13:11] i may help with juju questions but know next to nothing about stackops
[13:11] sooooo :)
[13:12] Ah, ok.
[13:12] I'm just wondering if there is any downside in choosing stackops..
[13:12] ahh now that i could not tell ya :)
[13:12] ok
[13:13] there are others here that could though, if ya idle long enough
[13:13] i'm sure some will pop in
[13:13] Ok, i'll wait for someone
[13:13] some days you can't get a word in edgewise, some days it's a bit slow :)
[13:14] but yea there are a few around that should have at least a little insight
[13:38] cjohnston: heya
[13:39] yo
[13:39] wanna try the openstack provider on RAX ? got a shiny new nginx charm ( that should match what we set up the other day manually )
[13:40] just pop in some creds and bootstrap, deploy nginx with juju, take a few copy and pastes for our notes, and then use it if you want; if not, kill the env
[13:40] :)
[13:40] maybe sometime later?
I'm in the middle of a few things right now
[13:40] sure sure
[13:40] when you got time hit me up
[13:40] sanderj_, so there are many vendors with their own ostack distribution; doing that, to me at least, means getting away from upstream and becoming dependent on the vendor. the juju charms track upstream closely; we perform per-upstream-ostack-commit testing on multi-node bare metal with openstack. i haven't used stackops so i couldn't really say much about them, outside of it looks like they have their own distribution. it also doesn't look like they document their product offering or pricing, so rather hard to say. if you want a commercial install setup, canonical sells a fixed price jumpstart for a 20 node installation... really depends on what you're looking for: free and easy to install, commercial support, commercial features, automated mass installer, custom consulting, etc.
[13:44] hazmat, I read somewhere that someone doubted stackops will be able to release security upgrades just as ubuntu will.. every 6 months.
[13:44] But that's a wild guess I believe.
[13:45] well ubuntu releases security updates as needed, not just every 6 months
[13:46] we have a new stable release every 6 months tho
[13:46] and newer releases of openstack specifically will be available/supported on precise/12.04 LTS
[13:47] ( that means 5 years of security support minimum )
[13:48] time for breakfast, bbiab
[13:48] btw moin hazmat
[13:52] sanderj_: adam_g is on west coast time, he should be around in a few hours
[13:55] Seems like stackops is based on ubuntu.
=== benji___ is now known as benji
[14:47] sanderj_: stackops looks like a wizard in front of some other technology like Cobbler or MaaS..
[14:49] sanderj_: one advantage you get w/ juju+maas+ubuntu is that the entire thing is open source and developed in concert with the community.. I don't know if stackops shares that
[14:50] SpamapS, there is one guy in #stackops so it can't be that huge a community.
[14:51] looks like it's a django app
[14:52] with some tight integration into horizon somehow
[14:55] Hmm... interesting.
[14:56] It's fascinating actually
[14:56] So you just boot up all these boxes..
[14:56] hit them in your browser..
[14:57] and they redirect you to stackops.org to configure them
[14:58] sanderj_: Juju+MaaS is still undergoing a lot of development and growing pains.. right now they both have issues, but the juju approach at least seeks to *try* to let you work in a self contained manner.
[14:58] sanderj_: and by "they both" I mean juju and stackops
[14:59] uhm, i have a juju machine agent pegged at 100% cpu, anyone seen this?
[15:00] sidnei: yes
[15:00] sidnei: kill it
[15:00] sidnei: bug fix is coming soon.. basically just destroy that env
[15:01] sidnei: bug #1006553
[15:01] * sidnei un-green-ly destroys the environment
[15:01] <_mup_> Bug #1006553: local provider machine agent uses 100% CPU after host reboot < https://launchpad.net/bugs/1006553 >
[15:05] SpamapS, I'm not sure.. but after some reading.. it seems like stackops is still running ubuntu 10.04
[15:06] sanderj_: that seems like a wise choice for the next couple of months. 12.04.1 will have quite a few bug fixes. :)
[15:07] AH, ok.
[15:10] sanderj_: though I'd hope their beta product would move forward to 12.04
[15:10] sanderj_: that's one thing they're going to have a hard time with; if they only ever track the LTS's.. they won't get the incremental bump every 6 months.
[15:10] still it's a really interesting product
[15:24] hm, I think I may have found a bug
[15:24] you can't store yaml in relation settings
[15:25] http://paste.ubuntu.com/1073239/
=== zyga is now known as zyga-food
[15:48] SpamapS: hey can you explain the workflow for putting the openstack provider in 12.04? When hazmat lands it, will it be SRU'ed or will it come with the next milestone of juju for 12.04?
[15:51] jcastro: there is no next milestone of juju for 12.04
[15:51] jcastro: SRU's are for serious bugs
[15:51] jcastro: it will land in the PPA, and I think we will land a "stable PPA" in the next few weeks.
[15:51] ugh, really
[15:51] jcastro: we can also go with precise-backports
[15:54] hmm, in hindsight we should have figured out a way to add providers in the stable release
[15:54] juju-openstack or something
[15:58] jcastro: so with pyju destined for retirement, the best approach is to push folks towards a PPA.... the more stuff bolted on to pyju, the messier things get
[15:59] yeah it just sucks that we're only like 3 months past release and what's in the archive is basically grrrr ....
[16:03] m_3 and I argued for a plugin architecture from the beginning
[16:03] jcastro: the thing in precise-proposed is great
[16:03] jcastro: we need to finish verification of it actually
[16:04] <_mup_> Bug #1020635 was filed: cannot store yaml in relation settings < https://launchpad.net/bugs/1020635 >
[16:05] SpamapS, did you try charm format 2 re bug 1020635?
[16:05] <_mup_> Bug #1020635: cannot store yaml in relation settings < https://launchpad.net/bugs/1020635 >
[16:05] jimbaker: no, let me do that, but I doubt it will matter
[16:06] SpamapS, there is extensive testing of yaml for relation settings, so i would expect it should work
[16:06] jimbaker: so that's new only for format 2?
[16:07] SpamapS, correct
[16:07] SpamapS, you need to specify it in the charm itself, using format: 2
[16:07] jimbaker: I'm using precise-proposed so it doesn't exist yet in that one. Moving to PPA
[16:07] SpamapS, ok
[16:08] jimbaker: btw is there any reason bcsaller is trying to fix the natty/oneiric failures when it was your commit that broke them?
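One obvious workaround for the yaml-in-relation-settings bug (it comes up again later in the log) is to ship the YAML through the string-only setting as an opaque base64 blob and decode it on the other side. A dependency-free sketch; the payload and the `monitors` setting name are illustrative, and the relation-set/relation-get calls are commented out since they only exist inside a hook:

```shell
#!/bin/sh
# Workaround sketch: relation settings choke on raw YAML, so ship it
# as an opaque base64 string and decode on the receiving unit.
payload='monitors:
  http:
    port: 80'

encoded=$(printf '%s' "$payload" | base64 | tr -d '\n')   # strip line wrapping
# relation-set monitors="$encoded"          # sending unit (inside a hook)

# receiving unit would do: encoded=$(relation-get monitors)
decoded=$(printf '%s' "$encoded" | base64 -d)
printf '%s\n' "$decoded"
```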
[16:08] oneiric/natty are still on r543
[16:08] SpamapS, i think it was purely the fact that i was sick last week
[16:08] ahh
[16:09] tho it looks like precise/quantal are stuck on 546
[16:09] PPA is in bad shape
[16:09] jimbaker, do you want to take over from bcsaller on that one? bcsaller did you make any headway on that one?
[16:10] hazmat: I didn't, the delta looked fine to me and the code seemed to run in isolation
[16:10] hazmat, i can do that, although it's still not clear to me how to reproduce
[16:11] I can't reproduce it even in an oneiric/natty chroot...
[16:11] like bcsaller, the code seemed to run just fine in isolation
[16:11] hmm
[16:11] but I suspect it is a timing bug
[16:11] something in natty/oneiric goes slow, or doesn't handle a race properly (older twisted maybe?)
[16:11] even when i ran it on a small oneiric instance, in which case lots of other stuff did fail in the tests
[16:11] this however was just json marshalling
[16:11] just not the format stuff
[16:11] SpamapS, not likely
[16:12] this is simple string matching
[16:12] correct, there is nothing async here
[16:12] jimbaker: other things failed?
[16:13] is it possible those failures were async and manifest by screwing up state in a way that bleeds into this test code?
[16:13] SpamapS, i didn't make a note, but i did see various failures, apparently based on resource constraints
[16:13] small should be able to handle the tests.. that's odd
[16:14] SpamapS, when tests fail, they can certainly bleed into other ones
[16:14] jimbaker, only if the test is broken
[16:14] failures should not cascade
[16:14] if the test isn't properly yielding or cleaning up.. then it's broken
[16:15] even if it doesn't fail
[16:15] hazmat, these are good points. again, i didn't attempt to diagnose this particular case, i just noticed the failing tests when i did this yesterday. worth repeating
[16:16] yeah..
and capturing
[16:22] 'morning all
[16:24] jimbaker: format 2 does not help
[16:25] SpamapS, hmm, well at least that's a useful data point
[16:26] jimbaker: I'm digging into the zk tree now
[16:27] pretty sure it's just a case of needing to escape input when building the topology node
[16:27] workaround is to just base64 encode the yaml
[16:28] SpamapS, if it's a string that you want to interpret as a binary string, it should be b64 encoded (per yaml)
[16:28] no it's a string
[16:28] SpamapS, this is supported for format: 2, and tested
[16:28] relation-set should take *ANYTHING*
[16:29] except perhaps nulls, since we're passing in via cmdline args so null termination is necessary
[16:30] SpamapS, i'm not certain what that means in practice, because of encoding issues
[16:31] well the docs need updating then, they're vague
[16:31] format: 2 does change that interpretation so that it actually works w/ yaml
[16:32] so you can specify any yaml input and it will be faithfully preserved as such upon a later relation-get
[16:32] yeah I suspect this is something else
[16:34] SpamapS, definitely appears to be the case
[16:35] * SpamapS *curses* the useless backtrace
=== izdubar is now known as MarkDude
[17:20] rumour has it, there is an openstack provider I can do heavy testing of
[17:20] on a cloud.
[17:20] mgz: hazmat: ^^^
[17:21] * xnox has a lot of cloud to run juju on ;-)
[17:21] so xnox wants to try rebuilding the archive in HP Cloud, I figure it's a good time to bang on the provider while we have him here?
[17:22] xnox, hpcloud has some issues; i believe the worst case is you have to shut off machines by hand. it's at lp:~gz/juju/openstack_provider
[17:22] xnox: provided you're happy using the tools to clean up manually if needed... what hazmat said
[17:22] hazmat: ok.
I will run it and i'll fiddle with it ;-)
[17:23] I need to integrate a couple of fixes for HP, but I'll do that now and poke you to pull
[17:26] mgz just sent out review round 2 fwiw
[17:28] thanks hazmat
[17:33] xnox: pushed the changes you'll need
[17:34] mgz: cheers
[17:37] for config, you need to set in environments.yaml - {type: openstack, default-image-id: (an image as returned by `nova image-list`), default-instance-type: (1-5 per `nova flavor-list`), juju-origin: lp:~gz/juju/openstack_provider}
[17:39] ok
[17:41] and have in the environment OS_USERNAME etc
[17:41] I'm off out for a bit but will be around later.
[17:41] mgz: same here. Off to go home ;-)
[18:42] jamespage: ping
[19:03] jcastro, pong
[19:03] hey so since there's only one thing in the queue for tomorrow
[19:03] I was wondering if you could investigate bug #1020691
[19:03] <_mup_> Bug #1020691: Charm doesn't work at all < https://launchpad.net/bugs/1020691 >
[19:05] jcastro, sure
[19:06] jcastro, where is the branch?
[19:06] * jamespage goes to look
[19:06] lp:~charmers/charms/precise/ubuntu/trunk
[19:09] jcastro, ta
[19:10] weird - I'll talk to folk tomorrow....
[19:10] jcastro, BTW the europython juju presentation went well
[19:10] oh man I totally forgot about that
[19:10] good to hear!
[19:11] standing room only and **loads** of questions at the end
[19:13] that's really excellent
[19:13] any pics by any chance?
[19:19] jamespage: also, we found a typo in the hadoop thing in the flyer
[19:19] jcastro, did you?
[19:19] so either the charm changed or we messed up
[19:19] what was it?
[19:19] but we checked it like 4 times so I dunno what happened
[19:20] hazmat: the last line right?
[19:20] juju add-unit -n20 hadoop hadoop-slavecluster
[19:20] ah
[19:20] I see
[19:21] it's ok, we're due for a reprinting anyway
[19:21] * jamespage phew
[19:21] anyway - have to go get my flight - until tomorrow
[19:21] I was quite looking forward to reviewing the nginx charm
[19:21] ho-hum
[19:31] jamespage, awesome!
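Pieced together, the environments.yaml stanza mgz describes would look roughly like this (a sketch: the environment name is hypothetical and the image id and flavor are placeholders you'd read off `nova image-list` and `nova flavor-list`; credentials come from OS_USERNAME etc. in the shell environment):

```yaml
# ~/.juju/environments.yaml (sketch; values are placeholders)
environments:
  hpcloud:                        # hypothetical environment name
    type: openstack
    default-image-id: 1234        # an id from `nova image-list`
    default-instance-type: 1      # 1-5, per `nova flavor-list`
    juju-origin: lp:~gz/juju/openstack_provider
```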
[19:32] jamespage, i'm curious to talk post-flight to get the question highlights
[19:44] m_3: i sent the title rename to michelle, we should be good there
[20:32] jimbaker: so I haven't nailed down the exact problem, but I think the real issue is that the yaml is kept *as yaml* rather than embedded as a string of bytes
[20:32] I have silly questions: is there an ssh charm which i can reuse to make 50 slaves accessible via ssh from the master node, with the master node's ssh key generated / distributed by juju to the slaves?
[20:33] jimbaker: http://paste.ubuntu.com/1073747/
[20:33] jimbaker: IMO, the yaml underneath 'monitors' there should be escaped
[20:33] xnox: actually
[20:33] xnox: I made an attempt at an MPI charm for john the ripper..
[20:34] xnox: in which the master generates a key and installs it on all of the slaves
[20:34] xnox: lp:~clint-fewbar/charms/precise/john/trunk
[20:34] SpamapS, instead of being stored as a map
[20:35] SpamapS: let me see. I'm guessing it's not a charm but hooks
[20:36] SpamapS, i still don't see how that issue then becomes a problem with the topology node
[20:44] SpamapS: so for example ceph's charm has useful ssh interfaces and hooks
[20:44] can I reuse those in my package "for free" without copying its hooks
[21:08] xnox: no, we don't have inheritance, but this is about the 10th time in the last month that I've seen a need for it
[21:08] xnox: of course, we could package those hooks into a library
[21:09] SpamapS: yeah, cause there are plenty of things that talk to each other over http or ssh, and it would be nice for $charm-master to require being $ssh-master and for its `units` to be $ssh-slave of their master
[21:09] that would be nice. or as a subordinate service
[21:10] in some cases you would want all units to be able to talk to each other - e.g.
shared memory computations
[21:10] but in most cases one2many, aka 1 master and Many slaves, should be sufficient
[21:11] subordinates are a bit clunky for this
[21:11] it works
[21:11] but it's really not an awesome experience
[21:13] SpamapS: source /usr/share/xnox-juju-hooks/ssh.hook
[21:15] SpamapS: unless my master node should be the one I am running juju from....
[21:15] in that case I can ssh into all of them
[21:16] xnox: I don't really know what you're trying to say. :P
[21:16] SpamapS: create a beefy server
[21:17] SpamapS: login
[21:17] SpamapS: juju deploy 100 slaves
[21:17] now using juju describe/status whatever, simply start executing parallel workflows from a screen session
[21:18] SpamapS: or is that actually the typical way to use juju, e.g. not from the local machine but from a public cloud instance to begin with
[21:18] xnox: sure, just have the master charm set you up a parallel-ssh or capistrano or fabric config
[21:19] xnox: that's what the john charm does
[21:20] only with .mpd.hosts
[23:29] jcastro: cool deal
[23:34] .. saltstack
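The master-generates-key, slaves-install-it pattern described for the john charm can be sketched as hook fragments; the hook names, relation key, and directory are made up for illustration, and the juju-only `relation-set`/`relation-get` calls are shown as comments:

```shell
#!/bin/sh
# Sketch of the master side of an ssh master/slave relation, in the spirit
# of the john charm mentioned above (names and relation keys are hypothetical).
KEYDIR=$(mktemp -d)

# "install" hook: generate the master's keypair once, no passphrase
ssh-keygen -q -t rsa -N '' -f "$KEYDIR/id_rsa"

# "slave-relation-joined" hook on the master would publish the public key:
#   relation-set pubkey="$(cat "$KEYDIR/id_rsa.pub")"
# and each slave's relation-changed hook would install it:
#   relation-get pubkey >> ~/.ssh/authorized_keys

cut -d' ' -f1 "$KEYDIR/id_rsa.pub"    # -> ssh-rsa
```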