[00:10] woot [01:36] SpamapS, can we kill build recipes for oneiric & natty? [02:24] umm SpamapS / hazmat : ideas on what's wrong ? [02:24] http://paste.ubuntu.com/1101192/ [02:24] 12.04 pretty clean install, nothing strange, juju from ppa [02:53] imbrandon, that's jitsu not juju [02:53] do you have some sort of alias in there [02:53] jimbaker, ^ [03:33] hazmat: yea i ran juju-wrap [03:34] it should have ( and has until now ) passed through commands [03:34] that were not jitsu right on to juju [03:34] something is broken with it [03:35] maybe i'll take that as my cue to propose the code that would do it a different way that i've been contemplating for a month [03:35] heh [03:36] hazmat: but to get what i have in the pastebin just "jitsu wrap-juju" then go [03:37] * imbrandon likes the idea of "alias juju=jitsu" better , that's the other way i mentioned [03:38] imbrandon, jimbaker recently added built-in help commands, possibly that interferes with wrap [03:39] ahh yea likely, but this does give me said opportunity so i'll dig a little [03:40] fwiw i never used jitsu any other way hehe, kinda surprised you hadn't noticed the wrapper yet [03:40] :) [03:41] but yea a check to see if $0 == juju | jitsu should be less problematic anyhow imho [03:41] and that's basically what i'd want to change it to , then maybe add the wrap-juju command to emulate it for backwards compat [03:42] so it can be added as a symlink like ln -s /usr/bin/jitsu /usr/local/bin/juju; or alias juju=jitsu; [03:42] makes it quite nice ( spoiled by a few other utils that do it that way ) [03:48] imbrandon, i've tried to avoid the wrapper as a bad habit ;-) [03:48] :) [03:49] i can hardly use git without the hub wrapper [03:50] git+hub = github integration for git , like bzr+lp integration [03:50] and other cli niceties like colors etc [03:52] so stuff like "git clone brandonholtsclaw.com" runs "git clone https://bholtsclaw@github.com/bholtsclaw/brandonholtsclaw.com.git" [03:53] and git push the same , plus added 
stuff like "git create" and "git pull-request" ( merge proposal ) [03:53] etc etc === almaisan-away is now known as al-maisan === al-maisan is now known as almaisan-away [07:25] Hello ! [07:27] Noob question - got MaaS setup and working, now after juju bootstrap completes (and i wait for the machine to come up) i can't get a connection from juju status ? [07:27] Any ideas ? [07:43] so zookeeper is missing on both my maas server and the juju bootstrap node [07:44] asachs_, can you ssh to the zookeeper node with your predefined ssh key ? [07:44] (not that i have an idea how the whole stuff works, but this is what i would check first) [07:44] melmoth: i can ssh about so cloud-init did its job [07:45] juju --verbose status ? maybe this will give you more info ? [07:45] melmoth: sounds like the juju bootstrap node runs the zookeeper instance ? [07:45] as far as i have understood, yes. When you bootstrap it starts a node and runs zookeeper on it. [07:46] melmoth: had to install zookeeper myself by hand - juju did not install it [07:46] juju -v status gets me : DEBUG Environment still initializing. Will wait. [07:46] hmmm. maybe some hints as to why the install failed are available in the /var/log/cloud* log files ? [07:46] let me have a look [07:49] melmoth: no error and also no indication of installing zookeeper, last line is : handling final-message with freq=None and args=[] [07:50] * asachs_ is stumped [07:50] i have no idea what could be wrong. [07:52] i wonder if i should go bug the guys on #juju-dev === zyga-afk is now known as zyga [08:16] asachs_: how long has it been [08:17] since you told it to bootstrap [08:17] 2 hours [08:17] it looks like the bootstrap command is not installing and starting a zookeeper instance [08:18] there is no zookeeper running on the node it installed ? [08:18] is zk installed ? 
[08:34] it's not installed at all [08:34] it's a maas setup [08:35] bootstrap returns success pretty quick - while the hardware boots, not sure what is responsible for injecting software into the newly booted host [08:35] for juju [09:23] https://juju.ubuntu.com/docs/getting-started.html mentions "It's also required that the environment provides a permanent storage facility such as Amazon S3." [09:23] i was able to bootstrap juju on an openstack where there was no swift. [09:23] so, what is this s3 storage needed for ? [09:24] (i would not mind about it, but horizon complains about a missing s3 catalog entry when i ask it to generate my environment.yaml) === almaisan-away is now known as al-maisan [09:27] melmoth, hi [09:27] hola [09:27] melmoth, the storage is used for 2 things [09:27] melmoth, 1, storage of charms you publish to the environment [09:28] melmoth, 2, a tiny snippet of data in a known location so that the client knows where to find the zookeeper node [09:29] does this mean i will not be able to deploy charms without installing swift and an s3 compatibility layer ? and how can it work with a simple local lxc install then ? [09:29] * melmoth is confused [09:30] melmoth, the LXC install just runs a local webdav server for charm storage, IIRC, and it just runs the zookeeper on localhost (so there's no need to look somewhere else to find where it is) [09:31] melmoth, it is I agree something of a shame that ec2-style providers expect s3-style storage, it would be cool if we were to make storage independent from the provider [09:31] so it does mean that if one wants to use juju on his private openstack cloud, he needs to install swift as well ? [09:32] (i am assuming swift can act as a s3 storage solution) [09:32] melmoth, yes, that's right, I'm afraid [09:32] Oh. oh. ok. 
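The second storage use just described (a "tiny snippet of data in a known location" that tells the client where zookeeper lives) can be sketched like this. The key name "provider-state" and field names are assumptions for illustration, not juju's guaranteed on-disk format, and `storage` is just a dict standing in for the S3-like bucket:

```python
import json

# Sketch: the bootstrap node records its zookeeper location under a
# well-known key; later clients read that key to find the environment.
# Names here are illustrative, not juju's actual wire format.
def save_state(storage, zookeeper_instance_id):
    storage["provider-state"] = json.dumps(
        {"zookeeper-instances": [zookeeper_instance_id]})

def find_zookeeper(storage):
    state = json.loads(storage["provider-state"])
    return state["zookeeper-instances"][0]
```

This is also why the LXC provider needs no S3: zookeeper is on localhost, so there is nothing to look up.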
[09:32] melmoth, (although I'm not sure if there are other things that can adequately mimic s3; there probably are) [09:33] melmoth, the important thing is that there's *something* that looks like S3 accessible to both the client and the various nodes [09:33] hmm http://askubuntu.com/questions/132411/how-can-i-configure-juju-for-deployment-on-openstack [09:34] they mention the ec2 provider, whatever that means [09:36] melmoth, that answer is kinda specific to deploying openstack with juju so you can then deploy *onto* that openstack with juju [09:37] melmoth, the ec2 provider is the backend that is designed to work with ec2 but which can also be prodded into a configuration that works with openstack [09:37] melmoth, I know that there has been some work on a native openstack backend provider but I haven't been following it closely [09:38] i have a nova-objectstore running on my manager node, i am assuming this is what this ec2 stuff is about ? [09:38] because, one thing is sure, i was able to juju bootstrap, and try to deploy a charm (which fails because of http proxy being mandatory in my setting) [09:40] melmoth, hmm, this is interesting: https://code.launchpad.net/~gz/juju/openstack_provider/+merge/110860 [09:40] yep, this matches the --s3_port and --s3_host i have in my nova.conf [09:40] melmoth, this is merged, and mentions an included fudge that lets you use nova [09:43] so, looks like i have a s3 daemon running after all (which could explain why i was able to bootstrap) [09:44] melmoth, hmm, so you have an explicit s3-uri in your environments.yaml? [09:45] bingo, the same: s3-uri: http://192.168.122.4:3333 [09:45] well, at least i learnt what this stuff was for :) [09:45] melmoth, a pleasure :) [09:45] Now, i wonder why horizon is complaining about a lack of s3 catalog entry... 
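The exchange above amounts to a minimal environments.yaml pointing the ec2 provider at a private OpenStack. This is a hedged sketch: only the s3-uri value comes from the discussion; the ec2-uri path, key names, and credential placeholders are illustrative guesses and may differ by juju version:

```yaml
environments:
  mystack:
    type: ec2
    # nova's EC2 API endpoint; host/port are whatever your cloud exposes
    ec2-uri: http://192.168.122.4:8773/services/Cloud
    # nova-objectstore's S3 interface, per the --s3_host/--s3_port flags above
    s3-uri: http://192.168.122.4:3333
    access-key: <your EC2 access key>
    secret-key: <your EC2 secret key>
    control-bucket: juju-mystack
    admin-secret: <any random secret>
```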
[09:46] i guess there s some service to define in keystone [09:46] melmoth, most likely, I'm afraid I am not the man to ask about openstack, but you may have some luck in #ubuntu-server [09:47] oh, yet another chan i was not aware of :) [09:47] melmoth, yeah, the list is a bit overwhelming :) [11:07] melmoth, there is native openstack support in juju, and yes nova-objectstore counts as s3 [11:08] melmoth, you don't need to define nova-objectstore in keystone [11:08] cool. My horizon problem was that i had no s3 type service defined [11:08] now that i created one and associated the endpoit to the s3 url, horizon gives me my environment.yaml [11:08] ah ic.. nova-objectstore doesn't do keystone afaicr [11:09] cool [11:09] yep. Feels like i have deserved a sandwich :) [11:09] you can also using ec2 s3 with your private cloud openstack if that floats your boat [11:10] er.. amazon s3 [11:10] or the s3 interface to nova-objectstore [11:10] (if enabled) [11:11] that is, without using the ec2 interface to the rest of nova. [11:11] mgz, that's what he's doing it sounds like [11:12] hazmat: what is placement: local|unassigned about when working with actual clouds? [11:12] ec2 has it as an acceptable config item, but I'm not sure what you'd use it for [11:13] mgz, irrelevant [11:13] mgz, it was a bad path, to support density, that got left in. but there's jitsu deploy-to which does much better [11:14] okay, as I seem to have it in openstack_s3 but not openstack. should I just delete it from both? [11:14] mgz, yes please [11:14] mp upcoming === al-maisan is now known as almaisan-away [11:51] mark's oscon keynote http://news.softpedia.com/news/Mark-Shuttleworth-Talks-Juju-at-OSCON-2012-282209.shtml === TheMue_ is now known as TheMue [12:06] whats the default s3 port, 443 isnt it ? (ssl) [12:06] hazmat: ^^ [12:06] btw morn all [12:12] imbrandon, depends if the endpoint is https or not [12:15] mgz, hmm. 
given that the schema is at best advisory (ie it has no enforcement effect) it seems like something better left undocumented in the docs [12:59] way off topic but i know how it can be getting into work/something on the PC, but thought i'd mention for those that haven't noticed yet, there has been a bad shooting in Colorado this morning [12:59] 24 estimated dead etc, i don't really know more but check the news i'm sure it's on everywhere [13:02] Yeah, happened at a movie theater during midnight showing of Batman [13:03] yea, i just noticed it on the tv by accident walking by, i never watch, not even news really [13:03] that sucks tho, crazy how ppl get driven to do extreme things [13:03] Radio during morning commute [13:04] ahh yea :) [14:14] so, i cannot use juju on my cloud because i'm behind an http proxy, and if i try to use juju on a single vm with lxc, i end up with Failure: zookeeper.OperationTimeoutException: operation timeout [14:14] i can bootstrap, then i deploy a mysql charm. A new vm boots, i can ssh into it, and this timeout error appears in /var/log/juju/unit-mysql-0.log [14:15] any idea what could be the problem ? [14:15] melmoth: sounds like 2 problems [14:16] melmoth: the http proxy one is known and may be fixed sometime in the near future. [14:16] melmoth: the second one, zookeeper, is a bit confusing [14:16] melmoth, do you see that on the command line or in a log? 
[14:16] to be honest, i never managed to get juju to deploy anything with lxc [14:16] melmoth: but most likely that problem is caused by a local firewall blocking access from the containers to the zookeeper which was started when you bootstrapped the local provider [14:17] in the service machine /var/log/juju/unit-mysql-0.log file [14:17] juju status just states the service is pending [14:17] hazmat: hey, I think natty+oneiric are failing because of an older twisted version [14:17] SpamapS, saw that [14:17] SpamapS, the api used in the openstack provider is different than what's avail in older twisted vers [14:18] hazmat: we should either fix that, or just skip the tests on << twisted 12 [14:18] and have it error out gracefully on old twisted [14:18] SpamapS, i've heard from on high that we can just stop feature dev support for older versions [14:18] I'm not really interested in coddling oneiric and natty :) [14:18] now that we have a shiny LTS to play with [14:22] SpamapS, from the mysql service machine, i can telnet to my host machine on the port the java zookeeper thingy is listening to [14:35] melmoth: weird.. can you verify the address it's trying to connect to? [14:37] mgz: hey so I was thinking, in the meantime, do you have a sanitized environments.yaml we can look at to try the OS backend? for say hp cloud? [14:38] SpamapS, not sure... all i can think about is tcpdump on any interface from the host... any better idea ? [14:39] jcastro: yup, so, basically I keep all the bits that matter in envvars [14:40] mgz: if you can pastebin something for me I can at least get something temporary up [14:40] We have a bunch of people in the HP cloud beta basically aching to get their juju on [14:41] jcastro: what ya need ? i have a working setup too i can get ya [14:41] i've been playing with export/import of omg :) [14:41] lol [14:41] just a sanitized hp cloud thing [14:41] hmm, if i rebootstrap, zookeeper listens on another port. 
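The connectivity check being done by hand above (telnet from the unit container to the host's zookeeper port) can be scripted. This is a small stand-alone sketch, not part of juju; the host address and port in the usage comment are placeholders, since the local provider picks the zookeeper port at bootstrap time:

```python
import socket

def can_reach(host, port, timeout=5.0):
    """Return True if a TCP connection to (host, port) succeeds in time."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Usage (placeholder values -- read the real zookeeper address and port
# from the unit's /var/log/juju/*.log first):
# can_reach("192.168.122.1", 2181)
```

A False here from inside the container, with True from the host itself, points at the local-firewall problem described above.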
i wonder how the new machine knows where to try to connect [14:41] sure, one sec [14:41] with links to where on the hp page we can get the account #'s etc. [14:41] basically, just what we do for amazon in the docs [14:41] imbrandon: actually yo, branch the docs and just put it in there [14:41] it's RST time! [14:42] jcastro: ok i'll pastebin it, then i'll do the docs while you pass on the pastebin for those choming at the bit [14:42] chomping [14:42] I don't suppose you are in Rackspace Beta for their Openstack? [14:42] i do [14:42] actually 2 [14:42] one is ohso's [14:43] :) [14:43] but its the same env.y [14:43] oh sweet [14:43] just get the info from diff spots [14:43] so you can make one page and then just if you are using HP do this, Rackspace do that. [14:43] sweet [14:43] OpenStack [14:43] :) [14:43] but i know what ya mean [14:44] yup yup [14:44] ok sec [14:44] jcastro: so, my configs for canonistack and hp respectively are basically just: [14:45] then for canonistack I source ~/.canonistack/novarc [14:45] juju-origin: lp:~gz/juju/openstack_provider [14:45] and for hp I made a similar file with the envvars in [14:45] Man, we can do that? [14:46] I prefer it to writing passwords in multiple plain text files [14:47] I meant the juju-origin to an lp branch, but yeah, I understand what your setup means [14:47] ah, right, that's not needed any more, as it's landed [14:48] there's a slight gotcha with HP in that they provide both a name and an id for the tenant (read, project) [14:48] provided the name is given rather than the id all is well. 
[14:49] jcastro: http://paste.ubuntu.com/1101928/ [14:49] jcastro: origin can be ppa, that's what i'm using [14:50] yeah I knew about PPA, I just didn't know we could add a branch in there [14:50] basically, I could have easily tried the provider this whole time and didn't realize it [14:50] that pastebin they just need to change username: pass: [14:50] and project name [14:50] to their tenant id [14:51] jcastro: nah it only landed yesterday late [14:51] in the ppa [14:51] imbrandon: ah excellent [14:51] but you could have run the branch :) [14:51] and of course, when i run the tcpdump, then... it works. [14:51] hmmmm. [14:51] so really we just need an example like this, and then reuse the page we used for the contest to show people where they can find their tenant ID [14:51] maybe the nic needs to be in promiscuous mode.... [14:51] jcastro: yup, gonna do the docs up real proper right now [14:52] and the tenant-id is labeled that and at the top of the api page, it's most of the time their email + -default-tenant [14:52] but like SpamapS's is not, some are diff, but most are like mine [14:53] there are ways to make it work with the keys VS user/pass too but i'll put that in the docs [14:53] pasted is what i know for sure works as it is right out of mine [14:54] ok /me starts that doc page [14:54] jcastro: intended to add the OSX page to the official docs anyhow, gives me the nudge to do both [14:54] heh [14:55] indeed [14:56] it's your turn to doc. :) [14:56] btw the mirrors are all up and running fast as hell ( and swift storage backed ) and in sync within ~5 min [14:56] AND i think i got snapshotting working for it , like snapshot.debian.org does [14:56] but gotta test that more later [14:56] :) [14:57] any docs merges in the queue ? might as well do a whole docs afternoon /me looks [15:00] imbrandon: I have some to write for the openstack bits. 
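The sanitized HP Cloud setup being passed around might look roughly like the sketch below. Every value is a placeholder, and the exact key set your juju version accepts may differ; credentials can also live in OS_* environment variables (e.g. sourced from a novarc file), as mgz describes above:

```yaml
environments:
  hpcloud:
    type: openstack           # the native provider that just landed
    juju-origin: ppa          # or lp:~gz/juju/openstack_provider pre-landing
    control-bucket: juju-hpcloud
    admin-secret: <any random secret>
    default-image-id: "8419"  # az/region specific; verify before use
    default-series: precise
    # username, password, tenant (project) *name* -- not the tenant id --
    # and the auth url go here or in sourced OS_* environment variables
```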
[15:01] mgz: ok, if you want just pass me the branch when yer ready in here or PM and i'll snag it right away [15:09] mgz, awesome re docs [15:32] mgz, why the split between test mixin and mock provider.. feels like a bit of duplication [15:33] i'm finishing up constraint support and due to usage i have to add effectively the same to both [15:33] which makes the distinction quite unclear [15:33] their both remote interaction proxies [15:35] hazmat: it is duplication currently [15:35] I'm trying to find a less lame way to do testing [15:36] the problem with the mixin is it makes it really hard to focus the tests, test_bootstrap ends up caring about the implementation details of ports and launch when it really shouldn't [15:37] but a little bit of down to the http level testing is good [15:37] so, the mock provider was a stab at seeing how to write launch tests while only caring about launch stuff, not the rest of how provider works [15:38] suggestions welcome. [15:38] having provider being a monolithic gateway for everything makes splitting stuff back out again slightly annoying [15:39] mgz noted, i'll think on it, just trying to get this branch into the queue, for now i just made a common base class for the shared constraint support [15:40] so, ideally the mock provider would be ignorant of constraints, except in a subclass just for testing constraint support [15:40] but the mixin needs to know all the details of everything currently [15:41] mgz, debatable constraints permeate everywhere [15:41] anything that launches a machine needs to be at least minimally constraints aware [15:42] tests for port management and file storage shouldn't care about constraints [15:42] and making that support variable needs to wire through to an instance level variable [15:42] i'll think on it, almost done with this [15:42] yeah, get a working state for now and I'll have a look as well. [15:51] argh.. 
all the image ids just changed throughout hp cloud [15:51] well some regions [15:54] nm.. user error [15:57] no they do [15:57] its 110 in az-1 and someting else in the others [15:58] imbrandon, vary by region is different than changing within a region, i thought it was the latter.. but my error, i had accidentally changed my region === imbrandon is now known as Guest41572 === Guest41572 is now known as imbrandon === imbrandon is now known as imbrandon_ === imbrandon_ is now known as imbrandon [16:18] mgz, constraint mp at https://code.launchpad.net/~hazmat/juju/ostack-constraints/+merge/116027 [16:19] ace, looking. [16:21] mgz, pls excuse some of the line noise in there cause i was switching to std project style imports. [16:25] hm, would be e... right [16:25] otherwise seems to make sense [16:30] mgz, one oddity i noticed that i haven't tracked down is that it seems to make more calls to the flavor details endpoint than in imo strictly nesc. [16:31] I don't really like that both launch and the provider needing to query the flavours [16:31] i think it might be related to constraint set instantation, in which case either fixing the caller, or using a time limited cache would have value [16:31] would prefer one lookup that made a usable object [16:32] mgz, well both are used to launch machines [16:32] mgz, bootstrap uses the mixin, the launch uses the provider [16:32] both launch machines [16:32] oh.. you mean the api [16:32] but if the provider created something via a helper, the launcher could access that (as it's passed in the provider) [16:32] right. [16:32] just impl. details. [16:32] mgz, one is used to define valid values and that instance-type is available, the other to actually use the value given [16:33] and map it back to the provider notion [16:33] mgz, true.. but we don't store these persistently atm. 
the provider is a long-lived object in the daemon [16:33] and in nitpick mode, s/list_flavors_details/list_flavor_detail/g [16:34] I typo flavor as flavour too much already, and that's another confusing one, but may as well stick with what's in the url [16:34] mgz, sure, pls put comments in the review, i'll try to hit them up tomorrow, i've got to switch tracks to something else for now. [16:35] SpamapS: hahah gotta run afk a few min, but just got an email ... remember i kept saying the Sparrow client was a lot like gmail etc etc [16:35] Hello, [16:35] :) [16:35] We're excited to let you know that Sparrow has been acquired by Google! [16:35] will do, I'm nearly done for the day too [16:35] You can view our public announcement here, but I wanted to reach out directly to make sure you were aware of the news. [16:35] mgz, yeaah.. they picked a name with different spellings depending on the style of english.. why couldn't they just use instance type ;-) [16:36] heya mgz pass me the bzr branch url for your doc merge if you didn't do an MP already and i'll get it merged in with my openstack stuff and up here in a few min [16:36] before ya bolt :) [16:36] imbrandon, hah.. 
cause their email clients suck'd ;-) [16:36] heheh [16:37] hazmat: sparrow really is nice tho, sad part is its $$, and now google bought em it will likely be free [16:37] but i paid :( [16:37] lol [16:37] imbrandon, me to [16:37] though i hardly use it anymore === salgado is now known as salgado-lunch [16:37] :) [16:37] yea i really wish there was a gnome port of it [16:38] like if i had the time * hahahahahahah * i might try to emulate it , hahahahahahahahah time [16:38] but i seriously would use it hands down [16:39] maybe3 the webui will be closer/almost with this new unity stuff [16:39] anyhow afk brb [16:45] imbrandon, i dug into the src of the web app stuff for somethings it will be nice, but the indicator-datetime is still a fubar for getting calendar events showing up, i'm still using evolution-webcal to get my google calendar events to show up there correctly, its hack but it works. [17:15] bcsaller, how'd the travel go? [17:16] hazmat: usual badness, no vegan food on the flight, missed connection, that sort of thing, close to 24hrs total travel time the way it played out [17:16] feeling much better now though [17:16] bcsaller, ouch.. that sounds unfortunately eventful [17:17] bcsaller, where you able to hit up the local provider thing on the plane? [17:18] or just to busy recovering from serial disasters [17:18] hazmat: I wasn't but I still have some time to finish it today. 
On the plane I had the dentist styled seating where the person in front leans back far enough you can see their molars [17:18] bcsaller, i'd like to get that in, and if your in vacation mode, i'd like to either hand it out to jimbaker or myself to get it done [17:19] but if you want to finish it that be great [17:19] no no, I should be able to split that out today [17:19] great [17:19] bcsaller, times like those one should carry dental floss to hand out ;-) [17:20] :) [17:20] and some nitrous [17:20] i'm closing up this conf today, but i will be on vacation myself end of day [17:21] jimbaker, nice, enjoy.. how was the conf? [17:21] i should have some time this afternoon, but i'm going to try to first get in a fix for jitsu watch for m_3 [17:21] from the tweet-o-sphere it seems like we got some love [17:21] so he can get more reporting out [17:21] hazmat, great conf, great talk by sabdfl [17:22] btw, there's a link here to the keynote, http://news.softpedia.com/news/Mark-Shuttleworth-Talks-Juju-at-OSCON-2012-282209.shtml [17:23] hazmat, people were clapping when they saw jitsu export | jitsu import - [17:23] yeah that was pretty cool :) [17:23] jimbaker, yeah.. i watched it live.. nice indeed [17:24] and of course jitsu deploy-to was pretty essential to robbie's demo on openstack, so good work on the jitsu front [17:25] * hazmat takes a bow [17:25] hazmat, well deserved indeed!!! [17:26] speaking of pushing code out.. its time to release charmworld / browser [17:28] that should be quite exciting, seems like some pent up demand for that [17:28] hazmat: oh? [17:28] hazmat: btw, if I didn't make it clear before, +1 from me. :) [17:29] SpamapS, yup, code will be avail in next 20m or so, just need to do some audit and bit fiddles [17:29] hazmat: werd. License? [17:29] AGPL [17:30] but of course. cool. Does it have a "download the code" feature or is it just AGPL in spirit? [17:30] SpamapS, hmm.. 
it doesn't have that feature, does a link suffice or does it really need to be able to tarball itself up? [17:31] well thats the only thing AGPL guarantees over GPLv3 [17:32] that if the service has a way to DL the code, you can't disable it [17:32] SpamapS, is that the only mechanism for its delta? http://en.wikipedia.org/wiki/Affero_General_Public_License [17:33] the download source feature... [17:33] hazmat: its the one described by GNU's website [17:33] I admit I have not actually diffed the licenses [17:33] i'd have to delay releasing it for a few weeks to implement that, due to other priorities [17:34] i'm fine with just putting it out there [17:35] its not a requirement [17:35] I was just curious [17:35] SpamapS, looking over some the other AGPL web apps i don't see many with that feature readily available from browsing the site, most just link to their src repos [17:35] hazmat: I'd say release w/o it and open a bug [17:35] ie. https://gitorious.org [17:35] hazmat: yeah, "they're doing it wrong" is probably the answer [17:36] its interesting to note that such a feature is most likely implemented via download tarball [17:37] vs. actually zipping up the runtime code [17:38] ah.. ic [17:38] so it just needs a link [17:38] for the download source feature that's self hosted,and then deriviatives have to comply [17:38] well not nesc. self hosted, but that gives the clearest intent [17:40] hazmat: Yeah a link is ok, but you have to make sure that link is accurate then [17:40] hazmat: but IMO this is not strong enough to ensure user freedom [17:40] SpamapS, agreed a runtime source extraction and zip would be best [17:40] hazmat: since the site can of course just serve up a link to code w/o all their optimizations. 
:) [17:41] SpamapS, that would be in violation of the license then [17:41] hazmat: "prove it" [17:42] SpamapS, you're doing a great job of convincing me not to release ;-) [17:42] jk [17:47] hazmat: no you're good don't worry about it [17:47] hazmat: though you should have a link to the code hosting page :) [17:51] nah there isn't a requirement to have it in the footer or a tar ( no same-medium, e.g. can't make a download binary and a cdrom source only and comply , clause like gpl ) but "anyone that has access to the service must also have a clear way to get the full source required to run it completely" or some cruft very close to that [17:51] hmm. we have 28 charmers extant [17:52] new group [17:52] jitsu ninjas ? [17:54] yeah.. jitsu hackers is a bit more manageable [17:55] so hazmat along these lines, i've been thinking the last few min ... what if ( in spirit, not gonna force php code on ya or even the exact conventions or anything really hahaha ) we take the general idea of what i was doing and lay a public rest api down on top of the charmdb mongo to provide any of the information the gui may need, then we get the interface free for other apps later should some mashup seem cool [17:55] imbrandon, definitely, that should be pretty trivial [17:56] imbrandon, i need to json search anyways. it's not quite the same db though [17:56] right thought so but just wanted to be vocal [17:56] json search ? [17:56] imbrandon, i'm thinking just add /json to any of the urls to get a json representation [17:56] as in learn it or ? [17:56] ohhh yea [17:56] imbrandon, as in i need it ;-).. charm browser has a xapian-backed full text search index [17:56] wow I think I must have read some crackheaded interpretation of the AGPL at one point [17:57] * SpamapS has learned this lesson a few times.. 
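The "add /json to any of the urls" convention hazmat describes can be sketched minimally. This is a hypothetical illustration only: charmworld's real routes, data model, and storage (mongo plus the xapian search index) are different, and the charm data below is invented:

```python
import json

# Toy route handler: the same charm page served as HTML by default,
# or as JSON when the URL ends in /json. Data and routes are invented.
CHARMS = {"mysql": {"name": "mysql", "series": "precise"}}

def render(path):
    """Return (status, body); append /json to a charm URL for JSON output."""
    want_json = path.endswith("/json")
    name = path.strip("/").split("/")[0]
    charm = CHARMS.get(name)
    if charm is None:
        return "404 Not Found", "no such charm"
    if want_json:
        return "200 OK", json.dumps(charm)
    return "200 OK", "<h1>%s</h1>" % charm["name"]
```

The appeal of the suffix convention is that every HTML page gets a machine-readable twin for free, which is exactly the "interface free for other apps" idea above.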
:P [17:57] cool ok, yea i like implementing api's i'd be happy to, and i've learned to version them from the get go no matter what ya think [17:57] lol [17:58] if you don't mind i'd like to crack at the json api some with ya if not do it all etc ( i know you're swamped ) [17:58] will give me a reason to tighten up my python a bit [17:59] imbrandon, sounds great [17:59] but i know it's shit so feel free to reject me many times :) i got thick skin [17:59] my python :) [17:59] but it can only get better right ? [17:59] ok i need to run, sis things for a few hours [18:00] back in actually probably one [18:00] imbrandon, lp:charmworld [18:01] * imbrandon is still keeping jujube going tho i don't care if it's redundant /me likes php as well plus i can prototype things in node/php ( current code ) that i can later translate into py :) [18:01] rockin, ok back in like one hour [18:01] ^5's hazmat [18:05] * hazmat looks for food === salgado-lunch is now known as salgado [18:54] okay, I've proposed a doc branch with the basics in it. [18:54] and that's me for the week. [18:55] can someone review the doc branch? [18:55] I'd love to blog about this branch today before EOD! [18:58] oh, nm, it's docs, I can review it myself! [19:11] *bugs imbrandon * [19:12] hey, hp cloud, is that in the ppa yet? [19:12] yes [19:12] marcoceppi: well, for precise+ [19:13] marcoceppi: the code uses some twisted 12 API stuff, so it likely won't work on oneiric and natty [19:13] So if I have precise, I should be good to go? 
[19:13] yes [19:13] 0.5.1+bzr559 [19:15] Cool, must be missing an update [19:20] Yikes, I'm on a precise machine with an oneiric package of Juju [19:29] marcoceppi: hah man, that must have sucked [19:29] never really noticed [19:30] I use the ultrabook for everything now [19:30] so trying to deploy from a desktop [19:30] I was like "why is the screen so big" [19:30] ok so this needs the PPA and all that jazz right [19:30] hum [19:31] jcastro: need to make a few clarifications in the post [19:31] otherwise it bootstrapped [19:32] Going to do a few bootstraps [19:32] cool, I'm working on an example generic openstack one [19:32] s/bootstraps/deploys [19:32] Now I can blow this cloud up [19:35] * imbrandon returns [19:35] sorry man, things have been a little nuts on the homefront the last week or two [19:35] wasup? [19:36] nothing just trying to sort out the different openstack providers [19:36] ahh kk, yea i had to bolt to run with sis for a bit but i'm back now, gonna finish up those docs i started a bit ago [19:36] there's an incoming merge request [19:36] if you wanna merge it up [19:36] kk [19:37] sure thing [19:43] SpamapS / hazmat : ok this kinda sucks, the juju agents are not started really with a true init script ( that i can find ) and to add insult to injury they don't produce a .pid file anywhere ( some domain sockets in /var/run/juju/ but no pids ) to cat or kill/hup et al on the units, after some units running ohhh 55 days without a reboot and 6 charms ( pretty much every subordinate ) each with its own daemon ( really? wth ) this tends to be a signifig [20:51] SpamapS: CLINT. G+ me for a sec? [20:51] I have a jitsu question that's easier to just ask you [20:56] jcastro, i'm also around if that works [20:56] imbrandon, they're upstart'ified [20:56] for sure hazmat, inviting [21:14] marcoceppi: imbrandon: hey any of you have anything deployed on hp cloud to generate an svg? 
[21:14] nothing generated to svg yet but i got about 8 juju boxes on hp cloud atm [21:14] or so [21:15] yeah [21:15] the mirrors are juju [21:15] :) [21:15] I just want a deployment visualized [21:15] to prove I am not lying. :) [21:15] heh ok, mine are all mostly single testing crap, let me finish the rax edit and i'll deploy a cool layout [21:15] unless marcoceppi gets it first [21:15] ~10 min tops [21:15] imbrandon: hey so hazmat tells me the rackspace cloud stuff isn't quite ready, so I'll cut it for now and we'll talk about rackspace when we get there [21:16] imbrandon, juju status --output=status.svg --format=svg [21:16] yup, just need to deploy something interesting instead of a one-off mirror charm :) [21:16] heheh [21:16] omg? [21:16] jcastro: it "works" heh [21:16] imbrandon, mediawiki x 5 + haproxy + mysql + memcached [21:16] ;-) [21:17] YEAH! [21:17] nagios and mysql replication for bonus [21:17] no nginx-proxy ? hahah jk, kk coming right up [21:17] ok , then jcastro go save the rackspace images from the post [21:17] on ask [21:18] actually [21:18] one sec [21:18] jcastro: there are the all-prettied-up screenshots you'll want for later then [21:18] [1]: http://f.cl.ly/items/0k0e20291e3y2C3T0S3p/Selection_001.png [2]: http://f.cl.ly/items/0k0e20291e3y2C3T0S3p/Selection_002.png [21:19] * jcastro nods [21:19] * imbrandon deploys da world , back in a few [21:19] jcastro: btw i was gonna use those number annotations to say what bit that was to get [21:19] etc [21:20] hopefully that's obvious [21:20] yeah [21:20] I mean instructions will be fine [21:20] no i mean look at the images [21:20] already done, [21:20] #1 #2 etc [21:20] right [21:21] jcastro: back from lunch [21:22] jcastro: ohh and i'll add in the announcement of the GUI OSX installer too for ya to chalk up [21:25] SpamapS: I was just prepping to blog about the openstack branch [21:25] s/branch/feature/ [21:25] it ain't no branch baby [21:25] it's in the PPA [21:26] PPA I meant [21:29] 
http://www.jorgecastro.org/2012/07/20/democratizing-the-cloud-here-comes-native-openstack-support-for-juju/ [21:29] SpamapS: woo! [21:29] SpamapS: man dude, so basically, I think import and export are cool [21:29] and wanted to tell the world, in conjunction with this provider landing [21:34] jcastro: can you change the default-image-id? [21:35] heya xnox, finally see ya on irc :) [21:35] for me in region 1 of hpcloud it's: default-image-id: "8419" [21:35] not an array [21:35] and yes, on HPCloud you can, in fact must; my default only works in az-3 [21:35] imbrandon: playing with hpcloud juju, seems to have created instances =) [21:36] xnox: see note at bottom "things in [] need changed" [21:36] it's not an array :) [21:36] true =) [21:36] but a sensible default - latest ubuntu seems appropriate, unless all image IDs are different across regions [21:37] and yea 8419 is for one az, 120 is another, not sure on the third, was gonna look up the proper API for juju just to pick like aws later [21:37] xnox: yup, known issue [21:37] thus one of the fields marked to fill in [21:37] :) [21:37] jcastro: there's a link to sabdfl's demo somewhere.. you should link to it from your blog post [21:38] it's only for us early birds tho [21:38] I found one but it didn't look legit [21:38] it's like the guy recorded it streaming from oreilly [21:38] and I didn't want to do that wrt. licensing of content and stuff [21:38] softpedia.com's is the legit one [21:38] http://news.softpedia.com/news/Mark-Shuttleworth-Talks-Juju-at-OSCON-2012-282209.shtml [21:38] and it's published on youtube [21:39] http://www.youtube.com/watch?v=0UHXW10t38I [21:40] xnox: btw I had to stop the mips stuff halfway yesterday due to some rl stuff, but I made a metric ton of improvements to the speed and reliability of what's there now [21:40] btw [21:40] and I'll restart the arm ( not mips ) tonight [21:40] ? [21:41] imbrandon: what are you on about =) [21:41] did you not request arm mirrors for sbuilds ?
[21:41] must be mixing you up [21:41] xnox: the image id problem is pretty big actually. We need a resource map type service for all the known public clouds.. and for any private clouds a user might have access to [21:41] IMO we should just start with a local file for that [21:42] SpamapS: yeah, I guessed from the console dropdown / existing instances. But there was no euca-describe-images [21:42] we can maintain our own somewhere public [21:42] SpamapS: local file could be "nova list-images >> newlist.txt" :) [21:42] for the openstack / hpcloud [21:42] ah =) [21:42] thanks. [21:42] xnox: nova manage can do it [21:42] SpamapS: as a user or as an admin? it was not clear to me [21:42] SpamapS: yeah that's the one, you can even see the dude hitting record. [21:43] nova image-list I think [21:44] SpamapS: in the interim I can add a cron to the mirrors I already have on HPC to run the nova client and export the list and then do some text massaging on it to get the info we want as a temporary solution, then check it into bzr on LP or something automatic [21:46] imbrandon: that sounds ... painful [21:46] imbrandon: why not just keep a list of cloud->region->release->imageid [21:46] SpamapS: btw I provided a little rationale to one of the points on your review ( before you groan, I agreed 99% ) just wanted to point it out to ya so you could tell me if I'm off my rocker before I get back to the charm tomorrow [21:47] imbrandon: good, I was a bit confused by the whole thing [21:47] SpamapS: that's what I'm talking about doing, but it needs to be automatic, you seen how often hpcloud deprecates amis ? [21:47] heh [21:48] basically it came down to 99% of "i knew those bits were not done ... got burnout, needed feedback ... but this other bit is intentional and here is why ..." [21:48] and that's it [21:48] :) [21:51] SpamapS: btw how do we get that info now from ec2 ? from the json provided by cloud-images.ubuntu.com ?
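The cloud->region->release->imageid list SpamapS proposes could start as small as a shell lookup table. A minimal sketch: the only IDs taken from this conversation are HP Cloud's 8419 (region 1) and 120; which AZ 120 belongs to is an assumption, as is the az-N region naming:

```shell
# Hypothetical static map; in practice this would be regenerated whenever
# a provider deprecates images, as HP Cloud apparently does frequently.
lookup_image_id() {
    local cloud="$1" region="$2" release="$3"
    case "$cloud/$region/$release" in
        hpcloud/az-1/precise) echo "8419" ;;  # ID from the channel
        hpcloud/az-2/precise) echo "120"  ;;  # AZ assignment assumed
        *) echo "no image id known for $cloud/$region/$release" >&2; return 1 ;;
    esac
}
```

A bootstrap wrapper could then fill `default-image-id` from this table instead of having users hard-code it per region.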
[21:52] imbrandon: https://cloud-images.ubuntu.com/query2 [21:52] imbrandon: and I'm certain we'll be adding it for other clouds as well [21:52] "< SpamapS> imbrandon: why not just keep a list of cloud->region->release->imageid" <-- that's exactly what I was getting at too btw, just automated with the datasets I knew to exist, because I know HPCloud's list changes very very frequently [21:53] imbrandon: well ideally the Ubuntu project would maintain the images and thus would rewrite the known good version of the file whenever the ids change. [21:54] ohhh 100%, I was just talking for the next week/month in the meantime, considering it would be only about 30 minutes of dev work tops to kick out, easy to justify tossing out later [22:04] SpamapS, getting query2 indexes against 3rd party providers is part of the master plan [22:04] it's necessary imo for a good ootb experience [22:04] imbrandon, the ec2 stuff is generated as part of the image build process that canonical does for ec2 [22:05] we don't have that for third party provided images in other clouds [22:05] yea I meant more of the current retrieval process [22:05] imbrandon, there is no retrieval process, it's part of the publishing process [22:05] not creation, e.g. so I had a result to decompile if ya will [22:06] juju has to retrieve it from somewhere [22:06] to know what ami to use [22:06] imbrandon, it queries it from cloud-images.ubuntu.com [22:06] not query2 format though [22:06] not yet anyways [22:07] I'm getting random desktop crashes all of a sudden, lame [22:07] rockin, kk yea I was gonna see if I could spend less than 30 min to hack something in the same format from hpcloud, just cuz ... I can try :) [22:07] I have a lot today, but I thought it was me [22:08] I hate that damn popup [22:08] imbrandon, yeah.. me too, but I'm convinced something went sideways.. yeah..
I get the popup too [22:08] btw deploys almost ready [22:09] btw why on earth does paste.ubuntu.com always want me to login with openid, makes me a lil ill [22:10] SpamapS / jcastro / hazmat : http://paste.ubuntu.com/1102549/ [22:10] just waiting on booting etc [22:10] and to make sure nothing went wrong .... [22:12] in addition to all the ones you listed hazmat I made memcache X 2 and added the newrelic subordinate to mysql and newrelic-php to the mediawikis [22:12] :) [22:15] SpamapS: btw fwiw I had/have full intentions of making the php helpers generic ( trying to do it from the get-go mostly ) and ultimately packaging them up as "upstream" ( myself, maybe, unless others join in ) into a .phar and adding as a php lib/cli to charm-helpers, as like the charm-helpers-php binary [22:15] pkg [22:16] just say little use in it yet, in fact a pretty big hindrance considering the number of changes needed still and there are only 30ish sloc now [22:17] s/say/see [22:18] imbrandon: but that's 30 new lines of code that you invented [22:29] SpamapS: just like every sed and 98% of all the other charms ( bash ) [22:30] I've not seen one charm, minus the django one using puppet, that has a real template engine, and besides your gripe was that it was in the charm, not that I was doing it [22:30] so what's the diff ? [22:31] I mean I'm not saying you're 100% wrong, but to play the 10yr old, but but but everyone else is, and far worse than me; I thought it was pretty elegant for the simplicity of it still [22:32] and likely far more reliable than trying to use an html-centric template engine, and far lighter than using a config one, and far more robust than sed [22:32] :) [22:33] but again the point was in/out of the charm is what I digested anyhow tho [22:34] and solely on that it's a bit of a stretch [22:37] *cough* http://jujucharms.com/charms/precise/ceph/hooks/ceph-common.sh I'm thinking does much the same thing as my /hooks/lib/common :) [22:47] imbrandon: read that again..
you'll only see heredocs and appending. ;) [22:48] imbrandon: my point isn't that it's bad to do a little abstraction around templating. My point is that the problem space did not warrant that. [22:49] it does imho, it's like 3 very clear functions to those that don't even code php OR a much more convoluted sed script [22:49] sed is actually a horrible way to do these things IMO [22:50] templates are for spaces where you need to be able to see and edit the presentation outside the logic. [22:50] my point is it's clear even to non-php people, and compare it to the complexity of chef-common.sh [22:50] Otherwise, just put stuff in .d files or append with variables [22:50] ceph-common.sh* [22:50] imbrandon: did you read the ceph charm? [22:51] it's doing *quite* a bit more than the nginx charm :) [22:51] sure, NOW, but you think the one variable that is changed will be it ? [22:51] remember OMG [22:51] and that was without forethought mostly [22:51] was the templating the problem with OMG? [22:52] and to top it off I planned on using it as a general helper lib [22:52] yea [22:52] err no [22:52] really? It wasn't apache.. and the plugins.. and the lack of caching? [22:52] that's totally not the point, and yea I said the wrong word [22:52] I was trying to agree :) [22:53] I wasn't saying it was the problem, I was saying it was complex [22:53] keeping the variables of my defence here in line :) [22:54] I don't want you to be on the defensive. [22:54] I actually want you to be showing me the benefits of your choices. [22:54] I mean honestly I adapted that php pretty much line for line from what I did in shell for omg [22:54] Because my point wasn't "that sucks" but "why do that?"
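For contrast, here is the heredoc-and-variables style SpamapS is advocating, as a minimal sketch; the variable names and config shape are illustrative, not taken from any real charm:

```shell
# Config generation with a plain heredoc: the "template" is visible inline,
# substitution is ordinary shell expansion, and no extra engine is needed.
SERVICE_NAME="mediawiki"
BACKEND_PORT=8080
conf="$(mktemp)"  # a real hook would write the actual config path
cat > "$conf" <<EOF
# written by the charm's config-changed hook (illustrative)
upstream ${SERVICE_NAME} {
    server 127.0.0.1:${BACKEND_PORT};
}
EOF
```

The trade-off being argued above is exactly this: the heredoc is short and obvious for a few variables, but once the config grows, presentation and logic start tangling, which is when a separate templating step earns its keep.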
[22:55] well it's really hard not to be when I can look at ANY other charm and it's far more complex; I explained the whys in the reply [22:55] Also it's entirely possible that all the other charms did it wrong too [22:55] so don't use that as a reason to repeat their mistakes [22:56] oh I'm not saying they don't, in fact I think they do [22:56] and after a review of 3rd party tools and what they did, that's what I had come up with [22:57] in bash and php identically [22:57] alright, that makes sense. So what, again, is the charm supposed to do? [22:57] 3rd party tools + what the charms did [22:57] you didn't respond to that. :) [22:57] that kinda ran into one [22:58] well that's kinda loaded, depends on the point in the lifecycle you're asking about [22:58] heh [22:58] but tl;dr is ... [22:59] deploy to a known state, and always be in that known state as config options are swapped in and out by the devops or other charms [22:59] use that info of the state desired and do it the best way I know how [22:59] as a charmer [22:59] tl;dr ^^ [23:00] in this instance, I made the choice of creating a new tool for the job where I felt others were overkill <--> not on target enough [23:01] but said tool likely does belong in its own lib, sure, but would we not have this same convo if it didn't ?
[23:01] err did [23:02] imbrandon: still don't understand what the charm is supposed to do [23:02] imbrandon: walk me through how a user can make use of it [23:02] and I think other charms are doing it wrong by not making the same decision, instead are using sed or other goto bash staples [23:03] SpamapS: well that one is very much like you said, only apt-get, but it was created in the days that we talked about doing that kind of thing now and later expanding them to be ideal so inheritance or similar options emerge [23:03] but if you look at nginx-proxy it builds on it [23:04] and very much in a way that's repeating [23:04] but only because of the aforementioned problem [23:04] eventually it as well as others will build on this [23:05] thus I started planning for things like nfs / sharedfs [23:05] etc [23:05] * xnox spinning up juju on HPCloud 41/100 instances done =) [23:06] ProviderInteractionError: Unexpected 413: '{"overLimit": {"message": "RAMLimitExceeded: You can only allocate 4096 RAM (in MB)", "code": 413, "retryAfter": 0}}' [23:06] =( [23:06] limited to 20GB per az combined iirc unless you request an increase [23:07] xnox: NICE [23:07] mmm mine should be done too /me goes to make the svg [23:07] SpamapS: I mean am I wrong in the thinking ( overall ) [23:07] imbrandon: re your point about inheritance.. if nothing else, it shows that we need inheritance. :) [23:07] right [23:07] imbrandon: I had an idea of how we could do that in the store w/o juju's help [23:08] imbrandon: what if we use my charm splicer to build charms that inherit others? [23:08] ahh I bet you have the same idea me and marcoceppi came up with at uds :) [23:08] hahahahah yup [23:08] So just have a launchpad project.. like base-charms ..
and anything in there will be scanned for a splice.yaml and spliced up and pushed into lp:charms [23:09] like 30 minutes after me and him met ( on the way from airport to hotel on the train ) we talked about JUST that idea, that was gonna be our laptop winner :) [23:09] but.. really.. I can't see why we don't just do this in juju [23:09] it would be so easy [23:09] we were gonna hack up hooks [23:09] I tried splicing every charm that I could into one charm [23:09] to call other charms and deploy em [23:09] it was fun :) [23:09] as deps [23:10] I should merge my splice command into charm tools [23:10] e.g. see if service running, if not, deploy default ourself [23:10] different method, same thing tho, well, would enable the same thing [23:11] oh [23:11] that's definitely something entirely different [23:11] lazy loading services.. sounds awesome [23:11] I think there were bets on how fast the charm would simultaneously win the laptop and be banished from existence :) [23:11] hahaha [23:11] I should have submitted a Frankencharm [23:11] like, splice * into one evil charm to rule the world [23:12] I actually had fully intended to, as the nginx stack* at the time, but other priorities like full-as-hell charm rooms emerged [23:12] heh [23:13] interesting.. LAX<->San Diego commuter flights are about $245. Driving + parking at the Sheraton would cost about the same.... [23:13] basically the idea was to "juju bootstrap && juju deploy mywordpress-subordinate-theme-and-plugins" [23:13] and that's it .... [23:13] just wait for it all to come up [23:13] and taking Amtrak will be about $100. [23:14] if I didn't know that Amtrak's wifi was TOTAL CRAP I would say it's an easy win. But Amtrak also comes with thugs.. hm.
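Returning to the lazy-loading idea above ("see if service running, if not, deploy default ourself"): a purely hypothetical sketch of what a hook-side check might look like. The `ensure_service` name and the status parsing are inventions for illustration, and it prints the deploy command rather than running it:

```shell
# Deploy a default charm for a dependency only if the service is absent.
# $3 is the text of a prior `juju status` run, captured by the caller.
ensure_service() {
    local svc="$1" charm="$2" status_text="$3"
    if printf '%s\n' "$status_text" | grep -q "^  ${svc}:"; then
        echo "service ${svc} already present"
    else
        echo "juju deploy ${charm} ${svc}"  # a real hook would execute this
    fi
}
```

A wordpress charm could then call `ensure_service mysql mysql "$status"` from its hooks and get its database dependency pulled in on demand.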
[23:14] hahah [23:14] amtrak thugs > [23:14] never equated the 2 [23:15] SFO BART + Punks, yea [23:15] heh [23:16] never fails, every time I'm in SFO ( like 7 or 8 times not ) I see mohawks and leather-studded clothing, no matter what decade it is [23:16] s/not/now [23:16] The only people who ride Amtrak are people too poor to have cars and hippies. [23:17] heh, I hate driving [23:17] But I'm kind of thinking for this particular conference it's worth it since I don't really need a car.. I have tons of friends in San Diego who can drive me anywhere I want to go. [23:17] I wonder if there is a cheaper daily lot somewhere tho.. $22/day is ridiculous [23:17] I've only owned a motorcycle for the last 8 or 9 years, and even that I don't drive like a daily driver; it's a for-fun bike and I use mass-transit/cabs everywhere [23:18] even out here in hickville midwest :) [23:20] * imbrandon hugs his almost-to-be-considered-classic 1996 Ninja Neon Green 7ZXr he bought new [23:20] Ninja :) [23:21] dude, I love em, had a honda, and 2 ninjas now [23:21] laid the 1st one down on the highway ... at speed [23:21] stopped riding for a year, then got a new one ( in 96 ) and kept it :) [23:22] the honda was always too big for my frame [23:22] 1100cc is about all the horse I need :) [23:22] apparently my quota request must go through special approval.... so archive rebuilds until monday.... [23:22] well I can work within 100GB RAM limits.... [23:23] I've asked for a 1600GB RAM limit... [23:24] SpamapS: you noticed the fairly new ec2audit tool ? would be nice to make use of that + cross cloud etc [23:24] somehow [23:24] xnox: I just closed my 2nd acct or I'd loan it to ya [23:25] imbrandon: .... hmm... I have put two requests in for AZ1 and AZ3 regions.
And I'd like to see when they will come back to me about it ;-) [23:25] and on HP I'm nearing my limit too, between one az of mirrors and long-running dev instances + one az of trying to rebuild ubuntuwire.com et al + one az of random juju deploys and destroys [23:26] perfect, request at 4:30pm Friday LA time.... =) and night time in EU [23:26] xnox: the few times I used support it was about 24hrs [23:26] weekend coming up ;-) [23:26] yea but iirc their noc and support is manned 7 days [23:26] imbrandon: can you start mirroring the AZ1 mirror into AZ3? [23:27] xnox: I got one better for ya, about to unleash the swift mirrors [23:27] ???? =P what are those? [23:27] ( in *.region-a.geo-1 ) [23:27] the CDN-backed thingy? [23:28] yea, but if used locally it's faster than the ones I have up now [23:28] and pulling from my house in KC I saturated my home cable as well [23:28] :) [23:29] will be another 6 hours or so before they are done, then I need to run an initial checksum on them [23:29] so tomorrow sometime, BUT in the meantime if you drop the internal. off the hostname [23:29] there is public dns for those too and it's much faster in az 2 and 3 than us.archive.ubuntu.com [23:29] ;-) [23:30] for the meantime [23:30] that was my plan, I did warn the HP guys about high traffic between the regions if they grant me quota, and the mirrors are not done yet ;-) [23:31] but yea I started to make the 6 xlarge instances required for full mirrors in all 3 azs, then said screw it and put effort into doing the swift object store one that only requires a single xsmall in one az to kick off jobs and cleanup etc [23:31] and it ended up being hella faster anyhow [23:32] =)))) [23:33] I haven't put a cname on it yet, and it's about 6 to 12 hours behind depending on the package, compared to the others that are ~5 min [23:33] but http://hdf00356c5f10fb3768dd5e9db5fcf855.cdn.hpcloudsvc.com [23:34] you should be able to hit it and test it ...
would not use it just yet til final checksum and cname are in place tho [23:35] btw one drawback is there is no browsing, same as on s3 [23:35] gotta know what you're after [23:35] heh [23:36] http://hdf00356c5f10fb3768dd5e9db5fcf855.cdn.hpcloudsvc.com/dists/quantal/Release [23:40] lol [23:40] good night ;-) [23:40] * xnox is in EU [23:41] gnight :) [23:43] SpamapS / hazmat : http://paste.ubuntu.com/1102672/ [23:44] normal status [23:45] outputs all good, but that's from svg [23:45] if you want I'll pass one of y'all my env.y credentials and all if you wanna toy with it to get the pic for jcastro, as the env is pretty sweet atm [23:47] wikimedia x 5, mysql, memcache x 2, haproxy, newrelic(mysql), newrelic-php(wikimedia) x 5 [23:47] all related to the corresponding services etc [23:48] * imbrandon will leave it up indefinitely over the weekend and debug himself as well on and off ... [23:49] ahhh svg no likey hyphen in the env name ... [23:49] that could be a problem. [23:51] ( for completeness when someone reads this, normal and json status are fine; only svg errors, and the error looks to be svg not agreeing with my env name of websitedevops-com-hpc [hyphens] [23:51] )