[01:00] hazmat: pong (but not sure how long I'll be online ;)
[01:07] hazmat: and with that, I disappear
[02:44] SpamapS, maybe tomorrow then
[03:37] SpamapS, negronjl http://jujucharms.com/tools/store-missing
[04:05] hazmat: nice ... thx for the tool
[04:49] hazmat: cool :)
[04:49] another queue for charmers to work on really
[04:50] but sort of pointless without the import logs for the charm store
=== TheMue_ is now known as TheMue
=== zyga is now known as zyga-afk
[08:43] SpamapS: did that, I've been having some problems promulgating as well :(
[08:43] SpamapS: tried to fix them with m_3, maybe I can grab ahold of you later
=== stevanr_ is now known as stevanr
=== ejat- is now known as ejat
=== TheMue_ is now known as TheMue
=== telmich_ is now known as telmich
=== AlanChicken is now known as alanbell
=== alanbell is now known as AlanBell
=== zyga-afk is now known as zyga
=== zyga is now known as zyga-afk
=== garyposter is now known as gary_poster
=== zyga-afk is now known as zyga
=== _mup__ is now known as _mup_
=== mhall119_ is now known as mhall119
[15:07] quiet friday
[15:11] m_3: heh yea, /me has been fighting with RPMs and charm-tools-that-should-be-called-i-hate-the-osx-bsd-tools
[15:12] * m_3 grin
[15:13] took me an hour to figure out bsd getopt != gnu-getopt
[15:13] heh
[15:14] imbrandon: hah! perhaps we should rewrite all those in python
[15:14] btw i came across a very slick way to install juju on OSX now without the need for brew at all, gonna script it up and it can live in the official juju branch then (that assumes it would be approved for such but i don't see why not)
[15:15] SpamapS: i thought seriously about it, i got a few patches in either case :)
[15:16] added some error checking too and checking of exit codes from the commands / scripts
[15:17] and normalized the help where it called them commands and scripts interchangeably
[15:17] but it's all like one big commit right now, gonna split it out here in a few in case you don't like all the changes, it's easier to cherry pick
[15:18] * imbrandon gets back at it before i get distracted too much
[15:20] oh and got a 3-month free Windows Azure thing too today so thought about trying to make a provider, but likely will fail if/when i get to that part
[15:25] SpamapS, m_3, jcastro: if i did the leg work and got a proof-of-concept OS X jenkins deploying code then running unit/regression testing for juju and charm-tools, what do you think the chances of getting a VM running in canonical's to do that would be?
[15:26] canonical's datacenter*
[15:29] imbrandon: you're working on juju for osx right?
[15:29] yup
[15:30] cool, as someone who has unfortunately been forced to use it, I sincerely applaud your effort :)
[15:30] doing some touchups today matter of fact, some things still not 100% right but it's getting there :)
[15:30] sweet
[15:31] I wonder if juju can deploy to rackspace's managed cloud, probably not I assume
[15:31] fixed up bash completion and got most of the charm tools running, most importantly charm proof
[15:31] it can
[15:31] interesting...
[15:31] i use rackspace a little
[15:31] I figured it would be one layer abstracted too much
[15:32] nah, they have pure openstack
[15:32] e.g. it was them and nasa that started that show anyhow :)
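A quick illustration of the getopt mismatch imbrandon hit above: GNU getopt (util-linux, what Linux shell scripts usually assume) understands long options and quotes its output, while the BSD getopt that ships with OS X only takes a plain optstring, so a script written against the GNU version misparses its arguments there. A minimal sketch; the option names and file name are made up:

    # GNU getopt (Linux): short and long options
    getopt -o vf: --long verbose,file: -- -v --file notes.txt extra-arg
    #  ->  -v --file 'notes.txt' -- 'extra-arg'

    # BSD getopt (stock OS X): optstring-only syntax, no --long support
    getopt vf: -v -f notes.txt extra-arg
    #  ->  -v -f notes.txt -- extra-arg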
[15:32] yeah, I'm familiar with it all
[15:33] I'm wondering if a managed cloud user can get openstack credentials to deploy services to
[15:33] but yea, you can, you need to use SpamapS's external s3 patch or the OSAPI provider that's on LP for testing only yet
[15:33] but it works
[15:33] huh, awesome
[15:33] shazzner: i'm not sure 100%, i just asked support and now they are in my cpl
[15:34] few weeks ago
[15:34] cool, let me know what they say :)
[15:34] no that's what i mean, i asked them, and they gave em to me
[15:34] oh I see what you mean
[15:34] and put them in my cpl along with the API key :)
[15:34] damn that rules
[15:34] imbrandon: there's really nowhere to run such a VM afaik, but get me a VM and I'll see what I can do
[15:35] m_3: ok, i'll see if i can't have it sometime next week
[15:35] :)
[15:35] imbrandon: the qa lab hardware is up/down all the time with openstack and maas testing... but I'll ask around
[15:35] if not cool i understand, more convenience for others than me
[15:35] kk
[15:37] m_3, there's one more charm which is still showing some ill effects from the distro branches
[15:37] m_3, cs:oneiric/hadoop-master
[15:37] ie. can't be checked out, as it references a precise branch
[15:38] it's at this unreachable branch lp:~charmers/charms/oneiric/hadoop-master/trunk
=== ejat- is now known as ejat
[15:40] hazmat: that's a strange one... b/c it shouldn't be in precise
[15:41] in oneiric, the charms are hadoop-master, hadoop-slave, hadoop-mapreduce
[15:41] in precise, it's just hadoop
[15:41] m_3, right.. perhaps because it got yanked?
[15:41] no clue... lemme look at the branches
[15:41] m_3, it's the only charm branch that's inaccessible
[15:49] hazmat: man that's just hosed... bzr info's not showing that oneiric/hadoop-master/trunk's stacked on anything but it's referencing the precise/hadoop-master/precise branch
[15:54] oh, I see what's going on here... lp's trying to be smart.. the branch has no reference to series... bzr+ssh://bazaar.launchpad.net/+branch/charms/hadoop-master/... and it gets filled in magically
[15:55] makes sense that we can't manually manage series then
=== fenris is now known as Guest59258
=== fenris_ is now known as Guest12299
[16:07] SpamapS: around?
=== Guest12299 is now known as ejat
=== cheez0r_ is now known as cheez0r
=== cheez0r is now known as cheez0r_
=== cheez0r_ is now known as cheez0r
[16:34] 'morning all
[16:35] hazmat: ok, try it now
[16:35] negronjl, g'morning
[16:35] hi hazmat
[16:35] hazmat: actually retry all hadoop-{master,slave,mapreduce}
[16:35] m_3, in progress
[16:36] negronjl, how'd it go?
[16:37] hazmat: It went very well... I never expected so many people to be so interested in Ubuntu in Azure
[16:37] negronjl: yeah, it's totally cool imo
[16:37] M$ is about 60-70% to where Amazon is in features
[16:38] and they are giving away a lot of very interesting features (metrics and such) for free
[16:40] negronjl, i'm curious to try out their prod platform to see what the i/o is like
[16:41] negronjl, if they can just make the config stuff work it would go a lot nicer
[16:41] hazmat: Prod IO is about 30M/s
[16:41] negronjl, nice.. that's roughly 6x dev
[16:41] hazmat: yup
[16:41] gmb: I'm off this week, but peeking in here and there
[16:41] hazmat: They were telling me that their dev/beta env was pretty much worthless
[16:41] negronjl, did you do a juju demo?
[16:41] SpamapS: Okay. I'm about to EoD, so I'll drop you an email instead and we can talk when you're back.
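For reference, the check m_3 describes while diagnosing the broken oneiric hadoop-master branch can be reproduced with plain bzr; a minimal sketch using the branch named in the log (whether a "stacked on:" line shows up depends on the branch actually being stacked):

    # inspect the charm branch to see what, if anything, it is stacked on
    bzr info lp:~charmers/charms/oneiric/hadoop-master/trunk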
[16:42] hazmat: No ... I just had the presentation in a loop on my laptop and had people ask about it and such after looking at it
[16:42] negronjl: cool
[16:43] gmb: I do plan to get your python modules into Debian/quantal *soon*
[16:43] gmb: you're just suffering from an unfortunate series of events of very high priority ;)
[16:43] SpamapS, i think i'm going to need to rewrite charm-tools check to be more amenable to library usage
[16:44] SpamapS: Okay; is there anything at all we can do to help? Should we rope in someone else with packaging experience to help out rather than just loading you up with more work?
[16:46] hazmat: library?
[16:46] hazmat: it's pretty easy to parse....
[16:46] hmm
[16:46] yeah, I was just trying to figure out what hazmat was talking about
[16:46] why not speak the international language of shell love... exec()
[16:47] because it's nice to avoid fork bombs on small instances ;-)
[16:47] but fair enough
[16:47] i'll give that a shot first
[16:47] Even if there are 10,000 charms... that's not too many forks if you wait() on them :)
[16:48] SpamapS, there are also some things this would pick up better if it used the juju.charm class
[16:49] there's a couple of invalid things it misses
[16:49] No
[16:49] it must not share code w/ juju
[16:50] the whole point is that it is an independent static analysis tool
[16:51] SpamapS, what value does that provide, other than inconsistency?
[16:51] it can do additional checks and policies
[16:51] it's an air gap to protect against bugs in the juju implementation
[16:52] charm proof is meant to assert the way the spec says charms are composed
[16:52] If it throws errors where juju does not, then *one* of them has a bug.
[16:52] the juju analysis is static, bugs should be fixed there, we already have multiple implementations of that..
[16:53] SpamapS, the problem is that it's not throwing errors where juju does
[16:53] and rather than copy the implementation, why not just use it..
[16:53] then that's a bug in one of them too
[16:53] (unless it's a runtime error, not a static format error)
[16:55] hazmat: I'm fine with charm proof *calling* juju's version of "is this formatted correctly" after it has found that it thinks the charm is formatted correctly. But I don't want them sharing code.
[16:56] we could do the charm-tools in ruby, then it would not be an issue ... just teasin before someone takes me serious :)
[17:10] SpamapS: I'm eod'
[17:10] arg
[17:11] 'ing now, so please email me if there's anything that we (~Yellow, ~launchpad or others) can do to help you with the packaging of our charm-tools additions and python-shelltoolbox. We're happy to try and find someone else to do the packaging if you've got too much on your plate.
[17:12] gmb: are they on an LP branch somewhere?
[17:12] gmb: given that buildbot is being abandoned, is anything else going to use these python charm helpers?
[17:13] wait, should they be charm-helpers or charm-tools
[17:13] imbrandon: they exist.. just simple python modules that I want to have packaged before we start relying on them.
[17:13] imbrandon: definitely charm helpers
[17:14] which is a part of charm-tools
[17:14] SpamapS: i will likely have time over this weekend assuming this rpm stuff falls in line, if that will help
[17:16] imbrandon: sure if you can dig them out
[17:16] k
[17:26] SpamapS: so python-shelltoolbox needs to be built as a package and marked as a dep of charm-tools? re: https://code.launchpad.net/~yellow/charm-tools/trunk/+merge/101554 or is there something more going on here?
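As a side note, the exec()-and-wait() approach SpamapS argues for above is only a few lines of shell. A rough sketch, assuming a local checkout of the charm repository laid out as series/charm directories; the paths are hypothetical and proof is assumed to exit non-zero when it finds problems:

    #!/bin/sh
    # run charm proof over every charm in a local repository checkout and
    # record the ones it complains about, the fork/exec alternative to
    # importing charm-tools' check code as a library
    REPO=${REPO:-$HOME/charms}   # hypothetical checkout location
    for charm in "$REPO"/*/*; do
        [ -d "$charm" ] || continue
        if ! charm proof "$charm" >"/tmp/proof-$(basename "$charm").log" 2>&1; then
            echo "proof failed: $charm"
        fi
    done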
[17:27] m_3: right, well, a dep of the python bit
[17:27] anyway, I'm out again
[17:28] be back to work on Monday
[17:28] SpamapS: great weekend!
[17:30] SpamapS, have a good one
[17:30] * hazmat gathers a list of all the charms proof has a runtime error on
[17:36] hazmat: did hadoop-xxx work now?
[17:36] (oneiric and precise)
[17:37] m_3, yup
[17:37] m_3, thanks
[17:38] that's not bad... charm proof only barfs completely on two charms (out of 360).
[17:38] http://paste.ubuntu.com/1030745/
[17:38] the terracotta one juju is fine with in practice though, it has an odd definition of its peer interface
[18:51] is there some backdoor way to go about clearing the charm cache from an existing juju environment, so that juju deploy will always use the local:charm regardless of revision?
[18:54] adam_g: I think you want "juju deploy --upgrade"
[18:58] benji: will that work for a local charm that has an earlier revision # than what's currently cached/was last deployed?
[18:59] adam_g: http://bazaar.launchpad.net/~juju-jitsu/charmrunner/trunk/view/head:/charmrunner/snapshot.py clean_juju_state does some of this... it was written against the local provider though
[18:59] adam_g: or set a cron to rm it from /usr/lib/juju/* on the bootstrap node
[19:00] kinda hackish tho
[19:00] phh, m_3's idea much better
[19:00] :)
[19:01] adam_g: iirc juju upgrade-charm --repository /usr/local/lib/charms/ local:mycharm
[19:01] will make the version auto inc
[19:01] err, revision
[19:02] hmm. thanks, i'll mess around with all of the above
[19:02] actually tho you would need to drop the local: from that, it wants the service name, but it should still get it from local if it was deployed from there
[19:03] adam_g, --upgrade should do the trick
[19:03] adam_g, it will increment the revision
[19:03] if not, snapshot script should do it
[19:03] adam_g, actually one more way...
[19:04] adam_g, upgrade --force
[19:04] hazmat: it will increment the revision where? in the ./precise/foo/revision, or in the juju environment?
[19:04] foo/revision afaicr
[19:04] hmm
[19:04] adam_g, what's the issue again?
[19:05] i basically just need juju to ignore the local revision if at all possible, or reset it in its state
[19:05] you have a rev 5 in the env, and rev 3 on disk?
[19:05] hazmat: potentially, yeah
[19:05] adam_g, what's the scenario that arises in?
[19:06] adam_g, and why couldn't you just increment the revision?
[19:06] currently MAAS only supports precise, i need to deploy and test services on both quantal and precise. to do this, i need to switch the installation between releases in cobbler. from juju's POV, everything is being deployed to precise. i have two charm branches for each service (one for precise, one for quantal). when i deploy, i branch them into ./precise and 'juju deploy --repository=. local:foo'. i'd like to not have to re-bootstrap every deploy
[19:07] adam_g, they're different charms
[19:07] hazmat: yeah
[19:07] adam_g, the series is part of the charm name
[19:07] oh..
[19:07] gotcha
[19:07] hazmat: in this case, both charms look the same
[19:07] precise/nova-compute
[19:08] adam_g, why not tell juju quantal/nova-compute?
[19:08] between deploys, i replace that charm with the quantal version
[19:08] ie. why try to make two separate charms both pretend to be precise?
[19:08] hazmat: does 'deploy local:quantal/nova-compute' work? i thought it resolved the directory based on the series specified in environments.yaml
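Collecting the suggestions made to adam_g above into one runnable form; the charm/service name mycharm is a placeholder, and exactly where the revision gets bumped (the charm's revision file versus the environment) is as discussed in the log:

    # redeploy a local charm and let juju bump the charm revision itself
    juju deploy --upgrade --repository=. local:mycharm

    # or upgrade a service already deployed from the local repository
    # (upgrade-charm takes the service name, not a local: URL)
    juju upgrade-charm --repository=. mycharm

    # the "one more way" hazmat mentions: a forced upgrade
    juju upgrade-charm --force --repository=. mycharm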
[19:08] adam_g, it resolves the *default* series
[19:08] adam_g, but you can specify any series when deploying
[19:09] in the absence of one specified, it uses the default
[19:10] hazmat: specify the series how? as an argument to deploy? either way, i think the MAAS provider will refuse anything other than precise ATM?
[19:10] unless specifying it at deploy time is independent of the provider
[19:11] =o
[19:11] adam_g, it's specified as part of the charm name, so local:quantal/nova-compute should do it
[19:11] hazmat: er, yea
[19:11] Bad 'ubuntu-series' constraint 'quantal': MAAS currently only provisions machines running precise
[19:11] adam_g, it needs provider support though
[19:12] lame
[19:12] so i'm back at square one. :)
[19:12] yup
[19:12] but with context it's so much better ;-)
[19:12] * m_3 grimace
[19:12] adam_g, so the snapshot script is what you want
[19:12] hazmat: great. i'll take a look after lunch
[19:13] hey SpamapS, gmb mentioned you were wondering whether the python charm helpers would be helpful. I'm hoping they still are. I'm proposing that we write a Launchpad dev charm, so I'm selfishly interested, but also I thought we were all pleased that they were good tools for charmers in the future.
[19:13] maybe I misunderstood though
[19:13] adam_g, it's in pypi.. so it's an easy_install charmrunner away..
[19:13] also in lp:charmrunner
[19:13] sweet, thanks kapil
[19:14] gary_poster: think they'll be generally useful... peeps were just balking at deps that weren't packaged
[19:15] m_3, cool thanks
[19:15] hmm.. it does look like snapshot has some local provider specificity..
[19:15] gary_poster: so I think we'll package python-shelltoolbox or something like that and mark it a dep of charmhelpers-py
[19:15] we don't abstract out removing files from storage...
[19:15] :-(
[19:16] hazmat: shouldn't be too hard to extend it... it takes care of the zk part
[19:16] cool, m_3, yeah that's what I was hoping
[19:16] i don't think that should be a problem since it's killing the zk state for the charm
[19:16] thank you
[19:16] gary_poster: np... sorry it sat in the queue for so long!
[19:17] :-) will be very happy when it is all set up
=== lifeless_ is now known as lifeless
=== Leseb_ is now known as Leseb
[21:25] m_3, SpamapS, i just proposed lp:~jimbaker/juju-jitsu/watch-subcommand which adds a jitsu watch subcommand
[21:27] it's a superset of what can be done with polling juju status and parsing; in particular, one can wait for relation settings to be set (or unset), so that should be useful for testing. also it uses the Juju API where possible, otherwise ZK, so it's much lighter weight / instantaneous
[21:29] jimbaker: cool... thanks!
[21:33] m_3, one other cool thing is that multiple conditions can be specified, with all or any waiting. also being able to wait for --num-units of a service to be in a state, along with some set of relation settings, should hopefully prove useful
[22:10] negronjl, cassandra charm exposes jmx as a 'cassandra' interface? that's not right
[22:10] hazmat: I'll look into it ... would you open a bug and assign it to me? I'm swamped at the moment and don't want to forget about it.
[22:11] negronjl, np
[22:11] hazmat: thx
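To summarize the two concrete takeaways from the adam_g thread above (the charm name is a placeholder, and whether a non-default series is accepted depends on the provider; MAAS only accepted precise at the time of this log):

    # the series is part of the charm name, so a non-default series can be
    # requested explicitly at deploy time
    juju deploy --repository=. local:quantal/nova-compute

    # the charmrunner snapshot helper hazmat points to is on PyPI and Launchpad
    easy_install charmrunner     # or: bzr branch lp:charmrunner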