[00:00] i agree the current approach of wrapping cli lxc commands is a bit ugly. but I thought lxc could take care of all these issues already
[00:00] i mean autorestart, firewall, network rules -- libvirt offers all this stuff for lxc containers, no?
[00:00] lxc is a hack, it should die in a fire
[00:18] lxc as a technology appears to be maturing slowly; it is not that bad. also, if one wants to install openstack with juju now, there are only two options I'm aware of: maas, which requires 10(!) physical hosts because it will deploy each service on a separate physical host, which seems ridiculous to me. the second option is lxc. it lets you deploy several openstack services on one host, which is nice I think.
[00:20] it is that bad. if you want to separate them, use real virtualization that already does all this for you -- esx, vbox, to name a few -- not a bsd jail on crack that's hacked to do some of the same things
[00:20] and there is devstack
[00:21] as well as many real virtualization products
[00:21] note that I was talking about using juju to bootstrap openstack
[00:21] do you suggest deploying openstack components in vbox?
[00:22] i don't suggest anything other than that lxc is a hack that needs to die in a fire. it may have its uses, but for our use case there are already mature things that don't require re-coding to do something they were never intended to do
[00:23] J
[00:24] how do you interrogate a specific relation from a charm hook?
[00:24] (I want to see if the zk relation is set up in opentsdb in the start hook)
[00:24] relation-get ?
[00:25] so, in the zookeeper-relation-changed hook, I do zk_host=`relation-get private-address`
[00:25] yup
[00:25] but that context is missing in the start hook.
[00:25] lifeless, imbrandon: thanks for the feedback
[00:25] isn't it? Or am I misunderstanding the whole thing
[00:26] right, you only have access to it in the relation* hooks
[00:26] so that's my question
[00:26] imbrandon: btw you were a bit harsh on hokka :(
[00:26] ahh, as far as i know you don't
[00:26] or can't
[00:26] sorry :(
[00:27] SpamapS: ^ your review, how can I do this thing?
[00:28] lifeless: yea, it just pains me every time someone tries to use lxc; we pitch it as something it's not and get their expectations high
[00:28] but yes i probably was, we just need a real local provider like you said
[00:28] imbrandon: sure, the problem I saw was that hokka was trying to solve a problem, and while you're accurate, it didn't help them.
[00:28] sorry, overlapped with you there. enough said.
[00:28] :)
[00:30] but yea, as far as the relation stuff goes, i don't think there is a way iirc, other than some questionable measures
[00:30] like using the juju cli from inside a hook
[00:30] or similar
[00:51] SpamapS lifeless hazmat : mmmm i wish there was an interactive shell that simulated a charm env where i could fire hooks in the correct context at will
[00:52] that would make creating new charms so much easier
[00:52] know of a way i could simulate that now that i'm overlooking ?
[00:53] * imbrandon is finishing nginx today
[00:53] is software ever finished ?
[00:53] heh true, i feel it's not, just like websites, you're never "done"
[00:54] lifeless: ok let me rephrase, putting nginx in a state that others can make use of, hopefully :)
[00:54] :)
[00:54] \o/
[00:54] of course, have to ask why that matters, nginx after all (/wink)
[00:55] heh
[00:55] I wish they would choose less disruptive defaults
[00:55] I keep having to support folk who get bitten by their Vary/compression defaults.
[00:55] i'm trying out some interesting common lib approach, take a peek
[00:55] yea
[00:55] i reset those right off
[00:56] http://bazaar.launchpad.net/~imbrandon/charms/precise/nginx/trunk/files
[00:57] about to dump the default configs in /templates and use preg_replace on ##place-holder-values##
[00:57] is my next step
[01:07] lifeless: http://bazaar.launchpad.net/~imbrandon/charms/precise/nginx/trunk/view/head:/templates/nginx.conf
[01:07] that's what i use as the base, seems to work well
[01:10] do you export the log data as relations ?
[01:10] not yet, but i had considered it
[01:10] for like loggy or something to be used
[01:11] i'm just now adding in the shared mount stuff
[01:12] for nfs; before it did not have that. actually this was all one large charm before, and trying to find the break points into smaller charms is the key
[01:12] as in juju deploy website, and the one charm did EVERYTHING, thus breaking it out into nginx, website, database, logs, mount etc etc
[01:16] actually, i just realized something ... /me goes to shuffle a little code ...
[01:18] imbrandon: right
[01:18] I'm poking at logstash atm
[02:23] lifeless, nice, think there is an extant attempt at it, but it's not very good.. http://jujucharms.com/search?search_text=logstash
[02:23] yup
[02:26] (thanks - I did already know but I appreciate the hint anyhow)
[02:27] http://15.185.225.6/
[02:27] heh, working state now
[02:28] few more cleanups and it will be ready for review i think
[02:28] actually might be now /me looks
[02:29] hrm nope, one more thing
[02:30] hazmat: do you have any idea on the relation-get thing ?
[02:31] * hazmat scrolls back
[02:31] hazmat: in my start hook, I need to check that a specific relation exists.
[02:32] lifeless, relation-ids
[02:32] lifeless, allows you to check for instances of a relation given a relation name
[02:32] (and then pull out the data from it)
[02:32] which can then be passed to relation-get/relation-set/list
[02:33] it was mainly meant for upgrade contexts, but it's usable in any non-relation hook context
[02:33] imbrandon, lxc is not a hack
[02:33] hazmat: so, if [ -n "`relation-ids zookeeper`" ] ... ?
[02:33] although the juju implementation of the local provider could perhaps use that moniker
[02:34] for what we're using it for it is; it's not a true virtualization container
[02:34] hazmat: lxc is -very- hairy at the moment. Even with all the work we're putting into it.
[02:34] * hazmat remembers the beatles
[02:34] it's getting better all the time ;-)
[02:34] hehehe
[02:35] lifeless, very hairy? it's not root secure, but what in particular is hairy?
[02:35] * hazmat checks rel-id syntax
[02:35] i still think that a true xen/vbox/umode container would be better
[02:35] hazmat: it depends on the entire kernel being properly namespaced; we've had a raft of bugs where that isn't the case.
[02:35] hazmat: e.g. powering off the machine from within lxc
[02:35] hazmat: attempting to insmod 32-bit modules into a 64-bit kernel from within a container.
[02:35] lifeless, indeed, but some of those are mitigated with apparmor
[02:36] hazmat: yes, but think structurally.
[02:36] lifeless, yes.. there's a lot of surface area there
[02:36] and there are many things not properly namespaced
[02:36] hazmat: the dependency stack for lxc to be secure in and of itself is huge. And *known* (not speculated) to have un-upgraded non-namespace-aware code.
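Before the log returns to the lxc debate: the ##place-holder-values## templating idea imbrandon describes above is simple to sketch in shell. A minimal, hypothetical version (the template path, placeholder names, and substituted values are illustrative, not taken from the actual nginx charm, which does this with PHP's preg_replace):

    #!/bin/sh
    # sketch of the ##place-holder## templating idea: copy a template
    # into place, substituting marked values with sed
    TEMPLATE=templates/nginx.conf
    TARGET=/etc/nginx/nginx.conf

    sed -e "s|##worker-processes##|$(nproc)|g" \
        -e "s|##server-name##|$(unit-get public-address)|g" \
        "$TEMPLATE" > "$TARGET"

The substitution step is the whole trick; everything else is deciding which values the charm exposes as placeholders.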
[02:36] but it has been done correctly
[02:36] with openvz
[02:37] has anyone written a json api presenting ec2-like semantics (just enough for juju in particular) for either libvirt or lxc itself ?
[02:37] lifeless, sure.. they call it openstack
[02:38] lifeless, i think that's code looking for a problem to solve..
[02:38] it may be conceptually nice
[02:38] but it's overkill imo
[02:38] we do need to refactor the lxc provider
[02:38] but that's mostly to just drop libvirt
[02:38] since lxc in precise does networking
[02:38] and to switch to the lxc ubuntu cloud image
[02:38] hazmat: it would provide an avenue to have a smaller code base.
[02:38] so we can init with cloud-init the same as other setups
[02:39] lifeless, perhaps.. i'm not so sure
[02:39] which is a good thing; it would let the local provider just be an api consumer, and the hairy stuff could then reuse bits of openstack, or stand alone, as appropriate.
[02:39] lifeless, getting to cloud-init and the code is already minimal
[02:39] lifeless, and much less software in the upstream stack
[02:39] hazmat: I suspect there are a large raft of optimisations you're not well placed to make at the moment.
[02:39] lifeless, wrt lxc?
[02:40] hazmat: such as layered fs's on top of the base image
[02:40] * hazmat nods
[02:40] which a local provider daemon could take care of more easily.
[02:40] btrfs snapshots would be butter :-)
[02:40] (which btw would shrink the footprint substantially for whatever is in the 'image')
[02:40] hazmat: uhhg, inappropriate ;)
[02:40] well not for code, but operationally it would be much nicer
[02:41] that's one of my concerns about moving to lxc everywhere
[02:41] if we have to wait for machine bootstrap and then download an lxc image all over again
[02:41] it would be nicer if we could use the root fs as a base in a stable, secure fashion
[02:42] right, which you can, if you get intimate with lxc
[02:42] which makes juju less portable
[02:42] -> break the dependency, provide a crisp clear boundary, and let folk like imbrandon write local ones for their OS.
[02:42] lifeless, at the moment i don't really regard the local provider as portable anyways..
[02:42] and the stuff on the other side of the boundary can get as awful as needed.
[02:42] much less juju
[02:43] definitely we want to target the latter towards some notion of portability, but how that stretches is still up in the air
[02:43] sure; my main point is to separate the concerns.
[02:43] When you describe juju, adding 'and it knows how to xyz local containers' doesn't fit with the main thrust.
[02:45] lifeless, if it's using cloud-init for containers, then the api wrapper around lxc is going to be about the same as an api wrapper around some other provider that's facilitating the same, at least till it goes deep on features, in which case yeah.. it would be nicer to have it external
[02:46] exactly.
[02:46] lifeless, my concern on the latter is how close it may be getting to something like openstack, ie what's the scope limitation on that
[02:46] the difference between 'this is how you call the lxc command line' and 'this is an API you can use' is that an API you can use can be used from within a container.
[02:46] which solves your zookeeper-outside issue
[02:46] it's similar to openstack in the same way MAAS is similar to openstack.
[02:46] lifeless, that's not really an issue re outside zk
[02:47] lifeless, that was gustavo's choice..
[02:47] originally lxc for juju was modeled as machines instead of units
[02:47] er.. implemented as a contributed patch by SpamapS
[02:47] yes
[02:48] I spent some time tweaking it
[02:48] anyhow
[02:48] we could go around this indefinitely.
[02:49] I appreciate there are some choices in here; some of them make less sense to me than others, and there isn't sufficient explanation for me to be able to agree or disagree with the /why/.
[02:49] lifeless, if the api is around i'd be game for incorporating it, but it's not something we can do ourselves given priorities atm
[02:49] Sure, never suggested you should :)
[02:49] I figured the channel might know if someone somewhere had done one.
[02:50] hazmat: what makes me sad right now is the ip-using patch was rejected.
[02:50] So, I'm running a fork, and probably will be forever.
[02:50] lifeless, well...
[02:50] lifeless, an openstack native provider would work just as well
[02:50] hazmat: it uses ip addresses
[02:51] exactly as I proposed.
[02:51] lifeless, exactly..
[02:51] just a different provider.
[02:51] So I don't understand why it's acceptable in one provider and not another.
[02:52] lifeless, because in the ec2 case, the most common use is public, and public addresses there are...
[02:52] i dunno.. you'd already convinced me ;-)
[02:53] yah
[02:53] bedtime for me, one more merge to do
[02:55] gnight hazmat ( when ya head out )
[04:10] ok headed to sleep
[04:10] SpamapS: https://bugs.launchpad.net/charms/+bug/994699
[04:10] <_mup_> Bug #994699: Charm Needed: Nginx < https://launchpad.net/bugs/994699 >
[04:11] it's in a functioning state, would LOVE some input / patches / code snippets / review / etc etc, i think it's ready for more than me to work on now, basics are laid
[04:11] * imbrandon heads to sleep
[04:12] ( note: I want to make setting the values for template::replace more elegant but i need fresh eyes in the morning maybe )
[05:02] imbrandon: I'll take a gander
[05:24] lifeless: btw, imbrandon told you wrong. You can access any relation-* command from any other hook. You want the relation-ids command.
[05:25] lifeless: if you are not in a *-relation-* hook, you just need to be more explicit
[05:25] SpamapS: thanks, hazmat set me in the right direction, though I haven't dug around yet to figure it all out
[05:25] ok good
[05:25] I skimmed the backscroll but missed that bit I guess
[05:25] seems a shame I can't just say relation-get relation=zookeeper
[05:25] 14:31 < lifeless> hazmat: in my start hook, I need to check that a specific relation exists.
[05:25] 14:32 < hazmat> lifeless, relation-ids
[05:25] the problem is zookeeper might have more than one thing related to it
[05:25] etc
[05:26] SpamapS: from within the context of a hook on opentsdb ?
[05:26] SpamapS: how is that different to being within the context of the zookeeper-changed hook of opentsdb ?
[05:26] yeah, you might say 'add-relation opentsdb zk1' and 'add-relation opentsdb zk2' ..
[05:27] lifeless: whether that's valid or a good idea is not for juju to say. but in many cases (mysql db relation) it's totally valid
[05:28] lifeless: when you're in a *-relation-* hook, the relation ID is implied, you have a $JUJU_RELATION_ID even.
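Putting hazmat's relation-ids pointer and SpamapS's explanation together, a start hook can gate service startup on a relation being present. A minimal sketch for an opentsdb-style charm (the config path and key name are hypothetical; relation-ids, relation-list and relation-get are the hook tools discussed here, with the relation id passed explicitly since this is not a *-relation-* hook):

    #!/bin/sh
    # start hook sketch: refuse to start until a zookeeper relation exists
    set -e

    rids=$(relation-ids zookeeper)
    if [ -z "$rids" ]; then
        echo "no zookeeper relation yet; deferring start"
        exit 0
    fi

    # outside a *-relation-* hook the relation id must be given explicitly
    for rid in $rids; do
        for unit in $(relation-list -r "$rid"); do
            zk_host=$(relation-get -r "$rid" private-address "$unit")
            echo "zk_host=$zk_host" >> /etc/opentsdb/opentsdb.conf
        done
    done

    service opentsdb start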
[05:28] mmm
[05:28] lifeless: but if you're in any other context, you need to figure out what relation id you want to inform/inspect
[05:28] so this needs to be written up somewhere
[05:29] it's really hard to discover, let alone work with
[05:29] yeah, it only landed in early April
[05:29] it's used in several charms already but needs to be an explicit chapter in the docs IMO
[05:29] what would be most awesome would be for someone to figure out the tasks charm writers need to accomplish, and make that really easy.
[05:29] like "Advanced Relations"
[05:30] well, pretty much every charm I can think of needs a start hook that starts the service if the relation is already there, apparently.
[05:30] hm, good point
[05:31] lifeless: I suspect we'll find declarative ways to do all of this before too long
[05:32] lifeless: Since juju is an event engine, we should be able to write a pretty simple state machine to drive charms
[05:33] some simple charms might start when there are no relations, but I suspect those are all done: DBs, proxies and web servers.
[05:34] anything *interesting* needs data to act on :)
[05:36] lifeless: indeed, tho thus far I know of no charms which actually do make sure they have their relations before starting
[05:36] SpamapS: well, opentsdb *can't* start until it has it :)
[05:37] SpamapS: ditto hbase
[05:37] hbase has to have zk and hdfs to do anything; pretty sure it only starts when all the configs are there, and start does $nothing
[05:37] lifeless: [ -f /etc/opentsdb/required.thing ] && service opentsdb start || echo not ready yet
[05:48] SpamapS: there is nothing written to disk though :)
[05:48] it's all in zk
[05:52] lifeless: the charm would keep that state
[05:52] lifeless: the relation-ids+relation-get approach is just a way to do it w/o a file on disk
[06:03] * SpamapS pokes at a nice generic monitoring interface
[06:04] EvilMog: btw, I couldn't get john+mpich2 to work.. just seems to error out all over the place. :-/
[06:04] yeah
[06:04] it's a pain in the ass to get going
[06:04] the only code I ever get to work is the older zeroshell patch
[06:04] but the jtr authors claim it works
[06:04] may have to go john + openmpi
[06:05] I may get them to join this channel and chat with you
[06:06] EvilMog: the charm "works" so you can try it too.. it just fails with assertions when you actually try to do anything
[06:06] yeah
[06:07] I get the same issue with the recent code
[06:07] which is why I may try it with openmpi
[06:07] instead of mpich2
[06:07] makes sense
[06:07] http://openwall.info/wiki/john/parallelization
[06:08] yeah, I used that
[06:08] and the 10.04 guide you posted
[06:08] one common problem with jtr + mpich is not having clocks synched, and not having ssh keys to the whole cluster
[06:08] yeah
[06:08] and the hosts files
[06:08] I have ssh to the whole cluster, that's easy w/ juju
[06:08] clock skew might have been an issue, I did not check
[06:08] I know code that works, but it's an older base
[06:09] http://www.bindshell.net/tools/johntheripper.html
[06:09] http://www.bindshell.net/tools/johntheripper/john-1.7.3.1-mpi8.tar.gz
[06:09] specifically
[06:09] and that one I know works with mpich2
[06:10] bert@ev6.net is the guy you want to talk to
[06:10] he wrote the original mpi code
[06:13] btw I really appreciate it
[06:16] ftp://ftp.openwall.com/pub/projects/john/contrib/parallel/mpi/MPIandPasswordCracking.pdf
[06:18] again, that's for the older bindshell implementation though
[06:24] EvilMog: cool.. I'll poke at it another time when I'm not super tired
[06:24] no worries
[06:24] * SpamapS passes out
[06:24] and no rush
[06:24] my new cluster won't be online for another month
[06:25] the other option is https://github.com/ccdes/clortho/blob/master/README
[09:51] <_mup_> juju/trunk r552 committed by kapil@canonical.com
[09:51] <_mup_> [trivial] remove old docs tree, docs are now @ lp:juju/docs
[09:51] hola ! I'm playing with maas. juju bootstrap fires up a node, but my maas machine cannot resolve node-000077770001.local
[09:51] it can resolve node-000077770001 though.
[09:51] but when i do a juju status, it tries to connect to the .local name, and as it cannot resolve that name, i'm stuck
[09:51] anyone got an idea what i might have been doing wrong ?
[10:17] melmoth, maas is returning the .local name
[10:17] yeah. I'm trying to remove it manually with the web page that lets you edit node names.
[10:17] and it's not resolvable by your client.. it really shouldn't be returning an mdns name
[10:18] seems to work
[10:18] melmoth, cool
[10:49] hazmat: that doc request was pending in the queue for ages :)
[11:08] koolhead11, yeah.. the charm reviewers queue worked out so well, i put one together for core, and spent a good chunk of yesterday clearing it out
[11:09] cleared out like 12 branches yesterday
[11:09] down to 6, mostly mine though, http://jujucharms.com/tools/core-review-queue
[11:10] now to work through the openstack branch
[11:11] koolhead11, the new docs as a separate branch should help make doc changes go much, much faster (based on evidence to date)
[11:56] hazmat: thanks.
[14:40] hazmat: go man go! Nice job on the merges the last 24 hours :)
[14:42] SpamapS, thanks
[15:20] james_w: Hey, I'm working on enhancing nagios, nrpe, and monitoring in general. Did you ever go much further than lp:~james-w/charms/precise/nagios-nrpe-server/trunk ?
[15:20] SpamapS, not outside my head
[15:21] I think something weird is going on, but I'm really not sure.
[15:21] james_w: ok, I have some solutions for your nrpe.cfg issues
[15:21] It seems like the juju agent never starts on a machine that is numbered 5
[15:21] tedg: Are you saying "There's something happenin here, and what it is aint exactly clear" ?
[15:22] It always gets to the state where the instance is running but the agent is not.
[15:22] And it always seems to be machine #5
[15:22] Hmm, maybe because I've continually terminated four?
[15:22] tedg, lxc?
[15:22] SpamapS, Not sure what I'm saying... :-)
[15:23] james_w, EC2
[15:27] Do other folks use "terminate-machine" or am I alone there? :-)
[15:27] I mean, and expect to create nodes again, not just as a final clean up.
[15:30] SpamapS, it helped hugely to put up a queue page for the core
[17:47] <_mup_> juju/gozk r20 committed by gustavo@niemeyer.net
[17:47] <_mup_> Mentioned that the package has moved.
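For reference, running the MPI-patched john build that SpamapS and EvilMog were debugging earlier boils down to a hostfile plus an MPI launcher. A rough sketch, assuming the bindshell mpi8 build and mpich2 (node names, key distribution, and the hash file are placeholders; exact launcher flags vary between mpich2 versions):

    # prerequisites from the discussion above: clocks in sync and
    # ssh keys distributed to every node in the cluster
    cat > ~/mpd.hosts <<EOF
    node0
    node1
    EOF

    # launch one john process per node against a shared hash file
    mpiexec -f ~/mpd.hosts -n 2 ./john --wordlist=password.lst shadow.txt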
[18:45] hazmat: any idea why my nginx isn't showing in the charm queue ( i'm positive it's something i forgot to do, and not a queue problem, just not sure what )
[18:46] ahh
[18:46] and i take that back
[18:46] it is, i was just too fast
[18:52] imbrandon, 10m ;-)
[18:53] :)
[18:54] $config = template::read('nginx.conf');
[18:54] template::write('/etc/nginx/nginx.conf',$config);
[18:54] that is just too sexy, now if i can get the rest of the charm so :)
[19:15] <_mup_> Bug #1020245 was filed: "terminate-machine" drops two machine numbers < https://launchpad.net/bugs/1020245 >
[19:18] imbrandon: can you check your mail and see if HP sent you anything wrt. the HP cloud accounts?
[19:18] jcastro: sure, one sec
[19:19] nothing that search is turning up
[19:21] imbrandon: yeah, I think I'll need to mail all of you
[19:22] jcastro: why, what's up ?
[19:22] I think they enabled them
[19:22] but I need you to check
[19:22] the free 3 months thing
[19:22] oh, i hope so, /me has had instances spun up for about 10 days
[19:22] heh
[19:25] jcastro: my bill is still zeroed out so i'm assuming it's on
[19:27] I wonder when they turned it on
[19:27] not sure, yea i kinda assumed it was when you told us about it
[19:27] oopsie :)
[19:28] hazmat: what's the scoop on the openstack native provider?
[19:28] jcastro, i was going to review it today
[19:28] but got derailed
[19:28] hazmat: cool
[19:35] jcastro, it's the last one in the queue
[19:35] jimbaker, ping
[19:48] hazmat, hi
[19:48] hazmat: that works out, they turned on the free accounts today, so this should give us a nice pool to test from
[20:00] jcastro, nice
[20:00] jimbaker, you've got an approved branch ready to land fwiw
[20:02] hazmat, sounds good
[20:02] hazmat, still catching up after being sick for 3 days
[20:02] jimbaker, i'd hold off for an hr though, i've got a trunk issue that i need to fix
[20:03] hazmat, ok, just tell me when you're done w/ that
[20:11] hazmat: have we figured out the natty/oneiric build issues yet?
[20:12] SpamapS, re format2.. no
[20:12] i asked bcsaller to look at it, but not sure if there's any progress
[20:12] imbrandon: is OMG on precise or oneiric?
[20:23] * negronjl is out to lunch
[20:39] hi, trying to get juju running on openstack, and I get this: ERROR Invalid host for SSH forwarding: ssh: Could not resolve hostname server-13056: Name or service not known... any ideas?
[20:58] pindonga, this should be a faq
[20:58] pindonga, depending on your maas config it may not hand out addresses routable from the client
[20:59] pindonga, afaicr you can set the name in maas directly
[20:59] hazmat, pm
[20:59] it's really a maas setup question
[22:18] is it possible to have relationships between services in different environments?
[22:22] hokka: not yet, no, but that's definitely something we'd like to do
[22:22] hokka: you can "fake it" with subordinate charms
[22:23] hokka: but you have to manually bring the data from one env to another unless you get really clever :)
[22:25] jcastro: yo
[22:37] m_3: good morning sunshine
[22:38] SpamapS: g'day mate
[22:39] SpamapS: and now I can say that and actually know _which_ day too :)
[22:39] Friesmurday ?
[22:40] Or Sunthednesday
[22:40] m_3: trying to tackle the tricky art of a generic monitoring interface
[22:41] nagios/icinga are almost *too* powerful for this :-P
[22:41] nice
[22:41] take a peek at sensu
[22:41] m_3, i don't understand all the hype on sensu
[22:41] * m_3 likes the possible integration with an underlying openstack install
[22:41] it's rabbitmq..
[22:41] right
[22:41] and it lacks a decent frontend afaik
[22:42] Nagios has never had a decent frontend
[22:42] SpamapS, it's like saying cassandra.. it's the new monitoring hotness, plus toss in some adapters
[22:42] somehow dominated everybody else with s***ty 1993-style HTML tables
[22:42] SpamapS, icinga FTW
[22:42] jk
[22:43] I wonder how much of what I'm doing for nagios will translate to icinga
[22:43] sensu basically tosses some adapters onto rabbitmq.. and now people treat it like the perfect monitoring solution..
[22:43] thin, scales, adaptable... what's not to like?
[22:44] m_3, but what's it do?
[22:44] it's a log transport
[22:44] what do you really need a monitoring soln to do? I want custom metrics
[22:45] hazmat: sounds pretty good to me
[22:45] this polling stuff is for the birds
[22:45] that get where I want them to go... the rest I can handle with other stuff
[22:45] sigh.. i could write an amqp adapter for collectd and be equiv
[22:45] SpamapS, it's still polling
[22:45] simple composable tools
[22:45] er.. not polling, pushing
[22:45] hazmat: you could, but you didn't, and they did.. right? ;)
[22:45] hazmat: yeah, true
[22:46] community of plugins/adapters
[22:46] collectd scares me
[22:46] well, the _start_ of one :)
[22:46] 49 C libs, many of which are really crappy
[22:49] anyway, what you really want is not a way for your service to say "poll this" but "record this"
[22:49] *how* you record that is up to the monitoring system
[22:49] hazmat: I prefer "publishing"... it's lighter weight :)
[22:49] hazmat: http://collectd.org/wiki/index.php/Plugin:AMQP
[22:50] hot-n-sour soup style... just dump it in... anybody interested can pick it up
[22:51] sorry for the lag... my irssi client's stateside
[22:52] dave cheney and I had a hilarious interchange... he's down the street so we had high latency... possibly even round-the-world routes :)
[23:01] SpamapS, exactly.. and avoid the overhead of a ruby process ;-)
[23:04] hazmat: +1
[23:04] * m_3 ducks
[23:43] jimbaker, can you have a look at this trivial.. fixes trunk http://paste.ubuntu.com/1072207/
[23:44] the new format v1/v2 tests were pretty exact on output (a good thing), but the validate branch allows for some of them to be set at least
[23:44] re bools and floats
[23:44] i'm tempted to back out the whole validate branch though..
[23:46] but considering they couldn't be set previously, err, still seems like a win
[23:46] er.. set from the cli params
[23:50] bcsaller, ^
[23:50] hazmat, ahh, that's not good to have failing tests in trunk
[23:52] hazmat, even if they were very picky as to what the behavior was then. in any event, the trivial looks fine to me
[23:52] +1
[23:52] that looks fine to me as well
[23:53] jimbaker, yeah.. the other branch, the last one merged in the stack yesterday, was pre-format-v2 tests
[23:53] thanks guys
[23:55] <_mup_> juju/trunk r553 committed by kapil@canonical.com
[23:55] <_mup_> [trivial] cli config validation compatibility with format v2 [r=bcsaller, jimbaker]
[23:56] jimbaker, trunk is green if you want to go ahead with the status-expose
[23:56] hazmat, thanks
[23:56] jimbaker, bcsaller: incidentally i also put together one of those review queue pages for pyjuju.. http://jujucharms.com/tools/core-review-queue
[23:57] nice
[23:57] bcsaller, if you can merge trunk.. and repropose your branch, i can have a look later this evening
[23:57] yeah, cleaning up the others too, there should be more than one
[23:59] hazmat, did we ever find out about the build problems on oneiric/natty?
[23:59] bcsaller, ^?
[23:59] dog walk, bbiab
[23:59] there are other branches going into review
[23:59] my one small attempt to replicate this (by launching a small instance for oneiric) simply suggested that this seemed to be a general problem, but not for the format stuff
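On hazmat's earlier point that an amqp adapter for collectd would be roughly equivalent to sensu: the collectd AMQP plugin he linked already pushes metrics to a broker. A hedged sketch of wiring it up (broker host, credentials, and exchange name are placeholders; this assumes the conf.d include is enabled in the main collectd config, and directive support may vary by collectd version):

    # drop in an amqp publish config for collectd and restart it
    cat > /etc/collectd/collectd.conf.d/amqp.conf <<'EOF'
    LoadPlugin amqp
    <Plugin amqp>
      <Publish "monitoring">
        Host "rabbitmq.example.com"
        Port "5672"
        User "collectd"
        Password "secret"
        Exchange "metrics"
        Format "JSON"
      </Publish>
    </Plugin>
    EOF
    service collectd restart

That gets the "record this, don't poll this" model SpamapS describes: collectd publishes readings to rabbitmq, and anything interested picks them up off the exchange.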