[11:00] <gnuoy> Is juju-core 1.16 from the stable ppa missing maas-tags support? juju with "--constraints maas-tags=compute" returns
[11:00] <gnuoy> error: invalid value "maas-tags=compute" for flag --constraints: unknown constraint "maas-tags"
[11:08] <TheMue> gnuoy: did you try simply "tags=..."
[11:08] <TheMue> gnuoy: ?
[11:09] <gnuoy> TheMue, I did not, I went by the wiki
[11:09] <gnuoy> TheMue, it seems happy with that, thanks!
[11:10] <TheMue> gnuoy: yw
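A minimal sketch of the working invocation discussed above; "compute" is the MAAS tag from the exchange, and the charm name is illustrative:

```shell
# juju-core 1.16: the MAAS tag constraint is spelled "tags", not "maas-tags"
# as some older wiki pages suggest. The charm name here is a placeholder.
juju deploy --constraints "tags=compute" nova-compute
```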
[12:12] <Dotted> how do you get the service name for use in hooks?
[12:33] <mthaddon> hi folks, the local provider doesn't seem to be respecting "default-series: precise" for 1.16.0-0ubuntu1 - is that the correct parameter?
[12:36] <mgz> mthaddon: that's the default anyway. it's giving you your machine's ubuntu version instead?
[12:36] <mthaddon> mgz: er, nm, I'm an idiot - the bootstrap node is saucy, but any provisioned nodes are precise
[12:36] <mgz> right.
[12:37] <mgz> the "bootstrap" node in the local provider case is just your machine :)
[12:37] <mthaddon> yeah...
[12:41] <mthaddon> mgz: agent-state-info: '(error: container "mthaddon-local-machine-1" is already created)' <-- any ideas what I can do about this (I had to do some pretty heavy surgery recently when I was migrating to an encrypted home dir, may have killed/rm-ed some things I shouldn't have)?
[12:43] <mgz> mthaddon: destroy environment, then cleaning up any lingering lxc containers should do it
[12:45] <mgz> mthaddon: see bug 1227145
[12:45] <_mup_> Bug #1227145: Juju isn't cleaning up destroyed LXC containers <local> <juju-core:Fix Released by thumper> <https://launchpad.net/bugs/1227145>
[12:45] <mthaddon> mgz: should I remove the "mongodb" lxc container? not sure if that one's related to juju
[12:46]  * mthaddon has destroyed mthaddon-local-machine-1
[12:46] <mgz> we don't put mongo in a container
[12:46] <mgz> so, can leave that one.
[12:47] <mthaddon> k, thanks
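The cleanup mgz suggests can be sketched as follows; the container name is the one from the conversation above, and you should only destroy containers that juju created (names like "<user>-local-machine-N"), not unrelated ones such as the "mongodb" container:

```shell
# After `juju destroy-environment`, juju-created containers can linger
# (see bug 1227145). List them, then stop and destroy the juju ones.
sudo lxc-ls
sudo lxc-stop -n mthaddon-local-machine-1 || true   # may already be stopped
sudo lxc-destroy -n mthaddon-local-machine-1
```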
[13:12] <marcoceppi> rick_h_: I do now
[13:12] <marcoceppi> rick_h_: if it's not too late
[13:12] <rick_h_> marcoceppi: cool, I've got my api proofing branch in review. I had some question re: opinions on api output and got some second opinions from another person on the team
[13:13] <marcoceppi> rick_h_: cool
[13:13] <rick_h_> marcoceppi: I'll see what comes out of review and then get your docs/details. If you've got any feedback then we can go back and tweak.
[13:33] <jose> marcoceppi: ping
[13:33] <marcoceppi> jose: pong
[13:33] <jose> hey, are we having any more charm school on air sessions?
[13:35] <marcoceppi> jose: we should, there might be one this Friday. jcastro?
[13:48] <mthaddon> marcoceppi: any chance of a trivial review of a charm for me? it's a config documentation update only - https://code.launchpad.net/~mthaddon/charms/precise/nrpe-external-master/non-root-checks/+merge/191179
[13:51] <marcoceppi> mthaddon: I'll take a look
[13:52] <mthaddon> thx muchly
[14:02] <mthaddon> marcoceppi: cool, thank you
[16:59] <kurt_> Can someone tell me when updating the agent via juju upgrade-juju - is the switch for "--upload-tools" just circumventing the "sync-tools" process?  If I do a sync-tools after upgrading the juju binary, is using "upload-tools" unnecessary?
[17:00] <mgz> kurt_: you should not be using upload-tools
[17:01] <mgz> it's a developer option, for when you're building from a local source copy of juju-core
[17:01] <kurt_> mgz: I have it from older notes.  Ok, thanks.
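The normal upgrade flow mgz is describing, as a hedged sketch (flags as they exist in juju-core 1.16):

```shell
# Sync published tools into the environment's storage, then upgrade the
# agents. --upload-tools is a developer option for running juju built
# from a local source tree, and is not needed here.
juju sync-tools
juju upgrade-juju
```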
[17:37] <rick_h_> marcoceppi: so http://paste.mitechie.com/show/1045/ is what things are looking like.
[17:38] <rick_h_> marcoceppi: first branch is landing, will be updating/etc the next two over today/tomorrow. Should be able to start hitting staging.jujucharms.com and testing stuff out in a day/two
[17:38] <rick_h_> marcoceppi: let me know if any of that doesn't make sense or you're not a fan of
[17:41] <marcoceppi> rick_h_: what if there are multiple things with a service?
[17:42] <rick_h_> marcoceppi: so right now there's plans for two
[17:42] <rick_h_> marcoceppi: can we find it
[17:42] <rick_h_> marcoceppi: and does the config look ok for it
[17:42] <rick_h_> marcoceppi: if we can't find it, then there's nothing to do for config since we can't check it
[17:42] <marcoceppi> rick_h_: multiple things, being multiple errors
[17:42] <marcoceppi> rick_h_: or does it just stop when an error is found
[17:43] <rick_h_> marcoceppi: so right now there's a single message and you rerun proof as you fix things
[17:43] <rick_h_> marcoceppi: well it'll run through as much as it can. But there will be some level of 'run again' unless we make things more complex.
[17:44] <rick_h_> I'm already a bit unhappy with the nested level of stuff. It seems like it should be simpler, but with multiple bundles per file, relations vs services, etc it has to be more complicated
[17:44] <marcoceppi> rick_h_: hum, this'll work for a first ver
[17:45] <marcoceppi> rick_h_: Ideally I'd like to tell the user everything that's wrong up front, so they can fix it all without having to do things in steps
[17:45] <rick_h_> marcoceppi: so you're saying if 3 config fields are bad. That's the stuff I'm doing now. I wanted to keep it consistent with not found but you're right. As I'm doing this only the last config error will be returned :/
[17:46] <rick_h_> marcoceppi: so yea, I could do something where the message was generic "Service config failed to validate" and an object of "field": "message on issue" in another key?
[17:47] <rick_h_> I just feel that parsing this on your end is going to be a bear.
[17:48] <marcoceppi> rick_h_: I really just want a message for the user, so you could have the message say "Service config failed to validate: key, ..."
[17:48] <marcoceppi> rick_h_: at the end of the day, I'm going to just show the message to the user
[17:49] <rick_h_> marcoceppi: ok, I had hoped the debug info would be useful as maybe a -v flag or something.
[17:49] <rick_h_> if the config is failing to validate because we're looking under the wrong charm I feel like that's important debug info
[17:49] <marcoceppi> rick_h_: so, probably? Possibly having an extra key per bundle name, say "output" or "messages" with a formatted message of "<service>: full message to user"
[17:50] <marcoceppi> rick_h_: then you can collect the debug info as you see fit and I can find better ways to expose it as a -v flag
[17:50] <marcoceppi> though, right now there is no concept of increased verbosity on proof
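A hypothetical shape for the per-bundle proof output being discussed; key names such as "messages" and "debug" are illustrative suggestions from the conversation, not a settled API:

```json
{
  "bundleName": {
    "messages": [
      "wordpress: Service config failed to validate: engine",
      "mysql: Could not find charm for service"
    ],
    "debug": {
      "wordpress": "config checked against cs:precise/wordpress"
    }
  }
}
```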
[17:50] <rick_h_> marcoceppi: ok, thinking/will look at adjusting it.
[17:51] <rick_h_> marcoceppi: thanks for looking it over and for the feedback
[17:51] <marcoceppi> rick_h_: np
[17:51] <marcoceppi> rick_h_: glad you're doing it and not me :)
[17:52] <rick_h_> marcoceppi: on the validating of config, do you happen to know if juju will do type coercion?
[17:52] <rick_h_> marcoceppi: e.g. if an int is passed to a string config field juju will reject, cast to string, or just ignore?
[17:53] <marcoceppi> rick_h_: I have no idea, rogpeppe hazmat ^
[17:54] <rick_h_> marcoceppi: meh, since we're proofing might as well play it as strict as possible.
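For context on the coercion question: charm config options declare a type in config.yaml, so a strict proof can flag a mismatched value rather than depend on whatever coercion juju may or may not do. A generic fragment with illustrative option names:

```yaml
# Generic config.yaml fragment (option names are illustrative). Under
# strict proofing, passing an integer for the string-typed "vhost-name"
# would be flagged rather than silently cast.
options:
  vhost-name:
    type: string
    default: example.com
    description: Virtual host name served by the charm.
  listen-port:
    type: int
    default: 8080
    description: Port the service listens on.
```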
[19:08] <Guest62958> Hey guys, I just lost contents of my home ~/.ssh and I think that that caused the bootstrap keys to be lost. I can ssh to the bootstrap (or any host) using my user's maas keys...
[19:09] <Guest62958> But juju can't communicate. I'm trying to avoid having to bootstrap again
[19:09] <Guest62958> I can run juju-status, but deploy or terminate machine won't work
[19:11] <marcoceppi> Guest62958: what version of juju are you using?
[19:12] <Guest62958> 0.6.1
[19:12] <ahasenack> hi, can someone mark this bug as "low" for me please? https://bugs.launchpad.net/charms/+source/landscape-client/+bug/1235281
[19:12] <_mup_> Bug #1235281: Landscape-client charm does not distribute SSL cert to clients for LDS installs <landscape-client (Juju Charms Collection):In Progress by ahasenack> <https://launchpad.net/bugs/1235281>
[19:18] <Guest62958> It has something to do with this ControlMaster setting in ssh
[19:19] <Guest62958> and ~/.ssh/socket-%r@%h:%p
[19:19] <Guest62958> probably used as ssh identity
[19:20] <gary_poster> hey jcastro, could you escalate https://code.launchpad.net/~hazmat/charms/precise/hadoop/trunk/+merge/191278 in the charm review list, please, if that's reasonable?  The charm is broken in core and gui atm without these changes
[19:21] <jcastro> hey marcoceppi, got time to review this asap? ^^^
[20:02] <marcoceppi> jcastro gary_poster: yeah
[20:04] <gary_poster> thank you marcoceppi
[20:11] <omgponies> nothing happens when I run shift-D in the juju gui ...  it used to work,  but today... .no.   Has the charm changed recently ?
[20:14] <gary_poster> omgponies, hm.  hasn't changed in at least a week or so.  This is standard charm, not changing the config to use trunk or anything right?
[20:15] <omgponies> correct,  just ran 'juju deploy juju-gui'
[20:15] <gary_poster> omgponies, does the export button on top right work?  looks like an open box with an arrow coming out?
[20:15] <omgponies> tried in both firefox and chrome ...  shift-D does nothing in both
[20:15] <bac> omgponies: i just tried it on comingsoon.jujucharms.com and it works.
[20:15] <omgponies> nope that doesn't appear to do anything either
[20:15] <gary_poster> bac, but that's trunk, not 0.10.1
[20:16] <gary_poster> omgponies, very weird.  I'll try spinning one up.  Can you check the JS error console to see if it is complaining at us?
[20:16] <omgponies> I can confirm it works for me at comingsoon. also
[20:17] <gary_poster> I wonder if it is a GUI bug specific to the charms you have...
[20:18] <omgponies> I get this in the JS console
[20:18] <omgponies> http://pastebin.com/kKYazJKA
[20:19] <gary_poster> ack
[20:19] <gary_poster> doesn't look related actually :-/
[20:20] <omgponies> figure if there's any javascript error it probably breaks all the javascript
[20:21] <gary_poster> not always, and I'm assuming other things still work?  You can click on a service and see the inspector, and you can browse charms on the left, and so on?
[20:21] <omgponies> a second console error when I shift-D, didn't notice it before - http://pastebin.com/X7G9bEnw
[20:21] <gary_poster> ah-ha! that looks more like it
[20:22] <omgponies> yeah I get half-working stuff when I click on the service
[20:22] <omgponies> the inspector comes up but is blank
[20:22] <gary_poster> wow.  ok.
[20:24] <omgponies> hrm,  in chrome it is blank,  in firefox it shows the inspector fine
[20:24] <gary_poster> (this would be a great time to request an export to see if we can dupe :-P )
[20:25] <omgponies> I thought I had one from last week but when I import it it complains ... so I went to build a new one
[20:25] <omgponies> buuuut,  I  do have a .sh that uses the juju cli to build it
[20:25] <gary_poster> omgponies, if you are willing to share that would be awesome
[20:26] <omgponies> https://github.com/paulczar/charm-championship/blob/master/monitoringstack.sh
[20:26] <omgponies> don't steal my contest entry! :P
[20:27] <gary_poster> thanks omgponies.  No worries, I won't. :-)  Am I right in assuming that you used the .sh to start the environment?
[20:27] <omgponies> yessir
[20:27] <omgponies> the gui is painful for doing complicated things ;)
[20:27] <omgponies> plus I don't want to be mistaken for a windows admin if somebody sees me pointing and clicking
[20:28] <bac> ha
[20:28] <gary_poster> :-)
[20:28] <gary_poster> cool thanks a lot omgponies.  I'll try to dupe and get back to you, though prob'ly won't be till tomorrow.  that ok?
[20:28] <omgponies> sure
[20:28] <gary_poster> cool, thanks again
[20:29] <omgponies> I'll do some messing around on my side too .. .see if I can find anything
[20:30] <omgponies> I hope I win the contest ...  I think it's the only way I'll be able to pay my amazon bill trying to get the damn thing to work :D
[20:30] <gary_poster> lol
[20:31] <omgponies> btw, I do have a .yaml file from the other week in that repo ... but the  juju-deployer errors on it -
[20:32] <omgponies> 2013-10-15 15:31:50 Deployment name must be specified. available: ('envExport',)
[20:33] <gary_poster> that sounds like a normal error
[20:33] <gary_poster> I mean
[20:33] <gary_poster> it is just telling you to specify what name you want
[20:33] <gary_poster> I forget the option
[20:33] <gary_poster> try --help
[20:35] <omgponies> ahhhh I got it
[20:35] <gary_poster> omgponies, am I making sense?  I don't have deployer hanging around atm but can get it if you need.  though I have to step out in 5 or 10 for awhile
[20:35] <gary_poster> oh cool
[20:35] <omgponies> juju-deployer -c monitoringstack.yaml envExport does it ... I guess the exporter names it as 'envExport'
[20:36] <gary_poster> right.  I think the deployer makes you specify even if there is only one
[20:36] <omgponies> yeah ...   and now I find a bug where juju-deployer doesn't know how to deal with subordinate services
[20:37] <gary_poster> !
[20:37] <gary_poster> hazmat ^^^ ?
[20:37] <omgponies> http://pastebin.com/TbrSyFKx
[20:39] <gary_poster> mmm...may be gui's fault.  I mean, the deployer could be less fragile but...I bet if you omit line 85 from https://github.com/paulczar/charm-championship/blob/master/monitoringstack.yaml it will work, omgponies
[20:40] <omgponies> trying that
[20:41] <marcoceppi> uh, hazmat gary_poster jcastro what happens with an already deployed service if you change the name of all the configuration options?
[20:42] <marcoceppi> seems like that could severely break upgrade-charm
[20:42] <gary_poster> marcoceppi, I suspect you lose them all
[20:43] <marcoceppi> I'm pretty conflicted on this change, I get why it's there
[20:43] <marcoceppi> But seriously, what's the deal with juju not handling periods? first charm names now this?
[20:43] <gary_poster> bcsaller, I see the exact same qa issues as before.  well, wait, maybe I haven't merged recently enough
[20:44] <gary_poster> marcoceppi, it's an issue from not protecting ourselves from mongo enough
[20:44] <bcsaller> gary_poster: I hope not :-/
[20:44] <omgponies> you can never protect yourself from mongo too much
[20:44] <gary_poster> heh
[20:45] <ryanc> Hello all.  I'm looking for anyone that has done work modifying nova-compute and/or quantum-gateway charms to use a separate network interface for quantum networks/OVS instead of piping the GRE tunnels over the default interface
[20:45] <marcoceppi> gary_poster: so this is an issue with just juju-core? I'm inclined to say this is a bug in juju-core to be sorted. This charm's existed before 1.X and to my understanding has worked back in the ZK days
[20:45] <gary_poster> ooh better bcsaller.
[20:45] <marcoceppi> gary_poster: breaking deployed versions of the charm to address a juju/mongo issue doesn't seem like the solution
[20:46] <gary_poster> marcoceppi, bug is in core and gui.  I think this is a practical resolution for an immediate problem
[20:46] <gary_poster> bcsaller omgponies marcoceppi I have to run now.  will return later.  bcsaller so far so good
[20:46] <bcsaller> great, thanks
[20:46] <marcoceppi> gary_poster: it will break deployments for anyone who has this deployed and runs upgrade charm, not a very practical solution imo
[20:46] <marcoceppi> gary_poster: will post to the merge request
[20:48] <omgponies> marco: could you mark in the description of the old config names that they're deprecated, and then have upgrade-charm look for them having values and config-set the new config names to the same value ?
[20:49] <marcoceppi> omgponies: not really, juju can't see descriptions. It seems like this is only broken in juju-core. I guess we can safely merge this since technically < 0.7 is deprecated already.
[20:50] <hazmat> omgponies,  gary_poster, marcoceppi deployer definitely knows subordinate services.
[20:50] <marcoceppi> and you'd need > 1.0  to actually deploy and change config for this charm
[20:53] <marcoceppi> hazmat: omgponies it definitely can handle subs, amulet makes heavy use of subs and deployer
[20:53] <marcoceppi> omgponies: what problems are you seeing?
[20:53] <omgponies> It actually looks to be a bug with the GUI export
[20:53] <marcoceppi> :)
[20:54] <omgponies> here's the problem line in the export  .yaml - https://github.com/paulczar/charm-championship/blob/master/monitoringstack.yaml#L85
[20:55] <hazmat> there are a couple of other issues with the gui export
[20:55] <hazmat> it exports default config options as though they were explicitly set
[20:55] <omgponies> I removed that line and am re-running to see if it still deploys fine
[20:58] <hazmat> but those shouldn't be problematic for deployer usage
[21:00] <omgponies> here's my deployer output - http://pastebin.com/TbrSyFKx
[21:01] <omgponies> with the error I get
[21:02] <hazmat> nice, the config value export seems to be fixed on trunk
[21:03] <hazmat> omgponies, sorry talking about a different issue if you remove line 85 and 93 from that export you should be good
[21:04] <hazmat> omgponies, also re elasticsearch.. i'd suggest fixing that charm to not make it ec2 only
[21:05] <omgponies> it's not ec2 only ...  the ec2 portion is to allow it to use ec2-auto-discover for clustering
[21:05] <omgponies> because it wants to use multicast by default
[21:07] <omgponies> running juju-deployer ... it never seems to exit ... is this on purpose? or does it suggest something else whacky in the .yaml ?
[21:07] <marcoceppi> omgponies: is it outputting?
[21:08] <omgponies> yeah it did a bunch of  'Deploying server .....' lines and then has been sitting for 15mins
[21:08] <omgponies> with nothing else
[21:10] <hazmat> omgponies, i always run it with -v -W and it will tell you what its doing / waiting on
[21:10] <omgponies> ahhh k, will run again later and see
[21:10] <hazmat> it has some built-in waiting for various things to happen and timeouts as well
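The invocation hazmat recommends, sketched against the file from this conversation; "envExport" is the deployment name the GUI exporter assigns:

```shell
# Run deployer with verbose/watch flags so it reports what it is doing
# and what it is waiting on, instead of sitting silently for minutes.
juju-deployer -v -W -c monitoringstack.yaml envExport
```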
[21:12] <hazmat> omgponies, multicast is available by default in most of the public clouds (hpcloud, rackspace, google, etc)
[21:12] <hazmat> er.. isn't
[21:12] <hazmat> its pretty easy to setup the discovery with a peer relation
[21:13] <omgponies> yeah I know ...  EC2 discovery plugin makes it easier ... so I targeted that first
[21:14] <omgponies> I'll have some stuff in there to do unicast discovery,  but I haven't had the chance to test it properly
[21:15] <omgponies> in the config.yaml of elasticsearch: zenmasters
[21:16] <omgponies> description field explains how it works
[21:19] <omgponies> I'll probably do something like the way hadoop or mysql charms set roles via `service groups`
[21:19] <omgponies> deploy elasticsearch:master,  deploy elasticsearch:slave,   deploy elasticsearch:nodata, etc
[21:20] <hazmat> it's not really necessary.
[21:21] <hazmat> it only exists to do introductions, juju can do introductions for you
[21:22] <hazmat> i.e. a peer relation, and any unit of elasticsearch knows the addresses of all other nodes
[21:26] <omgponies> elasticsearch only needs to talk to one member of the cluster, which will then tell it about all other members. common pattern is one or two 'masters' which you set as zenmasters in the configs for all other nodes and they handle the introductions. setting organizational groups allows for further breakdown of clustering ... for instance if you want to make it rack aware, or you want to make a search node that holds no data of its own
[21:26] <omgponies>  ... you can create an organizational group and then have config settings for that group to create the elasticsearch config required to make it perform in the correct way
[22:09] <hazmat> omgponies, the primary purpose is just a way to get addresses of the other es nodes.
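A sketch of the peer-relation approach hazmat describes, assuming a peers relation named "cluster" declared in the charm's metadata.yaml; the relation name, file path, and hook name are illustrative, not taken from the actual elasticsearch charm:

```shell
#!/bin/sh
# cluster-relation-changed hook sketch: gather every peer unit's address
# to build a unicast discovery list for elasticsearch, so the charm does
# not need EC2-only auto-discovery or multicast.
set -e
hosts=""
for unit in $(relation-list); do
    addr=$(relation-get private-address "$unit")
    hosts="$hosts,\"$addr\""
done
# Append the unicast host list to the config (path is illustrative);
# ${hosts#,} strips the leading comma from the accumulated list.
echo "discovery.zen.ping.unicast.hosts: [${hosts#,}]" >> /etc/elasticsearch/elasticsearch.yml
service elasticsearch restart
```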