[00:00] <m_3> It's sad that a "reaper killing zombies" is the less macabre of descriptions
[00:01] <m_3> the alternative "reaper killing orphans" just has such a nice ring to it :)
[00:02]  * m_3 channeling Scrooge McDuck
[00:07] <marcoceppi> I prefer the latter :)
[00:07]  * marcoceppi wanders off
[08:58] <yolanda> hi all, what's the process to review a charm? we've finished a charm for openerp 6.1
[09:15] <fwereade_> yolanda, File a bug against charms at https://launchpad.net/charms/+filebug . Make sure it has the tag 'new-charm', and a status of "New", "Confirmed", or "Triaged", otherwise reviewers will not see it. If you are working on the charm and not ready for reviews, remove the new-charm tag or mark the status as "In Progress".
[12:39] <bbcmicrocomputer> hmm, I seem to be able to set a boolean service config value using a config file, but not directly on the command line
[12:40] <bbcmicrocomputer> e.g. juju set -e local service boolean=False
[12:40] <bbcmicrocomputer> am I doing something wrong?
[12:43] <bbcmicrocomputer> I get 'Invalid value for boolean: 'False'' if I do it via the command line btw
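[Editor's note] The config-file route bbcmicrocomputer says does work might look like this; the service and option names below are illustrative, and booleans set through a YAML file can be passed at deploy time with `--config`:

```yaml
# settings.yaml -- hypothetical service and option names
service:
  boolean: false
```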
[12:48] <niemeyer> Hey everybody
[12:57] <melmoth> hi there! When I launch "juju bootstrap", it creates a new instance without using any predefined ssh key; how can it then log into it?
[13:18] <avoine> melmoth: it uses the ssh key in your ~/.ssh/id_rsa.pub file
[13:18] <melmoth> ok.
[13:18] <melmoth> seems to work. it just seemed like magic.
[13:21] <hazmat> bbcmicrocomputer, that's a bug
[13:23] <bbcmicrocomputer> hazmat: I'll file a ticket
[13:24] <hazmat> bbcmicrocomputer, cool, thanks
[13:29] <_mup_> Bug #979859 was filed: Unable to set boolean config value via command line <juju:New> < https://launchpad.net/bugs/979859 >
[13:48] <_mup_> Bug #979879 was filed: juju status fails when machines error on startup <juju:New> < https://launchpad.net/bugs/979879 >
[14:33] <imbrandon> mmm fresh 2 ltr of mt dew sitting on my desk, mmmm good way to start the day
[14:43] <bac> i am getting an error that 'mem' is not a valid constraint for deploy
[14:43] <bac> juju deploy ...  --constraints "cpu=8 mem=68.4G" produces
[14:43] <bac> juju: error: unrecognized arguments: mem=68.4G
[14:44] <bac> m_3, hazmat: want me to file a bug or have i done something wrong?
[14:49] <m_3> bac: I haven't used constraints yet... no idea
[14:49] <bac> m_3, ok, thanks
[14:51] <marcoceppi> I haven't tried the mem constraint, only arch and instance-type
[14:52] <hazmat> odd
[14:53] <bkerensa> jcastro: is the price the same for UDS attendees to go to the cloud summit?
[14:53] <jcastro> I believe so
[14:54] <bkerensa> k
[14:54] <bkerensa> jcastro: I might be interested in going so I will see :)
[14:55] <jcastro> I know where you have an extra $100 to spend!
[14:55] <jcastro> dunno, might just wanna ask marianna when you get there
[14:55] <jcastro> I don't think the audience is going to be for people like us.
[14:55] <jcastro> though I am surely supposed to go, hah
[14:56]  * imbrandon thought about it as well
[14:56] <imbrandon> :)
[14:58] <fwereade_> bac, are you sure you got the quoting right there? the error message looks like the command was trying to interpret "mem=68.4G" as an arg on its own rather than as an element of --constraints
[15:06] <bac> fwereade_: the complete command was:
[15:06] <bac> juju deploy --config=$REPO/$SERIES/buildbot-slave/examples/lpbuildbot.yaml --repository=$REPO local:buildbot-slave --constraints "cpu=8 mem=68.4G"
[15:07] <bac> fwereade_: but i agree, that is what it looks like is happening
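[Editor's note] The quoting failure bac and fwereade_ suspect can be reproduced with plain shell word-splitting. In this small sketch the `count_args` helper is illustrative, not part of juju; it shows how losing the quotes turns one `--constraints` value into two separate arguments, matching the `unrecognized arguments: mem=68.4G` error:

```shell
# Illustrative helper: report how many arguments the shell actually passed.
count_args() { echo $#; }

count_args --constraints "cpu=8 mem=68.4G"  # quoted: 2 arguments, as intended
count_args --constraints cpu=8 mem=68.4G    # unquoted: 3; 'mem=68.4G' is stray
```

If an intermediate script drops the quotes before re-invoking juju, the stray `mem=68.4G` argument would produce exactly the error bac reported.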
[15:08] <fwereade_> bac, would you run that again with juju -v deploy and pastebin me the traceback please?
[15:09] <bac> fwereade_: i will when my current run finishes
[15:09] <bac> thanks
[15:09] <fwereade_> bac, sweet, tyvm
[15:20] <fwereade_> bac, also, please let me know what version you're running; I can't repro it myself
[15:21] <bac> fwereade_:  0.5+bzr519-1juju5~precise1
[15:24] <fwereade_> bac, thanks
[15:36] <marcoceppi> whoa, I think I was just impressed
[15:36] <marcoceppi> If you terminate a machine outside of Juju, will Juju attempt to re-spin it up?
[15:41] <SpamapS> marcoceppi: I believe the provisioning agent asserts something like that
[15:42] <fwereade_> marcoceppi, yeah, if the instance a machine's assigned to apparently doesn't exist it will spawn a new one
[15:43] <marcoceppi> that's. awesome.
[15:44] <imbrandon> nice
[15:44] <imbrandon> yea thats golden
[15:44] <fwereade_> I think it's done that since... before I joined, actually
[15:45] <marcoceppi> fwereade_: I've never tried, but since I can't connect to a very old bootstrap due to upgrading locally from PPA, I tried to take one of the machines down I knew I didn't use
[15:45] <marcoceppi> then it popped back up a few mins later, to my surprise
[15:46] <fwereade_> marcoceppi, heh, maybe a double-edged sword there :(
[15:46] <marcoceppi> fwereade_: yeah, but it's okay. Worth it for this corner case
[15:48] <marcoceppi> It gives me peace of mind
[15:50] <bkerensa> jcastro: my interest in going is so that I could bring back some knowledge and buzz to share with our loco :) we are kind of the front lines in our region so if people ask about cloud I hope to speak competently about it
[15:50] <bkerensa> :D
[16:01] <flacoste> if i use juju from the pkgs ppa, do I need to set juju-origin in my environments.yaml file?
[16:02] <marcoceppi> flacoste: yes
[16:02] <marcoceppi> juju-origin: ppa
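[Editor's note] For reference, the setting marcoceppi describes goes in `~/.juju/environments.yaml`; a minimal sketch, where the environment name and the other keys are illustrative:

```yaml
environments:
  sample:              # environment name is illustrative
    type: ec2
    juju-origin: ppa   # install juju on nodes from the PPA
```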
[16:03] <flacoste> thx marcoceppi
[16:06] <jkyle> heya
[16:06] <jkyle> is this FAQ fairly accurate? https://juju.ubuntu.com/docs/faq.html
[16:11] <imbrandon> jkyle: i'd say so, although juju is a fast-moving target right now, so, yea
[16:12] <jkyle> of note was the "not production ready"
[16:13] <jkyle> though, my site isn't necessarily "production"...reasonable stability and a base level of "functional" are important hehe
[16:14] <imbrandon> jkyle: well, it depends on how willing you are to get down and dirty; a small group of us have successfully been deploying OMGubuntu.co.uk with juju for a month or so now
[16:14] <jkyle> so I'm wondering is it "stable enough" to use in other than a pure testing environment?
[16:14] <imbrandon> but that's not to say it's without hiccups now and then, ask marcoceppi :)
[16:15] <jkyle> I'd also be interested in chatting with someone from the core team, maybe in a pm?
[16:16] <jkyle> I'll hit the lists up, not on the vpn for mail yet though :P
[16:16] <imbrandon> the core team is all around; most anyone in here who speaks with a little know-how is one of the core members. myself, i'm just a charmer, but i'm guessing starting off with your questions in public might garner a bit more of a reaction
[16:16] <marcoceppi> jkyle: I think it's "stable enough" to use in things other than pure testing; it's more of a full blown beta at the moment, though pretty stable. The only problem I've really encountered is occasional breakage during updates of the juju package, though that shouldn't happen much if ever going forward
[16:17] <jkyle> alright, I'll give it a shot
[16:18] <marcoceppi> jkyle: based on your requirements, I'd say juju is stable enough to use; it has most of its release features implemented and tested, to my understanding
[16:18] <imbrandon> marcoceppi: speaking of, promstrangulate me today  :)
[16:19] <marcoceppi> imbrandon: after omg deployment :)
[16:19] <imbrandon> :)
[16:21] <jkyle> looking over docs. does juju do metadata collection on your nodes?
[16:22] <jcastro> bkerensa: just hang out with me, marcoceppi, and imbrandon, you'll be all set.
[16:23] <imbrandon> :)
[16:23] <marcoceppi> jkyle: how so?
[16:23] <jkyle> like chef's Ohai
[16:24] <imbrandon> not really, kinda; more like it knows from the start and keeps the state
[16:24] <jkyle> so you can query your infrastructure, e.g. knife search node 'role:mysql-master' returns a list of all nodes with role mysql-master
[16:25] <jkyle> that might be something worthy of adding (talking to myself)
[16:25] <marcoceppi> jkyle: yes, kind of. Juju isn't aware of anything outside of its deployed environment, but it tracks where everything is
[16:26] <imbrandon> marcoceppi: is apc and the like still on ?
[16:26] <jkyle> interesting... Could do some data collection in a charm and dump data into a db
[16:26] <imbrandon> crap wrong window
[16:45] <flacoste> i'm getting the following error:
[16:45] <flacoste>   /var/lib/cloud/instance/scripts/runcmd: 4: /var/lib/cloud/instance/scripts/runcmd: juju-admin: not found
[16:45] <flacoste> that's in cloud-init-output.log
[16:45] <flacoste> on a node
[16:45] <flacoste> after a juju bootstrap
[16:46] <SpamapS> flacoste: juju failed to install further up then
[16:47] <flacoste> Get:2 http://ppa.launchpad.net/juju/pkgs/ubuntu/ precise/main juju all 0.5+bzr519-1juju5~precise1 [496 kB]
[16:47] <flacoste> SpamapS: doesn't look like it
[16:47] <flacoste> but dpkg -L juju does say that it's not installed :-(
[16:49] <hazmat> jkyle, that notion of role is basically what juju's service units are, they are all clearly identified..
[16:49] <hazmat> you can query out from the cli all the machines of a given service
[16:50] <hazmat> juju status mysql
[16:50] <hazmat> will list out the nodes of the service named 'mysql'
[16:50] <hazmat> but it's not a full inventory style like Ohai
[16:51] <hazmat> it also doesn't expose the notion of ad hoc querying from within a charm/service unit, because a unit's service is explicitly related to other services, and as a result has private bi-directional channels with 'data bags'
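[Editor's note] hazmat's `juju status mysql` query can be post-processed with standard tools. This sketch runs against a hypothetical, abbreviated status output (the YAML shape here is illustrative and not guaranteed to match any particular juju version) to pull out the machines backing a service:

```shell
# Hypothetical, abbreviated 'juju status mysql' output for demonstration.
status_yaml='services:
  mysql:
    units:
      mysql/0:
        machine: 1
      mysql/1:
        machine: 2'

# List the machine numbers assigned to the service's units.
printf '%s\n' "$status_yaml" | awk '/machine:/ {print $2}'
```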
[16:52] <jkyle> hazmat: ah, gotcha. I also saw that only one "service" can be run per node. So your services are more like a collection of services than an aggregation of smaller services (e.g. role of roles)
[16:55] <hazmat> jkyle, well a service is composed of units.. each unit representing at minimum a container, and typically a machine.. there is a separate notion of policy charms that can be deployed alongside existing services... the collection and aggregation notion doesn't seem quite right.. but yes, coming from a roles-stamped-onto-machines perspective i can see where it arises.. effectively the independent unit should deliver all of the functionality required to fulfill its charm's interfaces.. that can be as large or small as is useful.
[16:56] <SpamapS> flacoste: it was downloaded, but never installed.. weird
[17:00] <hazmat> jkyle, ie.. so a web app charm might set up nginx and a rails app, expose an http endpoint, and depend on a db, with the db fulfilled by a separate service and a reverse proxy by another.. that's the ideal anyways.. where the service's charm is doing the important bits germane to its functionality.. but the definition of a charm is pretty flexible and its exposed interfaces are pretty flexible; it could be doing the aggregation you mentioned and have the db and proxy and multiple workers local to itself on a per-unit basis (even though that's definitely not best practice).. the key differentiation is the context-independent reuse: instead of having to fork and glue a databag onto each service you want to communicate with, the charms themselves model the relationships between the services and their communication as part of their definition.
[17:10] <flacoste> how can i reset a juju bootstrap?
[17:10] <flacoste> it didn't work
[17:11] <flacoste> and i want to start from scratch
[17:11] <flacoste> zookeeper was never installed
[17:29] <hazmat> flacoste, juju destroy-environment && juju bootstrap
[17:30] <jkyle> hazmat: interesting
[17:42] <bac> hi fwereade_, i have another issue with setting constraints i'd like your help on, if you have a moment.
[17:46] <fwereade_> bac, I'm very sorry; I *might* be able to be with you in an hour or so :(
[17:46] <bac> ok
[17:46] <fwereade_> bac, if you précis it now I can answer immediately once I make it back
[17:47] <bac> fwereade_: i used this script to try to deploy two related services:  http://pastebin.ubuntu.com/926506/
[17:47] <bac> at line 13 i set the constraints i wanted, needed 8 cpus
[17:47] <bac> everything came up, but aws console shows the second service is using type m1.small.
[17:48] <bac> so it looks like the call to set constraints didn't do what i expected it to do
[17:48] <bac> this was a work-around to the issue i reported earlier where i couldn't specify the mem constraint on the deploy step
[17:55] <flacoste> hazmat: thx
[17:55] <flacoste> SpamapS: so, are you doing a juju upload today?
[17:57] <hazmat> flacoste, he's been waiting on us slackers, but yes we should be good for an upload today
[17:57] <flacoste> hazmat: cool, deadline is in 3 hours :-)
[17:58] <hazmat> flacoste, good to know :-)
[17:58] <flacoste> hazmat: make sure your tests pass this time ;-)
[17:58] <hazmat> hah
[18:01] <SpamapS> flacoste: no, but juju is not in main so it will come when it's ready
[18:01] <SpamapS> universe is still open (with release team approval) for a little while longer
[18:01] <flacoste> SpamapS: ah, ok
[18:01] <SpamapS> hazmat: I plan to test quite a bit before this final upload
[18:02] <SpamapS> the last one broke all existing installations
[18:02] <SpamapS> I suspect this one might too
[18:02] <hazmat> SpamapS, we've tried to be much more careful about not doing that
[18:03] <hazmat> although the subordinate work does land a change..
[18:03] <hazmat> its got transparent upgrade..
[18:04] <hazmat> but code version drift.. it's not backwards compatible
[18:04] <hazmat> that landed last week
[18:05] <SpamapS> Right, so, basically, anybody who has been thoughtfully testing beta2 ..
[18:05] <SpamapS> gets screwed?
[18:05] <SpamapS> again
[18:05]  * SpamapS is feeling hostile .. sorry
[18:06] <SpamapS> hazmat: we need to start gating trunk on not breaking existing bootstrapped envs ;)
[18:07] <_mup_> juju/managed-zk-client r513 committed by kapil.thangavelu@canonical.com
[18:07] <_mup_> start the global settings watch before we start
[19:53] <marcoceppi> How do I repair an "agent state down"
[19:56] <shazzner> destroy the service and redeploy it :p
[19:57] <shazzner> mysql always goes down if I forget to break a relation and destroy its related service
[19:59] <marcoceppi> weird, this happened because of high load on the machine. So I just did an add-unit then remove-unit for the downed agent
[20:12] <hazmat> marcoceppi, woah.. what?
[20:13] <hazmat> marcoceppi, you deployed mysql, added a relation and removed a relation, and mysql agent had an error?
[20:13] <hazmat> oh.. high load
[20:14] <hazmat> marcoceppi, there's a fix for high load/transient network problems on trunk should be in the upload to precise shortly
[20:15] <hazmat> basically, if the load gets high enough that the agent can't run for a long time, it's possible that it becomes disconnected, and if the disconnection is long enough its db/zk session expires. the code in trunk allows it to re-establish a new session.
[20:17] <shazzner> hazmat: awesome :)
[20:18] <marcoceppi> hazmat:  gotchya