[00:03] <jcastro> SpamapS, next time you do want to do a review though, ping me, I can at least pick up the easy ones to prescreen for ya.
[00:10] <SpamapS> jcastro: I need to get the python charm helpers into charm-tools actually... that's the current priority
[00:12] <jcastro> I mean an opportunistic "whenever"
[00:19] <hazmat> adam_g, if you have the provisioning agent log that would be helpful to diagnose.. is that against maas or orchestra?
[00:19] <hazmat> ooh.
[00:19] <hazmat> baremetal that is
[00:20] <hazmat> that is odd, its not even showing the unit
[00:24] <adam_g> hazmat: yeah, i watched the logs and there was nothing odd, let me go see if i can grep out that deployment
[00:24] <adam_g> its since been working
[00:25] <adam_g> this is an orchestra provider
[00:28] <adam_g> http://paste.ubuntu.com/884140/
[00:28] <hazmat> adam_g, with no units like that, it would appear the units were destroyed via juju remove-unit
[00:29] <hazmat> adam_g, that's a fragment of the log
[00:30] <adam_g> hazmat: on ec2, ive seen juju get trigger happy and start taking out nodes that i've manually added to the security group. is it capable of doing similar things with the orchestra provider?
[00:30] <adam_g> hazmat: how much context would you like? the log is big
[00:31] <hazmat> adam_g, yes.. it owns the security group on ec2, and will treat things it doesn't know about on ec2 as runaways and clean them up.. that behavior is also present on orchestra
[00:31] <hazmat> adam_g, but again something would have to have removed the rabbitmq unit
[00:32] <hazmat> ie.. juju remove-unit
[00:32] <hazmat> and even then juju wouldn't kill the machine.. because it knows about it
[00:32] <hazmat> and if the machine were dead out of band, the unit would still show
[00:32] <hazmat> adam_g, i'll take as much context as you have
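The runaway cleanup hazmat describes (anything carrying the environment's security group that juju has no record of gets reaped) can be sketched roughly like this; the function and names are illustrative, not juju's actual provider code:

```python
def find_runaways(instances_in_env_group, known_instance_ids):
    """Instances carrying the environment's security group that juju
    has no record of are treated as runaways and terminated (sketch).
    Machines juju knows about are never touched, which is why a unit
    on a dead machine would still show in status rather than vanish."""
    return [i for i in instances_in_env_group if i not in known_instance_ids]
```

So an instance added to the group by hand, as adam_g did on ec2, looks exactly like a runaway to this check.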
[00:33] <adam_g> right
[00:33] <adam_g> sure one sec
[00:33] <adam_g> http://paste.ubuntu.com/884146/
[00:34] <adam_g> ^ that is from the teardown of the previous deployment through till the deployment following the failure
[00:34] <adam_g> http://paste.ubuntu.com/884147/ <- thats the whole thing
[00:35] <SpamapS> hazmat: does zk have transactions of any kind, or could it be a transient thing caused by a timeout of some kind between client and zk?
[00:38] <hazmat> SpamapS, it has atomic operations we use, it has a limited tx in 3.4.. as for the cause of this issue, i haven't seen anything in the logs that shows me its a bug
[00:38] <hazmat> versus just acting on executed command
[00:38] <hazmat> adam_g, how are you tearing down the env?
[00:39] <hazmat> hmm.. it would be nice to get a dump of
[00:39] <hazmat> zk
[00:40] <hazmat> as is i see the unit was destroyed explicitly, and the machine to which it was assigned was removed as well
[00:40] <hazmat> a service with no units, looks like the original status output
[00:40] <adam_g> hazmat: i keep the bootstrap node in place, and do something like: destroy all services, terminate all machines but the bootstrap, usually sleeping for some seconds between terminate-machine calls to allow the power unit to catch up with requests
[00:41] <adam_g> hazmat: 'as i see the unit was destroyed explicitly'... which unit? the rabbitmq that is missing its machine?
[00:41] <SpamapS> ahh so remove-unit won't clean up an empty service
[00:42] <hazmat> adam_g, it's missing any units
[00:42] <hazmat> SpamapS, yes
[00:42] <SpamapS> adam_g: does add-unit resolve things?
[00:42]  * hazmat tries to come up with a remote dump zk script
[00:43] <hazmat> yeah.. that would verify
[00:43] <adam_g> SpamapS: i can try next time i hit this...
[00:44] <hazmat> adam_g, do you have this teardown automated?
[00:44] <hazmat> adam_g, you should try the charmrunner tools
[00:44] <hazmat> hmm
[00:44] <hazmat> actually i guess the snapshot/restore assumes a local provider
[00:45] <hazmat> easy to fix though
[00:47] <adam_g> hazmat: yea, teardown is automated. i'd definitely like to combine efforts and standardize on whatever tools you guys are using at some point
[00:47] <adam_g> FWIW, i'd never seen this issue until recently though, last 1.5 weeks or so
[00:50] <hazmat> adam_g, pls keep that env alive for a few minutes more if not already dead
[00:50] <adam_g> hazmat: still in place
[00:50] <hazmat> adam_g, i'm almost done with a remote dump zk script
[00:54] <hazmat> adam_g, the tools are a bit split.. i've got a few useful ones in charmrunner (charm test thingy), and there are some in jujujitsu
[00:54] <hazmat> SpamapS, btw. nice name
[00:54] <hazmat> adam_g, here's the script http://paste.ubuntu.com/884162/
[00:54] <hazmat> you can just python dumpzk.py -f filen.zip -e env_name
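The pastebin has since expired; the core of such a dump script is just a depth-first walk that archives every znode. A minimal sketch with the ZooKeeper client abstracted to two callables (the real dumpzk.py also handles connecting to the named environment, which this omits):

```python
import zipfile

def dump_tree(get_children, get_data, archive_path, root="/"):
    """Walk a ZooKeeper-style tree and store each node's data in a
    zip archive, one entry per path (sketch). `get_children(path)`
    returns child names; `get_data(path)` returns the node's bytes."""
    with zipfile.ZipFile(archive_path, "w") as zf:
        stack = [root]
        while stack:
            path = stack.pop()
            # "/" would be an invalid archive name, so label it "_root"
            zf.writestr(path.lstrip("/") or "_root", get_data(path))
            for child in get_children(path):
                stack.append(path.rstrip("/") + "/" + child)
```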
[00:55] <SpamapS> hazmat: name?
[00:55] <hazmat> SpamapS, the jujujitsu name
[00:55] <SpamapS> Oh, hah, yeah, I love it. :)
[00:56] <SpamapS> I do hope others like the idea and want to dump more things into it.
[00:59] <adam_g> hazmat: people.canonical.com/~agandelman/zk.zip  this is from the current deployment in the same environment. the failed unit in that pastebin is gone by now. ill hang onto that script and dump it next time i run into the issue
[00:59] <hazmat> adam_g, cool
[01:04] <hazmat> adam_g, till then afaics from looking at status code, the rabbitmq unit was removed explicitly with juju remove-unit, and then the machine removed with juju terminate-machine
[01:06] <hazmat> adam_g, but that seems odd, since i assume you're just using destroy-service and terminate-machine for cleanup
[01:06] <adam_g> hazmat: thats strange. nowhere in any of the automation we use is remove-unit called
[01:06] <adam_g> right
[01:06] <hazmat> anyways.. if you can run that script if it happens again that would be helpful
[01:06] <adam_g> for sure
[03:34] <_mup_> Bug #955677 was filed: provisioning agent crashes when deploying to a maas node <juju:New> < https://launchpad.net/bugs/955677 >
[10:14] <_mup_> Bug #955576 was filed: 'local:' services not started on reboot <juju:New> <juju (Ubuntu):Confirmed> < https://launchpad.net/bugs/955576 >
[16:11] <_mup_> juju/local-survive-restart r477 committed by kapil.thangavelu@canonical.com
[16:11] <_mup_> upstartify local provider zk
[16:31] <jamespage> \o/
[16:45] <SpamapS> hazmat: my hero! :)
[16:45] <SpamapS> lxc and the local provider have gotten much better of late
[16:49] <hazmat> SpamapS, its mostly unchanged outside of the upstartification of some bits
[16:50] <hazmat> SpamapS, there's still some love needed for the whole failure scenario around lxc-wait
[16:51] <SpamapS> hazmat: yeah thats being looked at upstream... apparently you can only have one lxc-wait running at a time, and that is the krux of the problem
[16:51] <SpamapS> crux even .. :-P
[16:51] <hazmat> SpamapS, well.. we're not properly passing it a bit mask around multiple states, we're just waiting for it to get to started, and on error it never does. but yeah.. the ability to ask it multiple times is also nice
[16:51] <hazmat> er. concurrently
[16:53] <SpamapS> hazmat: apparently it listens for a signal from lxc-start on a private socket so only one lxc-wait can be listening at one time
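The fix hazmat sketches is waiting on a set of states rather than only "started": a container that fails at boot goes to ABORTING/STOPPED and never reaches RUNNING, so a single-state wait hangs forever. A rough illustration (not lxc's or juju's actual code; real `lxc-wait` takes a mask like `-s 'RUNNING|STOPPED'`):

```python
import time

def wait_for_states(get_state, wanted, timeout=30, poll=0.1):
    """Poll until the container reaches any state in `wanted`.
    Passing several states at once (e.g. {"RUNNING", "ABORTING"})
    closes the gap hazmat describes: on startup failure we return
    the error state instead of blocking until the timeout."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        state = get_state()
        if state in wanted:
            return state
        time.sleep(poll)
    raise TimeoutError("never reached %s" % "|".join(sorted(wanted)))
```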
[16:53] <_mup_> Bug #956183 was filed: Support suspending environment <juju:New> < https://launchpad.net/bugs/956183 >
[16:53] <SpamapS> hazmat: I'm pretty sure that master-customize also doesn't error on failure of any of its commands.
[17:22] <adam_g> hazmat: around?
[17:25] <hazmat> adam_g, yes
[17:25] <hazmat> headless chicken
[17:28] <adam_g> same here heh
[17:29] <SpamapS> maybe try a tourniquet to stop the bleeding?
[17:30] <adam_g> hazmat: so there seems to be some issues ATM w/ juju + essex, which i think are security group related.  i was going to see if you had a script/doc around that mimics the boto calls juju runs in the ec2 provider. i was having trouble recreating using euca2ools. i can extract it all myself if you're bogged down, but figured id check first
[17:33] <hazmat> un momento
[17:35] <hazmat> adam_g, http://paste.ubuntu.com/885124/
[17:36] <hazmat> those are all the calls, but re security groups, there is one for the environment, and then one per machine
[17:36] <SpamapS> adam_g: one thing.. juju uses txaws, not boto
[17:36] <hazmat> the environment has a rule to allow for internal group access
[17:36] <hazmat> and then the ones per machine are manipulated to allow for external access as the services with units on a given machine are exposed
[17:38] <hazmat> the environment group is also used to help identify which machine in the provider juju has responsibility for, ie as a form of tagging
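To recreate that layout by hand on ec2 or nova, it helps to enumerate it first. A sketch of the group/rule plan hazmat describes — the `juju-<env>` / `juju-<env>-<machine>` naming matches what juju appears to use, but treat it and the rule tuples as illustrative:

```python
def plan_security_groups(env_name, machine_ids, exposed_ports):
    """One environment-wide group whose self-referential rule allows
    internal group access (and doubles as juju's tag for machines it
    owns), plus one group per machine whose port rules track what has
    been exposed (sketch)."""
    env_group = "juju-%s" % env_name
    # the environment group references itself: members may talk to members
    plan = {env_group: [("group", env_group)]}
    for m in machine_ids:
        machine_group = "juju-%s-%s" % (env_name, m)
        plan[machine_group] = [("tcp", port) for port in exposed_ports.get(m, [])]
    return plan
```

The self-referential rule in the environment group is the part adam_g suspects is screwy on nova (rules that reference other groups rather than CIDRs).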
[17:43] <adam_g> hazmat: thanks, ill check those. id like to be able to recreate the same security groups + rules manually on ec2 and nova. i think theres something screwy going on with rules that reference other groups
[17:45] <adam_g> great idea>> iptables-save for ec2 security groups
[17:47] <hazmat> adam_g, yeah.. it was a bit wonky last cycle as well for self-referential security group rules, ie. the metadata looked suspect, i think it worked well because effectively the enforcement wasn't in place.
[18:08] <jcastro> SpamapS, I've got two incoming charms that need a round 2 review
[18:08] <jcastro> and m_3 is chilling at some ruby conference
[18:08] <jcastro> but these will be easy. :)
[18:08] <SpamapS> cool
[18:09] <jamespage> jcastro, I can prob pickup some review later tomorrow if that would help?
[18:09] <jcastro> subway IRC and Alice IRC.
[18:09] <jcastro> jamespage, actually what would help is you monitoring the incoming queue on occasion
[18:09] <jcastro> let me get you a link
[18:09] <jamespage> jcastro, sure
[18:10] <jamespage> maybe we should try doing something pilot'ish like we do for Ubuntu dev?
[18:10] <jcastro> yeah
[18:10] <jcastro> for now though:
[18:10] <jcastro> https://bugs.launchpad.net/charms/+bugs?field.tag=new-charm
[18:10] <jcastro> any of the new ones
[18:11] <jamespage> like saltmaster or gearman?
[18:11] <jcastro> and Fix Committed
[18:11] <jcastro> saltmaster is incomplete, updated
[18:11] <jcastro> fix committed is when the person was incomplete then wants another review
[18:13] <jamespage> rightoh
[18:13] <jamespage> and New is up for first review?
[18:13] <jcastro> right
[18:21] <_mup_> juju/refactor-machine-agent r461 committed by jim.baker@canonical.com
[18:21] <_mup_> Merged trunk & resolved conflict
[18:53] <_mup_> juju/relation-reference-spec r6 committed by jim.baker@canonical.com
[18:53] <_mup_> Initial commit
[19:34] <_mup_> juju/relation-hook-commands-spec r6 committed by jim.baker@canonical.com
[19:35] <_mup_> Initial commit
[19:42] <_mup_> juju/relation-info-command-spec r6 committed by jim.baker@canonical.com
[19:42] <_mup_> Initial commit
[19:42] <_mup_> Bug #956352 was filed: Enable relation hook commands to work with arbitrary relations. <juju:In Progress by jimbaker> < https://launchpad.net/bugs/956352 >
[19:45] <_mup_> juju/juju-status-changes-spec r6 committed by jim.baker@canonical.com
[19:45] <_mup_> Initial commit
[19:47] <_mup_> Bug #956357 was filed: Fix `juju status` bug when working with multiple relations for a service. <juju:In Progress by jimbaker> < https://launchpad.net/bugs/956357 >
[19:52] <_mup_> Bug #956372 was filed: Add `relation-info` to list relation ids associated with a service <juju:New> < https://launchpad.net/bugs/956372 >
[19:56] <_mup_> Bug #956377 was filed: Enable unambiguous reference to relations by using a relation id <juju:In Progress by jimbaker> < https://launchpad.net/bugs/956377 >
[21:15] <jcastro> SpamapS, this might be more of an m_3 question but
[21:16] <jcastro> if I want to see a big list of what charms are currently failing tests and that I should be looking to fix I go to .... ?
[21:17] <SpamapS> hm, why does yaml.dump have to make such ugly yaml?
[21:17] <SpamapS> jcastro: charmtests.markmims.com is what I've been looking at
[21:18] <SpamapS> Looks dead tho
[21:19] <jcastro> bummer
[21:20] <SpamapS> jcastro: Its a single charm, so you can also just deploy it.. ;)
[21:21] <SpamapS> ahh.. default_flow_style=False helps
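For reference, PyYAML's `default_flow_style` argument is what toggles between the brace-heavy flow output SpamapS was complaining about and the line-per-key block style:

```python
import yaml  # PyYAML

doc = {"services": {"wordpress": {"charm": "cs:wordpress", "units": 1}}}

# flow style nests everything in {...} on as few lines as possible
flow = yaml.dump(doc, default_flow_style=True)

# block style emits one key per line with indentation -- far more readable
block = yaml.dump(doc, default_flow_style=False)
```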
[21:22] <SpamapS> jcastro: how much would you love a juju-jitsu subcommand called 'setup-environment' that did Q&A to fill in the blanks?
[21:22] <jcastro> I would have a party
[21:22] <SpamapS> jcastro: polishing it off now
[21:22] <jcastro> hey is this in the PPA yet?
[21:23] <SpamapS> no
[21:23] <SpamapS> still pretty raw.. so.. bzr branch and play..
[21:23] <jcastro> oh dude
[21:23] <jcastro> you put the gource thing in here
[21:24] <SpamapS> jcastro: yes!
[21:24] <SpamapS> jcastro: just run it.. you get a gourcer on your default environment. :)
[21:51] <m_3> jcastro SpamapS charmtests back up
[21:52] <m_3> hit by the overly-strict type checking across the whole repo
[21:52] <SpamapS> m_3: did you pull the latest changes? I fixed most of them over the last week.
[21:53] <m_3> SpamapS: I did... essentially `charm list | grep lp:charms`
[21:54] <SpamapS> m_3: thats part of why I added the new --fix stuff to 'charm update'
[21:55] <m_3> I thought that was for existing local repos.. this wipes and cleans branches
[23:53] <SpamapS> jcastro: there's a little surprise waiting for you on your blog ;)
[23:58] <m_3> hazmat: lots of progress!  charmtests.markmims.com
[23:58] <m_3> looks like most of them are completing the graph runs without hanging
[23:59] <hazmat> m_3, nice