#ubuntu-ensemble 2011-04-25
<niemeyer> Good morning!
<hazmat> niemeyer, welcome back
<hazmat> niemeyer, the sky fell after you left ;-)
<niemeyer> hazmat: Danke!
<hazmat> the clouds rained
<niemeyer> hazmat: Yeah, I was half-following the news.. man, that was interesting
<niemeyer> hazmat: Any news on what actually happened?
<hazmat> niemeyer, some interesting issues came up wrt to ensemble
<hazmat> niemeyer, nutshell us-east-1 data center experienced a multi availability zone outage, affecting anything touching ebs
<niemeyer> hazmat: Do they know why?  Or rather.. have they published why?
<hazmat> with internal network saturation due to ebs replication that impacted pretty much all services
<niemeyer> Ah, ok
<hazmat> niemeyer, they haven't published outside of saying 'network event'
<niemeyer> That's so lame
<hazmat> which triggered the ebs remirroring
<niemeyer> Increases the distrust
<hazmat> niemeyer, i fixed up ensemble's region portability during the outage ;-)
<niemeyer> hazmat: That's awesome :-)
<hazmat> niemeyer, we have some other interesting problems as well, i was just doing a write up for the list
<niemeyer> hazmat: I can imagine some of it
<hazmat> mostly relating to the fact that we're still using ubuntu zk packages
<hazmat> niemeyer, its unrelated
<niemeyer> Ah, ok
<niemeyer> I can't, then :-)
<hazmat> and that causes random segfaults in our agents now
<niemeyer> Ugh
<hazmat> our unit agents have reached sufficient complexity
<hazmat> niemeyer, its actually a really nice opportunity
<niemeyer> Let's see if we can get a hand to get that fixed
<hazmat> to test fault resilience with a  random fault injector
<niemeyer> :-)
<hazmat> niemeyer, two separate multi-step tracks, fix the packaging, fix the fault resilience
<niemeyer> "Fix the fault resilience" feels like a long chain
<hazmat> niemeyer, i paused on the resolved work as well  (two branches in review), i wanted to discuss options for the implementation of the sans-hook transitions
<niemeyer> Ok
<hazmat> niemeyer, well three parts afaics, with some details..
<niemeyer> hazmat: But what's in the queue is good to go, right?
<hazmat> re fault.. agents monitoring launched agents, queue with fs durability
<hazmat> niemeyer, yes
<niemeyer> hazmat: Cool, sound like good topics
<hazmat> niemeyer, i'm gonna run a quick errand, but if you're game in 15m, i'd like to do a quick skype on the resolved stuff
<niemeyer> hazmat: Sounds good
<hazmat> niemeyer i'm on skype, ping me when you're ready
<niemeyer> return self._invoke_lifecycle(self._lifecycle.start, nohooks=True)
<hazmat> niemeyer, def start(self, fire_hooks=True)
<niemeyer> hazmat: ^
<hazmat> niemeyer, yup
<hazmat> def start(self, transition_context)
<hazmat> transition_context == object with origin_state, destination_state, state variables, transition arguments
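The `transition_context` idea sketched above could look roughly like this (a hypothetical illustration; only the attribute names come from the chat, the classes and behavior are assumed):

```python
# Hypothetical sketch of the transition-context idea discussed above.
# Only the attribute names come from the chat; everything else is assumed.

class TransitionContext:
    def __init__(self, origin_state, destination_state,
                 state_variables=None, transition_arguments=None):
        self.origin_state = origin_state
        self.destination_state = destination_state
        self.state_variables = state_variables or {}
        self.transition_arguments = transition_arguments or {}


class UnitLifecycle:
    def start(self, transition_context):
        # e.g. skip hook execution when the transition asks for it
        if not transition_context.transition_arguments.get("fire_hooks", True):
            return "started-without-hooks"
        return "started"


ctx = TransitionContext("down", "started",
                        transition_arguments={"fire_hooks": False})
print(UnitLifecycle().start(ctx))  # -> started-without-hooks
```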
<jimbaker> hazmat, nice analysis of the problem we were seeing
<hazmat> jimbaker, yeah... its nicer to think of as a random fault injector than crappy code ;-)
<jimbaker> :)
<hazmat> hmm... looks like my network provider is blocking post commit message to labix
<niemeyer> hazmat: Huh
<niemeyer> http://blog.rightscale.com/2011/04/25/amazon-ec2-outage-summary-and-lessons-learned/
<niemeyer> Pretty good write up
<hazmat> niemeyer, lots of good write ups, the rightscale has a nice set of links, the joyeur/joyent one is nice as well
<niemeyer> hazmat: Yeah, the RS one feels the closest from what I would expect the *official* post from Amazon to look like
<niemeyer> One of the funny aspects of EBS volumes is that they keep the actual machine disk more available for those that choose to use it
<niemeyer> "How SmugMug survived the Amazonpocalypse >> (...) Third, we don't use Elastic Block Storage (EBS), which is the main component that failed last week."
<niemeyer> Major DUH
<hazmat> niemeyer, yeah.. that and the joyent post got me thinking about rethinking persistence and opening up the choice to formula authors
<hazmat> one thing at a time
<niemeyer> hazmat: Wasn't that the plan since the very early conversations?
<niemeyer> hazmat: IIRC the EBS-only strategy was introduced just because it was a simple way for us to get started without risking blowing people's data
<hazmat> niemeyer, it was, but last i mentioned a month or two back, you were suggesting just using ebs instances and not worrying about it
<hazmat> instead of spec'ing persistent directories, not clear if that was intended from a priority perspective or was a long term plan
<niemeyer> hazmat: For now that still feels like a good plan
<niemeyer> hazmat: I see.. FWIW I don't see inherent problems with supporting non-EBS formulas
<hazmat> niemeyer, the goal i was considering is not requiring ec2 ebs instances for such formulas
<niemeyer> hazmat: THat's what I'm talking about as well
<hazmat> niemeyer, great
<hazmat> niemeyer, is the endpoint to the post commit publishing bot on labix running?
<niemeyer> hazmat: I don't know.. have to check that
<niemeyer> hazmat: FWIW, it's not actually labix.. I just hosted the domain there.. the bot lives within one of the Landscape test servers
<hazmat> niemeyer, ah.. right on i was wondering about that
<SpamapS> hazmat: can you explain why there is an "ensemble ami" ?
<hazmat> SpamapS, good question, ideally there shouldn't be one
<SpamapS> Should be able to do anything w/ cloud-init that you need to do.
<hazmat> SpamapS, we ended up creating one because the bootstrap time was significant if we installed from scratch
<hazmat> ie. downloading java and updating packages, added several minutes to our startup
<hazmat> SpamapS, plus checking out all the ensemble repos
<SpamapS> Yeah.. thats a valid reason to go AMI vs. cloud-init
<hazmat> cloud-init had some failings in that regard as well, wrt to only logging output to the console log in the maverick cycle
<hazmat> we'd be on the machine and wondering what happened for like 10m till it showed in the ec2 get-console-output api
<SpamapS> I've even wondered if it would be a worthy later optimization for machine providers to be able to rebundle after the install hook fires. :)
<hazmat> that's better now
<hazmat> yeah.. unit snapshotting would be nice, and a viable strategy for some services
<hazmat> i'm really interested in serge's work with btrfs and lxc, to be discussed at uds-o
<SpamapS> yeah very cool stuff there
<SpamapS> So the fault tolerance of the agents.. is this just as simple as respawning it if it dies?
<SpamapS> We have this thing in Ubuntu called upstart that does that. ;)
<hazmat> SpamapS, its two things, its making sure state is on disk, and respawning
<SpamapS> bcsaller: hey! I spent this past weekend reading the first section of my new copy of "4 hour body" btw.. Thanks for the recommendation.. great book so far.
<hazmat> SpamapS, but the respawn is potentially a machine not just a process
<SpamapS> hazmat: oh.
<bcsaller> SpamapS: glad you liked it
<hazmat> SpamapS, ie. if we kill  a machine agent, the provisioning agent may have to start a new machine to recover
<hazmat> SpamapS, also upstart is fairly static is my understanding
<hazmat> ie you don't load new services to be managed at runtime
<hazmat> hmm. actually i guess you do
<hazmat> hence package installs using upstart
<SpamapS> hazmat: upstart just makes a best effort at keeping it running. It will give up after a while too.. so its not a perfect solution.
<hazmat> SpamapS, is that based on total number of restarts or restarts within a timespan?
<SpamapS>        respawn limit COUNT INTERVAL
<SpamapS>               Respawning  is  subject to a limit, if the job is respawned more than COUNT times in INTERVAL seconds
<SpamapS> hazmat: 'man 5 init'
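A minimal job file illustrating the `respawn limit` stanza quoted above (a hypothetical sketch; the job name and exec path are made up, only the stanza syntax comes from `man 5 init`):

```
# /etc/init/ensemble-unit-agent.conf -- hypothetical upstart job
description "ensemble unit agent (sketch)"

start on runlevel [2345]
stop on runlevel [!2345]

respawn
# stop respawning if the job dies more than 5 times within 60 seconds
respawn limit 5 60

exec /usr/bin/ensemble-unit-agent
```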
<hazmat> SpamapS, thanks
<robbiew> SpamapS: ping
 * robbiew goes from room to room
<robbiew> trying to get SpamapS attention...must be running a crap irc client
<robbiew> lol
<jimbaker> robbiew, i know what you mean... i like xchat, but it tends to only work w/ one room at a time in terms of being able to see stuff of interest going on
<jimbaker> including being pinged :)
<robbiew> jimbaker: pidgin for the world!!!!!!
<robbiew> :P
<jimbaker> robbiew, not pidgin!!! ok i was unaware of that capability... the naive install i did simply opened lots and lots of windows, i couldn't take it
<SpamapS> I tend to hide my IRC until I am ready to be interrupted
<robbiew> jimbaker: oh..the tabbed view rocks
<robbiew> I put mine on the side
<jimbaker> SpamapS, good rendezvous protocol ;)
<robbiew> if you have a ThinkPad...the ThinkLight plugin is AWESOME
<robbiew> flashes the light when my nick is spoken...so I get notified, while muted ;)
<SpamapS> wow
<SpamapS> that actually sounds cool
<SpamapS> I wonder if I can do that w/ the MBP's light
<niemeyer> I'll get a bite
<jimbaker> we have reinvented the circa 90s office phone
<hazmat> SpamapS, re deb packaging for ensemble deps, i've got a script in ensemble/debian/ec2-build.. not sure if you've looked at it, but it basically just pulls the 3.3 branch of zk and builds the deb on an ec2 machine.. we should be fine with just a deb from the 3.3.3  release tarball.. i'm interested in learning more about what the process is.
<SpamapS> hazmat: Yeah thats cool actually. :) I have to run now, but lets talk in about an hour.
<hazmat> SpamapS, awesome, let's pick it up tomorrow
<hazmat> SpamapS,  we can talk later today.. but as far digging into doing it, tomorrow would be better
<SpamapS> hazmat: ack
<hazmat> hmm. the ubuntu packaging docs are much better than the debian new maintainer guide
<koolhead17> kim0: around?
<koolhead17> hi hazmat
<jimbaker> hazmat, running trunk with test, i'm getting a failure on ensemble.providers.ec2.tests.test_utils.EC2UtilsTest.test_get_machine_options_defaults (http://paste.ubuntu.com/598962/)
<jimbaker> bcsaller, i'm also seeing failures with your refactor-to-yamlstate branch (which is why i looked at trunk and did a fresh install of our dependencies so i could move to python 2.7 in my virtualenv)
<bcsaller> jimbaker: any tracebacks?
<jimbaker> bcsaller, here's the full traceback from test - http://paste.ubuntu.com/598963/
<jimbaker> most of those look spurious
<jimbaker> but in isolating, http://paste.ubuntu.com/598965/ looks relevant to the changes you made
<bcsaller> jim: thanks, I think some of those were in a later branch, I didn't think what I pushed was impacted by that 
<jimbaker> bcsaller, cool. i'm trying to base a branch on refactor-to-yamlstate
<bcsaller> yeah, thats the set taking a dict rather than a YAML dict string change
<jimbaker> i know it's sort of early to do so, but it seemed the easiest way to keep our work from conflicting on HookContext
<jimbaker> right now, i'm just going to hold off on the hook command changes
#ubuntu-ensemble 2011-04-26
<hazmat> jimbaker, hmm.. i might have modified that when i was doing the ec2 testing
<hazmat> yeah.. its just the test that needs a trivial fix
<jimbaker> hazmat, yeah, it looks pretty trivial for sure
 * hazmat continues to try his luck with debs
<SpamapS> hazmat: heh, want some help? :)
<SpamapS> hazmat: I've found 'mk-sbuild' to be super helpful. :) its part of 'ubuntu-dev-tools'
<SpamapS> hazmat: sbuild will take a debian source package and build it in a clean environment, much like your ec2 build, but on the local machine.
<hazmat> SpamapS, cool, is it based on pbuilder?
<hazmat> or i perhaps the other way around
<hazmat> SpamapS, i dug through some more of the ubuntu packaging docs last night, some good material
<hazmat> i think i was making do with debian new maintainer's guide and UTSL on source-packages last time around
<SpamapS> hazmat: pbuilder is, honestly, kind of a pain in the ass
<SpamapS> hazmat: sbuild builds you a chroot that you can jump into when there are build problems..
<SpamapS> I think pbuilder can do that too..
<SpamapS> but I've just had tons more success w/ sbuild than I used to w/ pbuilder
<SpamapS> hazmat: so what are you stuck on?
<hazmat> SpamapS, at the moment just trying to close the loop on getting new images built with the existing packaging stuff
<hazmat> on natty
<niemeyer> Hey guys!
 * niemeyer heads to lunch
<hazmat> using the ec2-build stuff.. i figure after ensemble works, i'll start in on packaging the 'proper' way
<jimbaker> niemeyer_lunch, have a good lunch too :)
<niemeyer> jimbaker: Thanks
<niemeyer> jimbaker: Was good indeed
<jimbaker> niemeyer, i do remember how well we ate in brazil. my wife still talks about the breakfast at our hotel in recife
<jimbaker> (she loves fruit, every kind)
<_mup_> Bug #771348 was filed: status barfs when checking newly deployed service units (no state) <Ensemble:New> < https://launchpad.net/bugs/771348 >
<hazmat> what's the next milestone after budapest? i'd like to start triaging some bugs over to the next release?
<jimbaker> is that the lxc sprint?
<hazmat> jimbaker, well its roughly corresponding to ubuntu releases, perhaps we should cycle over and use oneiric next
<bcsaller> or just call it .next till we have a place-name to use for it
<bcsaller> assuming you can rename a milestone
<hazmat> although we need to get most of the dev release done by the end of the oneiric dev cycle not the official release afaicr
<hazmat> bcsaller, +1
<niemeyer> We could set a milestone to the next sprint after UDS
<hazmat> just created next and later milestones, to be renamed/dated as needed
<_mup_> ensemble/unit-agent-resolved r252 committed by kapil.thangavelu@canonical.com
<_mup_> parameterize unit lifecycle method with fire_hooks boolean, default true
<niemeyer> hazmat: Hmm, that may be a rabbit hole
<niemeyer> hazmat: Let's decide on that
<niemeyer> hazmat: Assigning features to a point in time we don't know quickly becomes a black hole
<niemeyer> hazmat: There's no way to say "No, let's not schedule for that timing"
<hazmat> niemeyer, both the next, later have dates targeted towards the end of the ubuntu release cycle
<hazmat> er. release dev cycle
<hazmat> for the next two releases
<niemeyer> hazmat: We need shorter milestones
<hazmat> niemeyer, two month cycles?
<niemeyer> hazmat: Yeah, something around that sounds better
<hazmat> continuous release ;-)
<niemeyer> hazmat: Yeah, what are we doing next month in our continuous release? ;-)
<_mup_> ensemble/ec2-deb-build r211 committed by kapil.thangavelu@canonical.com
<_mup_> switch ec2-build over to natty
<_mup_> ensemble/ec2-deb-build r212 committed by kapil.thangavelu@canonical.com
<_mup_> additional zk deb dep, installed by hand since using dpkg to install custom binary debs atm.
 * hazmat lunches
<niemeyer> hazmat: Enjoy
<hazmat> hmm.. still not working
<_mup_> ensemble/unit-agent-resolved r253 committed by kapil.thangavelu@canonical.com
<_mup_> rename state variables to transition variables, pass transition variables to transition
<jimbaker> hazmat, re status barfing, my reading of the logic should be that the relation_status dict should never get into a bad state that yaml or json cannot serialize
<jimbaker> but obviously something did happen
<jimbaker> i wonder if it were based on another exception than the one that's being caught?
<hazmat> jimbaker, i had tracked it down last week and modified it locally 
<hazmat> i don't recall the exact detail though atm.. it was being caught
<jimbaker> hazmat, weird, because that codepath is definitely being exercised
 * hazmat looks for the diff
<koolhead17> hi all
<jimbaker> if we remove the logic, then test_misconfigured_provider will cause problems in the status collection
<hazmat> hmm.. blew it away
<hazmat> jimbaker, misconfigured provider is testing something different
<hazmat> maybe not.. rather hard to tell what it's actually testing
<hazmat> jimbaker, yeah.. that test is testing that machine id is known to the provider
<hazmat> nothing to do with the unit relation state
<jimbaker> hazmat, probably want to add a test to directly test this case
<jimbaker> hazmat, i reverified that test_misconfigured_provider will do this. maybe it really needs to be split into a separate test to show this more directly with respect to a unit state without the corresponding unit relation state
<hazmat> jimbaker, yeah.. a docstring describing what the test is testing would be helpful
<niemeyer> hazmat: set_resolved_relation({"db": Retry})
<niemeyer> hazmat: set_resolved_relation({"db": DontRetry})
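The API shape niemeyer proposes above might look something like this (a hypothetical sketch; `RETRY`/`NO_RETRY` and `ServiceUnitState` are assumed names, only `set_resolved_relation` and its dict argument come from the chat):

```python
# Hypothetical sketch of a relation-keyed resolved API; all names assumed.

RETRY = "retry"
NO_RETRY = "no-retry"


class ServiceUnitState:
    def __init__(self):
        self._resolved_relations = {}

    def set_resolved_relation(self, relations):
        """Mark the named relations resolved, keyed by relation name."""
        for name, retry in relations.items():
            if retry not in (RETRY, NO_RETRY):
                raise ValueError("invalid retry flag: %r" % (retry,))
            self._resolved_relations[name] = retry

    def get_resolved_relations(self):
        return dict(self._resolved_relations)


unit = ServiceUnitState()
unit.set_resolved_relation({"db": RETRY})
```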
<jimbaker> bcsaller, here's the pastie for my test run of refactor-to-yamlstate: http://paste.ubuntu.com/599439/
<bcsaller> thanks, I'll look into it
<jimbaker> bcsaller, this is to r198
<hazmat> its interesting watching both plone and openstack go through the 'choice' of settling in github
<hazmat> simultaneously
<niemeyer> hazmat: What was the branch you said should hold on reviews?
<niemeyer> hazmat: ensemble-alternate-regions?
<jimbaker> niemeyer, that's correct
<niemeyer> jimbaker, hazmat: Ok, I'm moving it to WIP then
<hazmat> niemeyer, thanks.
<niemeyer> hazmat: No problem
<_mup_> ensemble/ec2-deb-build r213 committed by kapil.thangavelu@canonical.com
<_mup_> try a new image
<hazmat> woot! ensemble works again
<jimbaker> hazmat, good to hear. let me try it out too
<hazmat> jimbaker, if you use ec2-deb-build and spec it as your ensemble-branch, that's what's needed at the moment.. only us-east-1
<hazmat> i'm remastering images in other regions atm
<jimbaker> hazmat, that's what i figured, thanks
<hazmat> SpamapS, do you use mk-sbuild with lvm or btrfs?
<jimbaker> hazmat, looking good: http://ec2-50-16-177-41.compute-1.amazonaws.com/
<hazmat> jimbaker, good stuff
<hazmat> SpamapS, so with mk-sbuild i take it that it reuses a snapshot as a pristine image for subsequent builds
<_mup_> ensemble/ec2-deb-build r214 committed by kapil.thangavelu@canonical.com
<_mup_> ensemble natty images w/ updated zk in all five aws regions
<SpamapS> hazmat: I use it as aufs
<SpamapS> s/as/with/
<SpamapS> hazmat: which is its default
<SpamapS> hazmat: no extra setup required. :)
<hazmat> cool
<SpamapS> hazmat: granted, lvm and btrfs will be faster, and not have a filename length limit so short as aufs.
#ubuntu-ensemble 2011-04-27
 * hazmat pokes at upstart
<hazmat> manual respawn
<hazmat> SpamapS, re upstart.. i was experimenting and had this https://pastebin.canonical.com/46732/
<hazmat> seems like i'm missing something
<hazmat> upstart sees the job but won't run it
<SpamapS> hazmat: initctl reload-configuration
<SpamapS> hazmat: you shouldn't have to do that. But I think there may be something wrong w/ the inotify
<hazmat> SpamapS, thanks
<hazmat> SpamapS, the integration afaics at least for units, would be the machine agent dropping in new files per unit.. i looked at using instances, but we need several parameters.. still experimenting
<hazmat> SpamapS, actually that was user error.. missed a sudo
<hazmat> i think the inotify did pick it up
<hazmat> hmm
<hazmat> hmm.. i guess we'll still need to pass env vars for some scenarios where the value shouldn't be in a ps aux listing.
<hazmat> but upstart definitely adds value
<SpamapS> hazmat: Not sure I understand. You have ensemble's agent that needs to be running on the machine. Are you saying there are times you'll want two of them running?
<niemeyer> Morning!
<niemeyer> Woot, empty review column again
<hazmat> niemeyer, nice
<_mup_> ensemble/ensemble-alternate-regions r210 committed by kapil.thangavelu@canonical.com
<_mup_> additional test verifying precedence of default-image-id over region
<_mup_> ensemble/ensemble-alternate-regions r211 committed by kapil.thangavelu@canonical.com
<_mup_> fix config test, to use prepopulated data for authorized key, revert change to provider serialization, as authorized key is still required.
<_mup_> ensemble/ensemble-alternate-regions r212 committed by kapil.thangavelu@canonical.com
<_mup_> rename resolve_region_uri/get_region_uri, yank the static region url map for string interpolation template
<_mup_> ensemble/ensemble-alternate-regions r213 committed by kapil.thangavelu@canonical.com
<_mup_> merge trunk
<_mup_> ensemble/expose-provisioning r210 committed by jim.baker@canonical.com
<_mup_> Watch services in provisioning
<_mup_> ensemble/ensemble-alternate-regions r214 committed by kapil.thangavelu@canonical.com
<_mup_> document new ami per region, and precedence of ec2-uri over region.
<_mup_> ensemble/trunk r210 committed by kapil.thangavelu@canonical.com
<_mup_> merge ensemble-alternate-regions [r=niemeyer][f=768320]
<_mup_> Allows for ensemble to work in different aws regions via
<_mup_> environments.yaml specification of an environment machine provider
<_mup_> region (us-east-1, us-west-1, eu-west-1, ap-northeast-1,
<_mup_> ap-southeast-1). Also validates allowed region values as part of the
<_mup_> ec2 environment schema.
<_mup_> ensemble/expose-provisioning r211 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<_mup_> ensemble/expose-dummy-provider r210 committed by jim.baker@canonical.com
<_mup_> Cleanup
<_mup_> ensemble/expose-dummy-provider r211 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<_mup_> ensemble/ec2-deb-build r215 committed by kapil.thangavelu@canonical.com
<_mup_> merge ensemble-alternate-regions
<_mup_> ensemble/ec2-deb-build r216 committed by kapil.thangavelu@canonical.com
<_mup_> merge trunk
<_mup_> ensemble/expose-provisioning r212 committed by jim.baker@canonical.com
<_mup_> Merged expose-dummy-provider
<_mup_> ensemble/ec2-deb-build r217 committed by kapil.thangavelu@canonical.com
<_mup_> extract ami to constant for debian/ec2-build
<_mup_> ensemble/trunk r211 committed by kapil.thangavelu@canonical.com
<_mup_> merge ec2-deb-build [r=niemeyer][f=769286]
<_mup_> Updates all aws regions with new ensemble images, that utilize latest
<_mup_> zookeeper release (3.3.3), this fixes several zookeeper python binding
<_mup_> problems that caused agents to crash.
<SpamapS> Dunno if you guys caught this, but it seems netns (as in, the network isolation part of lxc) has been removed from the lucid kernel in the latest SRU
<SpamapS> Something to consider when testing LXC on lucid.. need to get the maverick or (better) natty backport kernel in order to have proper LXC
<_mup_> ensemble/resolved-spec r190 committed by kapil.thangavelu@canonical.com
<_mup_> resolved spec fixes per review.
<_mup_> ensemble/trunk-merge r188 committed by kapil.thangavelu@canonical.com
<_mup_> merge trunk
<_mup_> ensemble/resolved-spec r192 committed by kapil.thangavelu@canonical.com
<_mup_> fix an ec2 provider test pointing to an old ami :-(
<hazmat> SpamapS, noted, though we're not really on lucid anymore at this point... everything is natty based atm
<SpamapS> hazmat: hahahahahahahahahahahahahahahahaha
<SpamapS> hazmat: sorry I just cackle when people think serious users will use anything non-LTS ;)
<SpamapS> *even* in the cloud
<hazmat> SpamapS, hmm.. true, but until we get ourselves working with the std ubuntu amis, they don't have many options
<SpamapS> hazmat: I think its safe to say Ensemble will become what its supposed to become around 12.04 .. but 10.04 is going to be where people start hacking on it.
<SpamapS> Sounds to me like ensemble should actually become part of the standard AMI's that we produce.
<hazmat> SpamapS, we could produce/set those for lucid
<hazmat> SpamapS, yeah.. that make things a bit easier.. 
<SpamapS> Once its in main, we can do that.
<SpamapS> Until then, ensemble will have to cheat a little.
<hazmat> in for a penny, in for a pound ;-)
<SpamapS> Either w/ cloud-init or its own "speshul" ami's
<SpamapS> How's the packaging going?
<SpamapS> Sounds like the zookeeper bits only affect the ami's you're working on..
<hazmat> SpamapS, i finished by 'special' as in special ed.. zookeeper debs.. i need to go make and start fresh on a  clean deb for zk 3.3.3
<hazmat> s/by/my
<SpamapS> we're only using the special AMI's for the bootstrap node right?
<hazmat> SpamapS, all the images have been remastered, and ensemble works now in different regions
<hazmat> SpamapS, no.. we're using them for all nodes..
<SpamapS> thats hot. ;)
<SpamapS> oh thats not
<hazmat> its more than a bit hokey.. 
<hazmat> we don't need zk running on the other nodes
<SpamapS> is that just because we haven't gotten around to the bit that will feed in cloud-config stuff for the agents?
<hazmat> nor java installed
<_mup_> ensemble/expose-ensemble-commands r215 committed by jim.baker@canonical.com
<_mup_> Addressed review comments
<hazmat> SpamapS, we have that, and originally we were using cloud-init for driving everything at instance startup.. but doing things like checking out ensemble, txzookeeper from trunk are still in there and take time.
<hazmat> so same basic argument as the bootstrap.. 
<hazmat> once we get zk, ensemble, txzoo in a ppa, we can move to just installing them in cloud-init 
<hazmat> and ditch the 'special' amis
<SpamapS> hazmat: checking out? Come on.. you've got a PPA and a .deb :)
<hazmat> the latency for the bootstrap would still exist (installing java stack and friends).
<SpamapS> hazmat: we have them in ~ensemble/ppa now
<hazmat> SpamapS, :-) true, but at the moment its very helpful to ensure trunk is always working
<_mup_> ensemble/expose-ensemble-commands r216 committed by jim.baker@canonical.com
<_mup_> Modified too_many_args tests to update to argparse 1.2/python 2.7 error messages
<hazmat> SpamapS, yeah.. we should discuss doing a real release based on that ppa for uds.
<SpamapS> hazmat: this part really could get tricky
<hazmat> living on trunk is probably more dev oriented, but considering how young and rapid the pace, its still our quickest way forward, but its not clear if we want to subject early adopters to the same.
<SpamapS> hazmat: Its important that the bootstrap node know it can talk to the spawned machines.. so it should be very careful how and where it tells the machines to grab their agent
<hazmat> SpamapS, indeed, that is a problem now
<hazmat> trunk skew between deploys
<SpamapS> I think the default would be to have the bootstrap node try to apt-get install ensemble=${binary:Version}
<hazmat> i've got an open ticket that we should at least be recording the rev/release number per machine so its identifiable.
<SpamapS> And have an environment and service override available for a different version if need be
<_mup_> ensemble/trunk r212 committed by kapil.thangavelu@canonical.com
<_mup_> merge resolved-spec [r=niemeyer][f=767964]
<_mup_> ensemble resolved subcommand user documentation specification. 
<SpamapS> hazmat: one interesting idea is to have the bootstrap node mirror the PPA at bootstrap time
<SpamapS> hazmat: that would provide stasis for the environment.. just tell all nodes to pull from it.
<SpamapS> hazmat: but that would also mean it becomes a SPOF
<SpamapS> hmmm I like this idea tho
<SpamapS> ensemble bootstrap --mirror=ppa:ensemble/ppa
<SpamapS> hazmat: technically you could also have the bootstrap machine *create* an ami for the environment from that information alone
<hazmat> SpamapS, if we can record the revno of the bootstrap node, we can use that to freeze subsequent deploys from an lp branch
<hazmat> against a ppa its not as clear, do ppas always keep historical debs..
<hazmat> time for some lunch errands
<_mup_> ensemble/expose-ensemble-commands r217 committed by jim.baker@canonical.com
<_mup_> PEP8
<_mup_> ensemble/expose-ensemble-commands r218 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<_mup_> ensemble/trunk r213 committed by jim.baker@canonical.com
<_mup_> merged expose-ensemble-commands [r=niemeyer][f=767407]
<_mup_> Implements "ensemble expose" and "ensemble unexpose" subcommands,
<_mup_> which set and remove a flag znode, **/services/<internal service
<_mup_> id>/exposed**, respectively.
<_mup_> ensemble/expose-provisioning r213 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<hazmat> i'm starting to feel like we should have a more general way of encapsulating these state requests, (debug/expose/resolved/logging/etc).
<hazmat> we're adding lots of LOC reimplementing the same pattern over and over
<hazmat> niemeyer, ^
<niemeyer> hazmat: Which state requests, more specifically?
<niemeyer> hazmat: Getting, saving, watching?
<hazmat> niemeyer, debug/expose/resolved/logging are all marking states from the cli to denote a user request, that is completed by agent, get/set/clear/watch are common to all of them
<niemeyer> hazmat: Indeed
<niemeyer> hazmat: Good part of it is already abstracted away, though
<niemeyer> hazmat: yield self._client.create(self._exposed_path)
<hazmat> well ideally yaml_state.flush()
<niemeyer> hazmat: Agreed, anyway.  I'm just saying that we're already on the way to abstracting them away
<niemeyer> hazmat: Exactly.. the above is simpler still for this case, though
<niemeyer> hazmat: Watching could get some love next
<hazmat> niemeyer, maybe.. the duplication still feels like its missing an encapsulation pattern
<niemeyer> hazmat: YAMLState was already awesome in that regard
<hazmat> true
<niemeyer> hazmat: Hmmm.. YAMLState.watch? :-)
 * niemeyer teases hazmat
<hazmat> niemeyer, perhaps.. i'm thinking of some sort of pattern where we could do   _debug_api = DebugRequest() \n set_debug = _debug_api.set(callback)  .. at the class level with watches etc.
<hazmat> what a good pattern here isn't entirely clear.
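One possible shape for the shared get/set/clear/watch pattern being discussed (a hypothetical sketch; a plain dict stands in for the ZooKeeper client, and all names here are assumed):

```python
# Hypothetical sketch of a generic "state request" encapsulation for flags
# like debug/expose/resolved/logging. A dict stands in for the real
# ZooKeeper client; all names are assumed.

class StateRequest:
    """Encapsulates one user-request flag stored at a single node path."""

    def __init__(self, client, path):
        self._client = client
        self._path = path
        self._watchers = []

    def set(self, value=True):
        self._client[self._path] = value
        for callback in self._watchers:
            callback(value)

    def get(self):
        return self._client.get(self._path)

    def clear(self):
        self._client.pop(self._path, None)
        for callback in self._watchers:
            callback(None)

    def watch(self, callback):
        self._watchers.append(callback)


client = {}  # stand-in for a real ZooKeeper session
exposed = StateRequest(client, "/services/wordpress/exposed")
seen = []
exposed.watch(seen.append)
exposed.set()
exposed.clear()
print(seen)  # -> [True, None]
```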
<jimbaker> niemeyer, indeed, i need something like a watch for the yaml state corresponding to opened ports
<niemeyer> hazmat: I was pondering a bit over jimbaker's branch too
<hazmat> we need watch callback separation and possibly set pre flush callbacks for validation
<niemeyer> hazmat: The detail is that there's still some custom logic here and there
<hazmat> niemeyer, yeah.. the resulting encapsulation would need to factor that out
<niemeyer> hazmat: Like, "Oh, that's fine to ignore on conflicts..  ah, that's what the watch should return.." etc
<niemeyer> hazmat: Right, and that's another abstraction layer
<jimbaker> i see that kim0|vacation is presumably on vacation
<hazmat> either as subclass behavior to a generic yaml state request api or via pre invoke, and callbacks
<hazmat> jimbaker, yeah.. for this week
<niemeyer> hazmat: There's nothing that another abstraction layer can't solve.. except the excess of abstraction layers..  so that saying goes. :-)
<hazmat> niemeyer, which is solved by another layer ;-)
<niemeyer> jimbaker: Hmmm, that's a clever assumption.. ;-D
<hazmat> slightly offtopic... USG data center consolidation, move 2 clouds  http://online.wsj.com/article/SB10001424052748704729304576287431386089352.html?mod=WSJ_newsreel_politics 
<SpamapS> hazmat: notice that there is no race between upstart's inotify and starting the job. start automatically looks for the config file before trying to start the job.
<hazmat> SpamapS, nice, that's good to know, thanks
<niemeyer> http://blog.rightscale.com/2011/04/25/amazon-ec2-outage-summary-and-lessons-learned/
<hazmat> hmm..
<hazmat> looks like natty will need a ppa kernel for lxc-attach
<_mup_> ensemble/resolved-state-api r197 committed by kapil.thangavelu@canonical.com
<_mup_> use properties for computed resolved paths
<_mup_> ensemble/expose-provisioning r214 committed by jim.baker@canonical.com
<_mup_> Setup watches incrementally from a service being exposed to ports being opened
#ubuntu-ensemble 2011-04-28
 * niemeyer => bed.. night all!
<_mup_> ensemble/resolved-state-api r198 committed by kapil.thangavelu@canonical.com
<_mup_> refactor resolved state api to have a more 'friendly' api feel, via relation-set-resolved taking relation-name instead of internal id.
<hazmat> niemeyer, on the resolved state refactor.. the relation resolved set now takes unit names, but the inverse, get still returns internal ids, but its primary consumer is the unit agent.
<niemeyer> hazmat: Hmm.. that feels a bit icky
<hazmat> niemeyer, on the icky front, it also means pulling in some ensemble.unit imports into state.service
<niemeyer> hazmat: Ouch
<niemeyer> hazmat: That's a big ickiness :)
<hazmat> niemeyer, we need to dereference to properly validate
<niemeyer> hazmat: How do you mean?
<hazmat> niemeyer, well, a rel name by itself doesn't tell us the workflow state of the relation, which we need to verify if we want to provide feedback to the user
<hazmat> i've got the feedback aspects of the set worked out, and added some additional unit relation accessors to the unit state.
<hazmat> but the notion that set/get are not inverses is slightly odd
<niemeyer> hazmat: That seems to indicate a more serious problem in our abstractions 
<niemeyer> hazmat: ensemble.unit should be using ensemble.state
<niemeyer> hazmat: Not the other way around
<hazmat> niemeyer, oddly enough no circular imports at the moment, unit.workflow is self contained atm
<hazmat> i had to pull in lifecycle for the unit tests though
<hazmat> in general i agree state should be foundational layer
<niemeyer> hazmat: Maybe it should be moved into the state then..
<hazmat> niemeyer, it has no state manipulations.
<niemeyer> hazmat: There's stuff in ensemble.hooks which is fiddling with nodes directly too
<niemeyer> hazmat: That's another one we have to fix
<niemeyer> hazmat: Hmmm
<hazmat> niemeyer, the only stuff in hooks that manipulates state is via relationcontext atm
<niemeyer> hazmat: Yes
<hazmat> niemeyer, i don't see anything wrong with ensemble.hooks consuming a state api
<niemeyer> hazmat: Sorry, I'm on crack
<niemeyer> hazmat: I was thinking of ensemble/state/hooks.py, which is already in state
<hazmat> ah.. right
<niemeyer> Anyway, workflow.. hmmm
 * hazmat pushes his branch
<niemeyer>         yield retry_change(self._client, self.zk_state_path, update_state)
<hazmat> niemeyer, it's at lp:~hazmat/ensemble/resolved-state-api if you're curious
<niemeyer>             data, stat = yield self._client.get(self.zk_state_path)
<niemeyer> Indeed workflow should be better integrated in the state
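The `retry_change` call pasted above is an optimistic-concurrency helper: read the node, apply a pure change function, and write back only if the version hasn't moved, retrying on conflict. A minimal sketch of that pattern, assuming an in-memory versioned store rather than the real txzookeeper client (`FakeClient`, `VersionConflict`, and the `get`/`set` signatures here are illustrative stand-ins):

```python
class VersionConflict(Exception):
    pass


class FakeClient:
    """In-memory stand-in for a ZooKeeper client with versioned nodes."""

    def __init__(self):
        self.nodes = {}  # path -> (data, version)

    def get(self, path):
        return self.nodes.get(path, ("", 0))

    def set(self, path, data, version):
        _, current = self.nodes.get(path, ("", 0))
        if version != current:
            # someone else wrote since our read; caller must retry
            raise VersionConflict(path)
        self.nodes[path] = (data, current + 1)


def retry_change(client, path, change_function, max_retries=10):
    """Apply change_function(old_data) and write back, retrying on conflict."""
    for _ in range(max_retries):
        data, version = client.get(path)
        new_data = change_function(data)
        try:
            client.set(path, new_data, version)
            return new_data
        except VersionConflict:
            continue  # re-read the fresh data and try again
    raise RuntimeError("too many concurrent modifications at %s" % path)
```

The point of the pattern is that the change function is re-run against the freshly read data on each attempt, so concurrent writers converge instead of clobbering each other.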
<niemeyer> hazmat: Checking it out
<niemeyer> hazmat: Yeah, I think we should talk about some cleaning up there indeed
<niemeyer> hazmat: The issues we're seeing in this branch are not a problem in the branch itself
<hazmat> niemeyer, i'm concerned as well about meeting the deadlines for the features entailed by the branch
<niemeyer> hazmat: It's just a side effect of some pending polishing after some organic growth which took place
<hazmat> given some refactoring work to fix things up
<hazmat> niemeyer, yeah.. the testing situation in test_service also needs a fix-up
<niemeyer> hazmat: Yeah, maybe we should just merge your previous version with this specific aspect uncovered, to get the feature itself in, and then polish a bit the organization
<hazmat> the relation endpoint stuff never cleaned up some of the tests, so there are like a half-dozen ways to set things up
<niemeyer> hazmat: For instance, it's bizarre that is_relation_running is a method
<hazmat> a function you mean and not a method?
<niemeyer> hazmat: We have a RelationStateManager, and a RelationState
<niemeyer> hazmat: Sorry, yeah
<hazmat> niemeyer, so we should have time at uds then to discuss some of these issues
<niemeyer> hazmat: I'm not sure there's a lot to debate there.. we just have to integrate that aspect better into the state
<niemeyer> hazmat: A UnitState should be aware of its workflow state.. a RelationState should be aware of its workflow state
<niemeyer> hazmat: The workflow stuff itself seems nice on itself.. it's just too isolated from the rest
<niemeyer> hazmat: The question is: how to merge these concepts nicely?
<hazmat> niemeyer,  the workflow just needs a get/set api on the state to push the workflow for zk persistence. There is an analogous api for unit relation settings data (get/set on the unit relation) although in practice that's only used by tests
<niemeyer> hazmat: I think it needs a bit more than that.. e.g. "is running" is something we should be able to inquire in a UnitState
<niemeyer> hazmat: Otherwise it becomes txzookeeper2
<hazmat> niemeyer, in that case we need to fold workflow aspects into state
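The shape being argued for here — a `UnitState` that can answer workflow questions like "is running" itself, instead of `ensemble.state` importing from `ensemble.unit` — might look roughly like this. The class and method names are hypothetical, and a plain dict stands in for the ZooKeeper-backed store:

```python
class UnitState:
    """Sketch: workflow state lives behind the state API, so callers
    inquire on the UnitState directly rather than reaching into the
    workflow layer."""

    def __init__(self, storage, unit_name):
        self._storage = storage      # dict stand-in for the zk-backed store
        self._unit_name = unit_name

    def set_workflow_state(self, state):
        # persist the workflow state under this unit's node
        self._storage[self._unit_name] = state

    def get_workflow_state(self):
        return self._storage.get(self._unit_name, "uninitialized")

    def is_running(self):
        # "is running" becomes an inquiry on the state object itself,
        # not a helper living in a separate workflow module
        return self.get_workflow_state() == "started"
```

This keeps the dependency arrow pointing the right way: `ensemble.unit` consumes `ensemble.state`, never the reverse.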
<hazmat> niemeyer, so i need a path forward, i can revert back and tackle the rest of the review minus this comment, or i can push it forward (the set change is largely done).
<hazmat> if revert, i'll file an issue regarding this for future
<niemeyer> hazmat: The former sounds like a better plan
<hazmat> niemeyer, okay.. sounds good
<niemeyer> hazmat: If we're compromising somewhere, the id issue sounds relatively minor compared to the other inconsistencies being introduced
<niemeyer> hazmat: It'd be good to sort the overall problem soonish, though
<niemeyer> hazmat: This will only get worse
<hazmat> niemeyer, agreed re id minor to the others
<hazmat> niemeyer, service.py/test_service.py are also getting a bit large.. i think the notion of encapsulating the get/set/clear/watch api into some sort of StateRequestProtocol will help clean that up
<niemeyer> hazmat: I think we already have a good part of it encapsulated into YAMLState
<niemeyer> hazmat: This is effectively read/write/delete/watch
<niemeyer> hazmat: read/write are ready
<_mup_> Bug #772331 was filed: Incorporate workflow more closely into state <Ensemble:New> < https://launchpad.net/bugs/772331 >
<_mup_> Bug #772332 was filed: Use a StateRequestProtocol utilizing yamlstate encapsulate get/set/del/watch apis in unit state <Ensemble:New> < https://launchpad.net/bugs/772332 >
<hazmat> niemeyer, agreed, i just want to take a next step that will allow us to move the state based request/response protocols into their own classes. and expose via composition on the unit
<niemeyer> hazmat: YAMLState is already its own class
<hazmat> niemeyer, yamlstate is a good start and would be used by a protocol impl, but the usage scenarios are different, yaml state represents a concurrent r/w with dict merge, the  protocol implementations are geared more towards request/response.. internally i'm sure they could utilize a yamlstate, but they have additional notions of the api/validation/transformation they want to expose
<niemeyer> hazmat: Indeed, and that's what the methods in the respective state classes do
<niemeyer> hazmat: Note that the purpose of the state layer is precisely to give semantic meaning to what otherwise is just storage
<hazmat> niemeyer, sure, and the protocol implementations would be in the state package; part of my reasoning is a concern with some of the practical parts of how big the unit state class and its tests are getting.
<niemeyer> hazmat: Yeah, I understand, and I agree there are improvements to be made.. just trying to provide some feedback on them
<niemeyer> hazmat: We shouldn't end up with UnitState.resolved.set(data), UnitState.resolved.get() => data
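As a rough model of the YAMLState behavior under discussion — read a serialized node into a dict, mutate locally, and have `write()` merge the local changes over whatever is currently stored, creating the node if needed. `json` stands in for YAML serialization here, and `MiniYAMLState` plus its dict-backed store are toy stand-ins, not the real class:

```python
import json


class MiniYAMLState:
    """Toy model of the YAMLState idea: concurrent read/write with a
    dict merge on write. json stands in for YAML serialization."""

    def __init__(self, store, path):
        self._store = store   # dict stand-in for zookeeper nodes
        self._path = path
        self._local = {}

    def read(self, required=False):
        raw = self._store.get(self._path)
        if raw is None:
            if required:
                # mirrors read(required=True) raising when the node
                # doesn't exist, for callers that expect that error
                raise KeyError("node not found: %s" % self._path)
            self._local = {}
        else:
            self._local = json.loads(raw)

    def get(self, key, default=None):
        return self._local.get(key, default)

    def __setitem__(self, key, value):
        self._local[key] = value

    def write(self):
        # merge local changes over the current contents, creating the
        # node if it does not exist yet
        current = json.loads(self._store.get(self._path, "{}"))
        current.update(self._local)
        self._store[self._path] = json.dumps(current)
```

The request/response protocols hazmat describes could sit on top of something like this, adding their own validation and transformation on either side of the read/merge/write cycle.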
<_mup_> ensemble/resolved-state-api r199 committed by kapil.thangavelu@canonical.com
<_mup_> revert the last commit, removing relation name args to set relation resolved
<_mup_> ensemble/resolved-state-api r200 committed by kapil.thangavelu@canonical.com
<_mup_> remove extraneous assertidentical, simplify dict merge, use equality state matching for is_x_running methods.
<_mup_> ensemble/resolved-state-api r201 committed by kapil.thangavelu@canonical.com
<_mup_> use constant instead of boolean to make the usage clearer for retry_hooks param to unit/relation resolved.
<_mup_> ensemble/ensemble-resolved r253 committed by kapil.thangavelu@canonical.com
<_mup_> use new retry hook constants
<_mup_> ensemble/expose-provisioning r215 committed by jim.baker@canonical.com
<_mup_> Collect together requested opened ports by machine for eventual provisioning changes
<_mup_> ensemble/ensemble-resolved r254 committed by kapil.thangavelu@canonical.com
<_mup_> address syntactic review comments
<_mup_> ensemble/refactor-to-yamlstate r199 committed by bcsaller@gmail.com
<_mup_> Changes per review
<_mup_> Dict copy returned again rather than YAMLState
<_mup_> Re-enabled tests of dict copies.
<_mup_> Fix remaining set_data as string calls (converted to dict)
<_mup_> ensemble/refactor-to-yamlstate r200 committed by bcsaller@gmail.com
<_mup_> merge trunk
<niemeyer> bcsaller: re.
<niemeyer> """
<niemeyer> I allow YAMLState.read(required=True) which will throw an
<niemeyer> StateNotFound exception which the higher level API can then translate
<niemeyer> """
<niemeyer> bcsaller: The point was specifically about the way in which the code is organized
<niemeyer> bcsaller: Not about the outside interface
<niemeyer> bcsaller: test + get leaves a race condition at the +
<niemeyer> bcsaller: The suggestion was to just do try: get except: do whatever
<bcsaller> gustavo: YAMLState needed to use the product of that get though and didn't currently take its state via __init__
<niemeyer> bcsaller: I don't understand.. just do it, and catch an error in case it fails?
<niemeyer> bcsaller: Every try/except involves using the product of the get
<bcsaller> I understood it as moving from "check existence or throw an error" to "get state and use it if possible or throw an error". I made it so yamlstate can get the state and use it
<bcsaller> the get outside of yamlstate didn't avoid the race if I didn't use that data
<bcsaller> and yamlstate was set up to use the data after read
<niemeyer> bcsaller:
<niemeyer> try:
<niemeyer>     state = YAMLState(client, path)
<niemeyer> except zookeeper.NoNodeException:
<niemeyer>     # not found
<niemeyer> bcsaller: ?
<bcsaller> YAMLState.__init__ isn't async and doesn't check the node
<bcsaller> nor does read normally require it, but there were cases where the existing apis sometimes expect it or check for a specific exception
<bcsaller> in those cases this lets read raise an error if the node didn't exist
<niemeyer> bcsaller: Ok, sorry..
<bcsaller> which for YAMLState isn't normally an issue as it will try to create it on write
<niemeyer> state = YAMLState()
<niemeyer> try:
<niemeyer>     state.read()
<niemeyer>     yield state.read() # actually
<niemeyer> except zookeeper.NoNodeException: ...
<niemeyer> ?
<bcsaller> except read didn't throw the error
<bcsaller> I made that behavior optional 
<bcsaller> write would just try to create/merge to the path when it was done 
<niemeyer> bcsaller: Ok, but the equivalent..
<bcsaller> so the exception was eaten
<niemeyer> bcsaller: Hmmm
<niemeyer> bcsaller: I see what you mean, sorry
<niemeyer> bcsaller: I misunderstood the comment
<niemeyer> bcsaller: Sounds fine
<bcsaller> there is a question if that type of test is needed at all now though, if the behavior is to create the node in write 
<bcsaller> gustavo: maybe I can make the docstring clearer? if you have a suggestion
<niemeyer> bcsaller: Maybe it's not.. the problem was just the race
<niemeyer> bcsaller: I haven't seen the code, which is perhaps the issue
<niemeyer> bcsaller: (changes were not pushed)
<bcsaller> gustavo: pushed
<bcsaller> includes a trunk merge, I was rechecking it for any errors
<niemeyer> bcsaller: pulling
<niemeyer> bcsaller: Yeah, it's fine as it is, sorry for jumping to conclusions
<bcsaller> np
<bcsaller> keeping me honest ;)
<niemeyer> bcsaller: You've mentioned changes in YAMLState and it felt like the actual race wasn't addressed, which clearly is not the case
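The race resolved in this exchange is the classic test-then-act window: an exists-check followed by a get is not atomic, since the node can vanish (or appear) between the two calls. Just doing the get and catching the not-found error removes the window, which is the shape niemeyer sketched. A minimal illustration, where `NodeStore` and `NoNodeError` are illustrative stand-ins for the real zookeeper bindings:

```python
class NoNodeError(Exception):
    pass


class NodeStore:
    """Stand-in for a ZooKeeper-like node store."""

    def __init__(self):
        self._nodes = {}

    def exists(self, path):
        # a separate exists() check invites the race: the answer can be
        # stale by the time the caller acts on it
        return path in self._nodes

    def get(self, path):
        try:
            return self._nodes[path]
        except KeyError:
            raise NoNodeError(path)

    def create(self, path, data):
        self._nodes[path] = data


def read_state(store, path, default=None):
    # EAFP: one call, one failure mode -- no test-then-get window
    try:
        return store.get(path)
    except NoNodeError:
        return default
```

The same reasoning motivates letting `read(required=True)` raise rather than having callers probe for the node first.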
<niemeyer> bcsaller: So, removing the test.. hmmm
<niemeyer> bcsaller: Do you recall where the relation data is first initialized?
<bcsaller> you mean hook._setup_relation_data or state.relation._add_relation_state?
<bcsaller> (I didn't specify that dotpath very well there, left out the class, but I think you know what I mean)
 * bcsaller fetches coffee
<niemeyer> bcsaller: No, I mean the relation state itself
<niemeyer> bcsaller: We're assuming it exists
<niemeyer> bcsaller: So someone else must have created it
<niemeyer> bcsaller: We can't change that logic without understanding that
<niemeyer> bcsaller: Btw, it looks like none of the changes in protocol.py are required
<niemeyer> bcsaller: It's probably just left over from the previous version
<niemeyer> bcsaller: and the changes to YAMLState are also pending tests
<bcsaller> I did add one test for the required flag raising an exception
 * bcsaller checks it over again 
<niemeyer> bcsaller: Heh, yeah I'm on crack again
<_mup_> ensemble/expose-provisioning r216 committed by jim.baker@canonical.com
<_mup_> Remove requested opened ports upon a service no longer being exposed
<niemeyer> Is it just me or the mouse can't click in Natty every once in a while?
<jimbaker>  niemeyer, i have seen something like this - just gets briefly frozen up
<jimbaker> in terms of user input
<niemeyer> Yeah
<jimbaker> timely event loop processing, it's an art ;)
<niemeyer> Not just briefly actually
<niemeyer> Can barely use the trackpad
<jimbaker> for me i have seen pauses up to maybe 15s, especially when i have initially started. pretty rare however. everything else iirc seems to work in terms of normal refreshing
<jimbaker> just the input processing
<niemeyer> Is there a reset-mouse? :)
<_mup_> ensemble/unit-agent-resolved r256 committed by kapil.thangavelu@canonical.com
<_mup_> pass transition variables to action.
<niemeyer> Switching to the terminal usually works, but not this time
<hazmat`> niemeyer, restarting unity from a vterm didn't help?
<niemeyer> No, generally kills the whole thing
<hazmat`> niemeyer, it's worked for me pretty consistently (vterm, kill compiz & gnome-screensaver, unity --replace, back to desktop)
<jimbaker> hazmat`, that definitely helped with the true freezes. need to attach that on a sticky note to my laptop ;)
<niemeyer> hazmat`: Yeah, that seems to have worked
<hazmat`> multi az/multi region mongodb replica set http://blog.mongodb.org/post/4982676520/mongodb-on-ec2-best-practices
<hazmat`> trunk has surpassed 1k tests, jimbaker you get a prize ;-)
<hazmat`> bcsaller, fwiw, i didn't get any errors on current trunk
<jimbaker> bcsaller, thanks for fixing refactor-to-yamlstate, that's working for me now
<bcsaller> jimbaker: woot
<_mup_> ensemble/unit-agent-resolved r257 committed by kapil.thangavelu@canonical.com
<_mup_> fold fire_hooks into transition-variables
<_mup_> ensemble/unit-agent-resolved r258 committed by kapil.thangavelu@canonical.com
<_mup_> unit workflow retry actions take an optional flag for fire hooks, workflow definition specifies alias for retry transitions.
<_mup_> ensemble/unit-agent-resolved r259 committed by kapil.thangavelu@canonical.com
<_mup_> lifecycle methods sans hook tests
<_mup_> ensemble/refactor-to-yamlstate r201 committed by bcsaller@gmail.com
<_mup_> revert minor changes and unused import
<hazmat`> niemeyer, got a moment?
<_mup_> ensemble/unit-agent-resolved r260 committed by kapil.thangavelu@canonical.com
<_mup_> allow unit resolved watch to stop watching.
<_mup_> ensemble/unit-agent-resolved r261 committed by kapil.thangavelu@canonical.com
<_mup_> allow unit relation resolved watch to stop watching.
<_mup_> ensemble/trunk r214 committed by bcsaller@gmail.com
<_mup_> Merge refactor to YAMLState. [r=niemeyer] [f=770448]
<_mup_> This branch refactors relation settings to use YAMLState internally. 
<_mup_> Changes relation_set commands to take a dict rather than a YAML serialized dict.
<niemeyer> hazmat`: Sorry, missed your question, and about to step out for a break with Ale
<niemeyer> hazmat`: I'll be back in 1h or so, though
<niemeyer> hazmat`: Can we talk then?
<_mup_> ensemble/expose-provisioning r217 committed by jim.baker@canonical.com
<_mup_> Minimal support for eventual consistency in provider firewalls
<hazmat`> niemeyer_bbl, tomorrow
<hazmat`> i'm out to a hadoop meetup
<jimbaker> hazmat`, have fun
<hazmat`> jimbaker, thanks
<jimbaker> reminds me, i promised to give a talk on ensemble + zk for our local hadoop/big data meetup in june
<niemeyer_bbl> hazmat`: Sounds good, enjoy!
#ubuntu-ensemble 2011-04-29
<_mup_> ensemble/config-state-manager r202 committed by bcsaller@gmail.com
<_mup_> merge trunk
<_mup_> ensemble/config-state-manager r203 committed by bcsaller@gmail.com
<_mup_> resolve conflict with merge
<_mup_> ensemble/config-state-manager r204 committed by bcsaller@gmail.com
<_mup_> backport review fixes
<niemeyer> Morning all
<_mup_> ensemble/expose-provisioning r218 committed by jim.baker@canonical.com
<_mup_> Testing on exposed service watching
<_mup_> ensemble/expose-provisioning r219 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<_mup_> ensemble/expose-provisioning r220 committed by jim.baker@canonical.com
<_mup_> Fix renaming of methods in dummy provider
<_mup_> ensemble/expose-dummy-provider r212 committed by jim.baker@canonical.com
<_mup_> Fixed up dummy tests (moved patch from downstream)
<_mup_> ensemble/expose-dummy-provider r213 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<_mup_> ensemble/config-state-manager r205 committed by bcsaller@gmail.com
<_mup_> get_config api switch out
<niemeyer_> So, ensemble.canonical.com should hopefully be live next week
<_mup_> Bug #773600 was filed: Hook scheduler should have on disk persistence <Ensemble:New> < https://launchpad.net/bugs/773600 >
<_mup_> ensemble/expose-provisioning r221 committed by jim.baker@canonical.com
<_mup_> Stop watch on service units once a service is unexposed
#ubuntu-ensemble 2011-04-30
<_mup_> ensemble/expose-provisioning r222 committed by jim.baker@canonical.com
<_mup_> More cleanup & comments
#ubuntu-ensemble 2011-05-01
<_mup_> ensemble/expose-provisioning r223 committed by jim.baker@canonical.com
<_mup_> More testing
<_mup_> ensemble/expose-provisioning r224 committed by jim.baker@canonical.com
<_mup_> More testing
<_mup_> ensemble/expose-provisioning r225 committed by jim.baker@canonical.com
<_mup_> Fixed cleanup of service states
<_mup_> ensemble/config-get r207 committed by bcsaller@gmail.com
<_mup_> protocol using new get_config
<_mup_> ensemble/config-get r208 committed by bcsaller@gmail.com
<_mup_> invoker code to get_config api
