#ubuntu-ensemble 2011-05-02
<_mup_> ensemble/expose-provisioning r226 committed by jim.baker@canonical.com
<_mup_> Work around the fact that the provisioning agent's state is stored in class attributes
<kim0> Morning everyone o/
<niemeyer> Good mornings!
<niemeyer> 1 WEEK!
<TeTeT> niemeyer: what for? UDS?
<niemeyer> TeTeT: Yeah :)
<jimbaker>  indeed, one week it is!
<_mup_> ensemble/unit-agent-resolved r262 committed by kapil.thangavelu@canonical.com
<_mup_> unit lifecycle resolving unit relations workflows
<jimbaker> this is interesting. if i repeatedly run (with -u) a test i have that exercises the service watching for exposed services in the provisioning agent, eventually it fails with an InternalTopologyError of service not found
<jimbaker> looking at the topo dump, the services dict is empty at that point
<jimbaker> i don't think this happens in my new code, but investigating further
<jimbaker> ahh, never mind... just the way the watch is run, this can happen. good
<_mup_> ensemble/expose-provisioning r227 committed by jim.baker@canonical.com
<_mup_> Fixed watch on service unit names so that the watch can run after the service topo has changed; tested 500+ tries
<niemeyer> jimbaker: Hmmm
<niemeyer> jimbaker: Is that related to the watch running after the test is done?
<niemeyer> jimbaker: This may be related to the issue I was discussing with hazmat on Friday
<jimbaker> niemeyer, yes, it is quite possible that the watch is running after the completion of the test
<niemeyer> jimbaker: Ok
<niemeyer> jimbaker: hazmat is working to avoid that kind of issue
<hazmat> niemeyer, regarding that i realized the plan formulated on friday doesn't work, since there are not necessarily any watches in play when sync is invoked, and there may be unrelated background deferreds
<niemeyer> hazmat: We covered that case on friday
<niemeyer> hazmat: There shouldn't be unrelated background deferreds lingering without a connection to the test ever
<hazmat> niemeyer, my understanding is that an alteration of the client sync to do a round trip poke to flush communication channels in addition to serialized watch execution was the plan. but the issue here isn't the watch firing at all
<niemeyer> hazmat: It is an early mistake we made in the design of watches that they allow that to happen
<niemeyer> hazmat: What's the issue?
<hazmat> its the get/exists information retrieval that establishes the watch, the processing of that initial data being async to the caller is what causes the problem
<niemeyer> hazmat: I'm not following you
<hazmat> there is no background watch callback execution per se then
<niemeyer> hazmat: We went through that on friday, precisely
<niemeyer> hazmat: I explained why the "initial data being async" isn't the problem
<niemeyer> hazmat: and you agreed
<niemeyer> hazmat: We took a while to get there as well, with examples and so on
<niemeyer> hazmat: If you think this isn't true anymore, I'll need some concrete examples
<hazmat> niemeyer, my understanding of the solution that was formulated on friday, was that it would be alteration of the zk client to serialize watch callback executions, and sync would ensure none are firing currently and none pending (via roundtrip to zk).
<hazmat> further reflection on that, and its not apparent that it solves the actual issue encountered here
<niemeyer> hazmat: Yes, none are fired currently, no writes within the roundtrip of sync execute
<hazmat> because there are no watches firing in the common case
<hazmat> instead the watch callback (ours not zookeeper) is being invoked with initial data reflecting the current state
<niemeyer> hazmat: I don't understand.. the problem we were solving is watches firing in the background.  If there are no watches firing, there is no problem.
<hazmat> niemeyer, there is a problem that the processing of initial data (that was retrieved from establishing the watch) is processed async to the caller
<niemeyer> hazmat: Oh my
<niemeyer> hazmat: You keep repeating that, but I already explained several times why this isn't the problem.
<niemeyer> hazmat: and you agreed on friday.. so again, I'll need some more concrete examples if you have some new information.
<_mup_> ensemble/unit-agent-resolved r263 committed by kapil.thangavelu@canonical.com
<_mup_> unit relation lifecycles can stop/start execution independent of related unit watching.
<hazmat> niemeyer, i agreed there were commutability problems with multiple async watch creators relying on state resulting in sequencing operations. i still think those are mostly immaterial since at all times operations attempt to reflect current state. I liked the alternative solution, and said i would think about it more, i have and afaics it doesn't solve the original issue.
<niemeyer> hazmat: The original issue and the lack of ordering are actually the same issue.
<hazmat> niemeyer, as for examples that show this, the one i just brought up embodies the original issue, and it isn't solved by this. calling sync and doing the round trip would only guarantee no zk client watch callback activity, but that's not related to our watch api invoking its callback with data that's independent of the zk watch firing
<niemeyer> hazmat: and as we discussed on Friday, I do not agree that it's not relevant.  I don't want us to have to think about ordering in tests.
<niemeyer> hazmat: You're thinking about a very narrow problem and trying to come up with a solution for it.
<niemeyer> hazmat: The issue is general.. we have been looking at watches running in the background since we started using that convention
<niemeyer> hazmat: We have to solve the actual issue, rather than a one-off issue
<hazmat> niemeyer, we constantly think about ordering in tests, except with this problem we can't rely on ordering
<niemeyer> hazmat: No, we never think about them, ever!
<niemeyer> hazmat: We never had to say "Oh, I will create that object first so that the watch of the 3 other objects fire at the time this deferred fires"
<hazmat> niemeyer, not sure how that relates.. we set up state and behavior in tests, and wait for the behavior to fire on the state. our ordering problem is that we can't wait for behavior to be in place before we modify state. at all times behavior execution reflects current state.
<niemeyer> hazmat: Sorry, can't parse that sentence
<niemeyer> hazmat: Whenever we add a watch, the code of that watch will run in background right now when the watch fires
<niemeyer> hazmat: and that's the problem..
<niemeyer> hazmat: Solve this problem, and we don't need to "wait for first fire"
<niemeyer> hazmat: Solve first fire, and you need to ensure that *the data was in the state when it first fires*.
<niemeyer> hazmat: Otherwise what will happen is that the first fire is dumb, because the state you wanted wasn't there
<hazmat> niemeyer, they are separate issues: in one case we have a zk watch firing, in the other we don't; at all times the ensemble api callback considers current state.
<niemeyer> hazmat: No, they are not separate issues.  One is a partial solution to the actual problem which requires hand-crafting ordering.  The other is the actual problem.
<niemeyer> hazmat: FWIW, I have nothing against waiting for the first state.. it sounds great, actually.
<niemeyer> hazmat: But when we start to tweak ordering to enable tests to pass because there was a watch running in the background, that will be an issue.
<hazmat> niemeyer, why doesn't the lack of a currently executing serialized watch in the zk client invalidate the usage? we could call sync immediately after lifecycle.start() for example, and it would return while there are still ongoing zk non-watch operations on the zk state.
<niemeyer> hazmat: "Why doesn't (...) invalidate the usage?" of what?
<hazmat> niemeyer, of the proposed solution we had friday
<niemeyer> hazmat: Which ongoing zk non-watch operations?
<hazmat> niemeyer, in this case, the establishment of unit relation workflows and lifecycles, that in turn establish watches, and fire hooks based on the current state.
<hazmat> niemeyer, in other cases its a bit simpler
<niemeyer> hazmat: I don't see how that's special in any sense?
<niemeyer> hazmat: If we wait on the end of those operations, and do a sync, it should work
<niemeyer> hazmat: wait on the non-watch operations, that is
<niemeyer> hazmat: the sync should wait for the rest to get done
<niemeyer> hazmat: This is twisted 101, and we haven't been following it appropriately.. an operation which will end in the future must return a deferred which fires when the operation is done executing.
<hazmat> like that of watch_hook_debug, the background activity is establishing the debug setup. or in debug_log its establishing internal logging. neither of those involve further watches, but they do involve get roundtrips.
<niemeyer> hazmat: watches are special, because we can't inherently wait for them because they are started out of band
<niemeyer> hazmat: That's why we need sync()
<niemeyer> hazmat: They should return deferreds, which should be waited upon by whoever called them
<hazmat> niemeyer, they do currently but only for the establishment of the watch, not the processing of the data retrieved that resulted from setting the watch.
<niemeyer> hazmat: Huh, how come?
<niemeyer> hazmat: Aren't they doing the equivalent of
<niemeyer> yield self._client.get(...)?
<niemeyer> hazmat: That effectively waits for the data to come back
<hazmat> niemeyer, yes, the data is retrieved but typically they do ...
<hazmat> d_get, d_watch = self._client.get_and_watch() 
<hazmat> d_get.addCallback(lambda x: ensemble_api_callback(x)) 
<niemeyer> hazmat: Do you have an example?
<niemeyer> hazmat: I mean, a pointer in the code
<hazmat> ah
<niemeyer> hazmat: In those cases, d_get should typically be returned
<niemeyer> hazmat: So that it gets chained
<niemeyer> hazmat: Since the result of the current function becomes the result of the get
<niemeyer> hazmat: and so on
<hazmat> niemeyer, ensemble.state.service.ServiceUnitState.watch_hook_debug
<niemeyer> hazmat: Looking
<hazmat> that should have yield exists_d at the end of it
<niemeyer>         exists = yield exists_d
<niemeyer> hazmat: and it does
<hazmat> niemeyer, er.. i meant yield callback_d
<niemeyer> hazmat: Nope, that's the watch, not the data as you mentioned above
<niemeyer> hazmat: Well, hmmm.. I see your point
<niemeyer> hazmat: But that's problematic, because the real watcher is being added to the chain.. we can't yield it
<hazmat> niemeyer, that's the simple case, the more complicated case is embodied in unit_lifecycle.start, process_service_changes, relation_workflow.start->relation_lifecycle.start
<niemeyer> hazmat: Nor return it
<hazmat> niemeyer, the real watcher isn't in the chain, just the establishment of the callback
<niemeyer> hazmat: This will never fire
<niemeyer> hazmat: True
<niemeyer> hazmat: Agreed this is a separate issue from synchronizing with watches
<hazmat> niemeyer, this problem exists in a more chained fashion (initial data callbacks, processing resulting in more watches, with additional data callbacks) in the lifecycle.start case
<niemeyer> hazmat: Sure, that's related to the problem of watch ordering that we've been debating
<niemeyer> hazmat: Just solving the initial data problem will not solve the fact watches will run in the background
<niemeyer> hazmat: We have to address both of them
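A minimal sketch of the pattern under discussion, based on the get_and_watch snippet pasted above; FakeClient and watch_something are made-up names, not the actual ensemble API. The point is that the caller must wait on the deferred that processes the initial data, while the long-lived watch deferred is only chained, since it may never fire:

    from twisted.internet import defer

    class FakeClient(object):
        """Stand-in for the zk client; get_and_watch returns a deferred for
        the node's current data and a deferred that fires when the zk watch
        later triggers."""
        def get_and_watch(self, path):
            data_d = defer.succeed(("initial-content", {"version": 0}))
            watch_d = defer.Deferred()  # may never fire
            return data_d, watch_d

    @defer.inlineCallbacks
    def watch_something(client, path, callback):
        data_d, watch_d = client.get_and_watch(path)
        # Anti-pattern from the discussion: hanging the initial-data
        # processing off the deferred without waiting on it leaves it
        # running in the background after the caller returns:
        #   data_d.addCallback(callback)
        # Sketch of the fix: wait for the initial-data processing before
        # this function's own deferred fires; only the long-lived watch
        # deferred stays chained (it can't be yielded or returned).
        content, stat = yield data_d
        yield defer.maybeDeferred(callback, content)
        watch_d.addCallback(lambda change: callback(change))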
<_mup_> ensemble/expose-provisioning r228 committed by jim.baker@canonical.com
<_mup_> Properly clean up port policy after a service is no longer exposed
<hazmat> niemeyer, in neither scenario i've posited is an actual zk watch firing.
<niemeyer> hazmat: The end goal is straightforward: yield whatever() + client.sync(), should never leave stuff hanging in the background
<niemeyer> hazmat: You said "processing resulting in more watches"
<niemeyer> hazmat: If that's not a zk watch, what is it?
<hazmat> niemeyer, more watches being established, but with those ensemble watch api callbacks being invoked with the initial data from establishing the watch
<hazmat> not from the watch itself firing
<niemeyer> hazmat: Ok, this sounds like the same problem still then?
<hazmat> niemeyer, it is just nested
<niemeyer> hazmat: deferred chaining FTW
<hazmat> and the initial execution of those is also the basis in this case for scheduling hook executions... it bears more thinking about.. 
<hazmat> i'm gonna grab a quick lunch before the standup
<niemeyer> hazmat: Cool, have a good one
<_mup_> ensemble/expose-provisioning r229 committed by jim.baker@canonical.com
<_mup_> Fix corner case on policy removal
<jimbaker> back in 10 min
<hazmat> jimbaker, sounds good
<hazmat> jimbaker, ready?
<jimbaker> bcsaller, hazmat, niemeyer - standup ?
<niemeyer> Yep
<bcsaller> yeah
<hazmat> niemeyer, one other thing regarding the resolved work... it looks like unit relations which have errors will currently automatically recover (start executing hooks again) if their containing unit is transitioned from stopped to started again
<hazmat> niemeyer, at the moment we don't ever execute stop except when we're killing the unit.. but i'm concerned what the correct behavior would look like.. i'm also looking at introducing a separate error state for unit relation workflows, but it has the same behavior on unit.start (relation automatically recovers)
<hazmat> the question is do we want that behavior
<hazmat> or force all errors to be explicitly recovered.. 
<niemeyer> hazmat: I think it's cool to auto-recover them in that situation
<niemeyer> hazmat: Stopping implies killing all relations
<hazmat> niemeyer, cool
<niemeyer> hazmat: So it's a bit like restarting the system rather than fixing it
<hazmat> niemeyer, yeah.. its a significant context switch
<niemeyer> +    def set_exposed_flag(self):
<niemeyer> bcsaller: This seems to be part of the config-state-manager branch
<bcsaller> merged trunk at some point
<niemeyer> bcsaller: Has a merge leaked in or something?
<niemeyer> bcsaller: Unless I'm doing something wrong, which wouldn't be surprising, this doesn't seem to be in trunk
<niemeyer> bcsaller: Launchpad seems to agree with me as well, given the diff there
<bcsaller> I thought that was from one of the first expose branches... I can look into it, but I didn't pull from anywhere else 
<niemeyer> bcsaller: I hope it's not our trunk in a bad state somehow :(
<bcsaller> 'else' == 'merge from anywhere but trunk'
<bcsaller> yeah
<niemeyer> bcsaller: What's the latest commit in your trunk?
<bcsaller> hmm, so when I grep local trunk it doesn't show up either... maybe it was botched at one point and fixed
<bcsaller> 214: refactor to YAMLState
<niemeyer> bcsaller: Hmm
<niemeyer> bcsaller: annotate would show it, if that was the case
 * niemeyer tries
<bcsaller> 213: Jim Baker 2011-04-27 [merge] merged expose-ensemble-commands [r=niemeyer][f=767407]
<bcsaller> which is where I thought that came from 
<hazmat> indeed it does
<niemeyer> Something pretty strange there
<hazmat> niemeyer, that method is on trunk
<niemeyer> hazmat: I know.. but the diff is still showing it as being introduced by this branch
<hazmat> hmm.. strange indeed, diff got confused perhaps
<hazmat> niemeyer, another interesting resolved issue: the on disk state and the in memory state won't be directly recoverable anymore without some state specific semantics for recovering from on disk state, ie a restarted unit agent with a relation in an error state would require special semantics around loading from disk to ensure that the in-memory process state (watching and scheduling but not executing) matches the recovery transition actions (which just restart hook execution, but assume the watch continues)
<hazmat> this functionality was added to better allow for the behavior that while down due to a hook error, the relation would continue to schedule pending hooks
<niemeyer> Definitely
 * hazmat notes the same in the code
<niemeyer> hazmat, bcsaller: It's most probably just a criss-cross that is confusing the diffing
<niemeyer> hazmat, bcsaller: Hmm.. nope
<niemeyer> hazmat, bcsaller: I think there's an actual issue
<niemeyer> bcsaller: Your branch has the code duplicated
<niemeyer> bcsaller: Probably a conflict which wasn't solved properly
<hazmat> hmm.. yeah. merge conflict issue
<bcsaller> hmm
<bcsaller> I'll see if I can fix it after the fact then. I don't recall this happening 
<niemeyer> bcsaller: You can.. just remove the incorrect lines
<niemeyer> bcsaller: Just be careful to remove the right ones, so that you're avoiding a change rather than moving code in trunk
<hazmat> bzr diff --using=meld -r 213 path_to_file.py
<hazmat> should help
<niemeyer> hazmat: Re. the recover state, yeah, that's tricky
<niemeyer> hazmat: Maybe we should stop trying to be magic about crashes
<niemeyer> hazmat: and simply introduce a e.g. "stuck" state
<niemeyer> Hmmm..
<niemeyer> We have to talk more about this
<hazmat> niemeyer, its fine if we introduce additional hooks for an on disk loader
<niemeyer> Because it involves partitioning situations and the suck
<hazmat> i mean per state hooks
<niemeyer> such
<niemeyer> (which suck, for sure :)
<hazmat> i've noted the problem in workflow.py, sans partitioning problems, for future, there's a bug tracking this functionality though afaicr
<niemeyer> hazmat: You mean starting to dump the pending relation changes too? 
<hazmat> niemeyer, yup
<niemeyer> hazmat: and what about the changes lost while in the crashed state?
<hazmat> niemeyer, good question 
<niemeyer> hazmat: I think we need a simpler strategy for "catching up" after a reestablished unit agent
<hazmat> niemeyer, i think we'll need some sort of read through of the relation and diff against the old known state to create the minimal hook execution needed to transition the local to the global state
<hazmat> sorry.. bad wording.. read through of all the related units
<hazmat> which might also work for partitioning, if we enable this for connection reconnects
<niemeyer> hazmat: Maybe.. or maybe we just need a simpler and well-understood behavior on a reestablishment
<niemeyer> hazmat: Even if it involves slightly diverging behavior from normal operation
<niemeyer> hazmat: This feels like a good topic for a session at UDS
<hazmat> niemeyer, without the diff and its application via formula hooks, its hard to see what state/hook execution guarantees we offer to formulas
<hazmat> niemeyer, sounds good
<niemeyer> hazmat: Agreed, the answer doesn't seem straightforward yet
<niemeyer> hazmat: But what I'm proposing isn't that we allow changes to simply be ignored
<niemeyer> hazmat: Maybe there are some guarantees we can give to the formula which would suffice
<niemeyer> hazmat: E.g. "Oh, if the zk session is reestablished, the -relation-changed hook is necessarily run." 
<niemeyer> hazmat: I don't know.. that may be crack, but that's the kind of idea I'm talking about
<hazmat> hmmm... but joins and departs could also be there... its worth thinking about.. but i think implementing the application of the state diff via hooks is a pretty good (if more complicated to implement) strategy, we'd get the merge of redundant events, and maintain all the same guarantees.. but we can talk about it at uds
<niemeyer> hazmat: Joins and departs can always be acknowledged post-mortem
<niemeyer> hazmat: The nodes are effectively gone
<hazmat> niemeyer, not combined, separate. the combined case is handled by the merge which strips as redundant. 
<niemeyer> hazmat: Complex and smart logic is the kind of thing which yields major-EBS-fail kind of problem.  Not saying we shouldn't do it, but it would be awesome to have something simple and predictable.
<hazmat> niemeyer, sure as a general rule, simple beats complex, but done right this allows much simpler reasoning about state for formula authors.
<niemeyer> hazmat: Agreed, we're having some trouble to come up with complex-and-done-right, though, which is an indication that the simpler-and-predictable should still be on the table.
<hazmat> niemeyer, the notion that anytime as a formula author i might have arbitrary events happen, and all i get is a change hook call, and have to refetch all interesting state and diff within the formula to determine what happened is a problematic burden for formula authors imo... sometimes complexity is justifiable for features.. if we can get those features in a simpler way, that sounds great as well.
<niemeyer> hazmat: You seem to be judging a proposal which I haven't made.
<niemeyer> hazmat: I'm suggesting we should still be looking for a simpler solution, rather than coming up with very complex logic which is error prone.
<niemeyer> hazmat: If we can't come up with the simpler approach, sure, let's go the difficult way
<niemeyer> hazmat: But it feels like we haven't explored the problem well enough yet
<hazmat> niemeyer, no i'm advocating for one i made, there isn't any other proposal on the table, i don't see another option for maintaining the same guarantees to formula authors... if there are simpler alternatives we should definitely consider them.
<niemeyer> hazmat: "<hazmat> niemeyer, the notion that anytime as a formula author i might have arbitrary events happen, and i all get is a change hook call"
<niemeyer> hazmat: That's what I was talking about
<hazmat> niemeyer, <niemeyer> hazmat: E.g. "Oh, if the zk session is reestablished, the -relation-changed hook is necessarily run." 
<hazmat> i realize that wasn't a proposal just an exploration, but part of my comments were addressing that
<niemeyer> hazmat: Yes, that's the *kind* of solution
<niemeyer> niemeyer> hazmat: I don't know.. that may be crack, but that's the kind of idea I'm talking about
<niemeyer> hazmat: In your proposal, what happens if the agent crashes while the state is being written?
<niemeyer> hazmat: and what is the ordering of events, to ensure that, given a relation modification occurring, the event isn't lost?
<niemeyer> hazmat: That's the kind of trickery we'll be facing
<hazmat> niemeyer, if the agent crashes and the change is not on disk, then its part of the state diff to be applied to the loaded state 
<hazmat> there's definitely stuff to be worked out.
<niemeyer> hazmat: I understand that, but the questions above apply irrespective of it
<hazmat> agreed, we should discuss it more at uds, i should get back to trying to finish resolved
<_mup_> ensemble/config-state-manager r206 committed by bcsaller@gmail.com
<_mup_> resolve merge conflict
<niemeyer> hazmat: My point is simply that it's far from easy to come up with the precise behavior to recover the exact agent state from disk, and it won't be fun to debug an improperly recovered agent which drops arbitrary events for whatever reason.
<niemeyer> hazmat: If that's what we have to do, let's do it, but I'd really like to explore some simpler alternatives on the way to it.
<_mup_> ensemble/unit-agent-resolved r264 committed by kapil.thangavelu@canonical.com
<_mup_> unit relations transition to an explicit error state on a hook failure, and are recovered automatically if the unit restarts.
<_mup_> ensemble/config-state-manager r207 committed by bcsaller@gmail.com
<_mup_> resolve merge conflict in tests
<_mup_> ensemble/unit-agent-resolved r265 committed by kapil.thangavelu@canonical.com
<_mup_> resolved recovery from relation error state, renables hook execution only (watches still running).
<_mup_> ensemble/unit-agent-resolved r266 committed by kapil.thangavelu@canonical.com
<_mup_> if a unit relation is already running, and its not resolved, the resolved setting is cleared.
<hazmat> hmm.. i need to split this branch
<_mup_> ensemble/unit-agent-resolved r267 committed by kapil.thangavelu@canonical.com
<_mup_> pull the resolved watch in the unit agent for a future branch.
<_mup_> ensemble/expose-provisioning r230 committed by jim.baker@canonical.com
<_mup_> More testing
<_mup_> ensemble/trunk-merge r189 committed by kapil.thangavelu@canonical.com
<_mup_> merge trunk
<_mup_> ensemble/expose-provisioning r231 committed by jim.baker@canonical.com
<_mup_> Cleanup
<_mup_> ensemble/resolved-state-api r202 committed by kapil.thangavelu@canonical.com
<_mup_> merge trunk and pep8ify it.
<_mup_> ensemble/expose-provisioning r232 committed by jim.baker@canonical.com
<_mup_> PEP8
<_mup_> ensemble/formula-upgrade r269 committed by kapil.thangavelu@canonical.com
<_mup_> doc cleanup
<hazmat> jimbaker,  test_watch_exposed_flag_waits_on_slow_callbacks seems to hang for me regularly.
<hazmat> bcsaller, the merge diff on your branch looks strange, now it looks like it removes ensemble-expose on trunk
<hazmat> niemeyer, is there a hook into mup for reporting builds from an external system like jenkins?
<niemeyer> hazmat: It should be very easy to send arbitrary data to mup
<niemeyer> hazmat: Using the same mechanism we use for sending commit reports
<niemeyer> hazmat: It's a line based protocol with a username/password prefix
<hazmat> niemeyer, cool
<niemeyer> hazmat: echo + socat should be able to do it even via shell
<niemeyer> biab
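A rough Python equivalent of the echo + socat approach, based only on the description above ("a line based protocol with a username/password prefix"); the host, port, and exact line format here are assumptions, not the real mup setup:

    import socket

    def send_mup_report(host, port, username, password, message):
        # Hypothetical line format: credentials prefix, then the message,
        # terminated by a newline.
        line = "%s:%s %s\n" % (username, password, message)
        sock = socket.create_connection((host, port))
        try:
            sock.sendall(line.encode("utf-8"))
        finally:
            sock.close()

    # e.g. from a jenkins post-build step:
    # send_mup_report("mup.example.com", 9999, "jenkins", "secret",
    #                 "ensemble trunk r215: all tests passed")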
#ubuntu-ensemble 2011-05-03
<niemeyer> Starting to fade
<niemeyer> I think signatures are fully working
<_mup_> ensemble/unit-agent-resolved-part-two r269 committed by kapil.thangavelu@canonical.com
<_mup_> unit agent resolving units.
<_mup_> ensemble/unit-agent-resolved-part-two r270 committed by kapil.thangavelu@canonical.com
<_mup_> fix a typo in the stop transition action
<_mup_> ensemble/unit-agent-resolved-part-two r271 committed by kapil.thangavelu@canonical.com
<_mup_> round out unit agent resolved test cases.
<_mup_> Bug #776014 was filed: unit agent needs support for resolving units <Ensemble:In Progress by hazmat> < https://launchpad.net/bugs/776014 >
<hazmat> okay.. that's the last branch of the resolved work.
 * hazmat calls it a night
<_mup_> ensemble/config-set r209 committed by bcsaller@gmail.com
<_mup_> wip to show kapil
<_mup_> ensemble/config-set r210 committed by bcsaller@gmail.com
<_mup_> fix typo
<hazmat> bcsaller, this is what i was thinking of for the test api https://pastebin.canonical.com/47020/
<bcsaller> that looks like a good start
<_mup_> ensemble/expose-provisioning r233 committed by jim.baker@canonical.com
<_mup_> More work on testing cleanup and handling scenarios occurring at that time
<_mup_> ensemble/expose-provisioning r234 committed by jim.baker@canonical.com
<_mup_> Refactored callbacks
<_mup_> ensemble/expose-provisioning r235 committed by jim.baker@canonical.com
<_mup_> Refactored callbacks
<_mup_> ensemble/config-state-manager r208 committed by bcsaller@gmail.com
<_mup_> reapply expose stuff, shouldn't be here, but this fixes the issue
<_mup_> ensemble/expose-provisioning r236 committed by jim.baker@canonical.com
<_mup_> Properly expose services if provisioning agent restarts
<kim0> Morning everyone
<_mup_> ensemble/config-get r209 committed by bcsaller@gmail.com
<_mup_> still dealing with bad merge from before, forward prop changes
<_mup_> ensemble/config-set r212 committed by bcsaller@gmail.com
<_mup_> check point config set
<kim0> niemeyer: Morning o/ 
<hazmat> g'morning folks
<hazmat> niemeyer, i think that's the last branch for resolved in review
<kim0> hmm why is it that my merges to other projects like https://code.launchpad.net/~ubuntu-branches/ubuntu/natty/logrotate/natty
<kim0> appear with my name on them, while for ensemble, the reviewer's name is put instead
<kim0> not that I care much, but new members will probably be too excited seeing their own name besides their formulas and stuff
<hazmat> kim0, did you commit the merge on the other projects?
<kim0> hazmat: no I don't have upload rights
<kim0> I think Daviey did for logrotate
<hazmat> kim0, we have this commit message notation syntax for when doing merges on behalf of other users [a=kim0]... does seem a little strange
<hazmat> that bzr would pull the commit as a merge in one case.. and pull the underlying commits in the other
<kim0> yeah :/
<hazmat> kim0, it looks like that was actually committed by a daemon, the committer and author metadata at the bzr level is different
<hazmat> ah.. ic.. you can pass --author to bzr commit
<hazmat> yeah.. we should start using that
<kim0> hazmat: so that is something you guys would need to do, not the person creating the branch right
<hazmat> kim0, yup.. in the case of that logrotate branch it looks like some automated process kicked in and did the merge after review
<kim0> cool .. @everyone please do that then while merging :)
<kim0> thanks hehe
<kim0> Another thingie, I'm starting a new little doc on writing and contributing an Ensemble formula
<kim0> for that, I need a simplistic "hello world" style formula
<kim0> any suggestions ?
<kim0> I think someone mentioned there could be some user formula for adding/deleting users .. that sounds simple enough, but not sure how it'd work
<kim0> maybe a motd formula!
<hazmat> kim0, that's machine config, not service deployment.
<kim0> well yeah I understand .. mm
<hazmat> kim0, i agree its something we should have some facilities for, machine level policies
<kim0> I mean it can be done today right ?!
<kim0> it's just that we don't want to huh
<kim0> so if an admin wants to create users (or change motd) .. do we say ensemble is not for that 
<niemeyer> hazmat, kim0: Good morning!
<niemeyer> hazmat: That's awesome
<niemeyer> hazmat: I'll have a full pass today
<hazmat> niemeyer, great
<niemeyer> Auth actually works, btw!
<hazmat> niemeyer, i'm thinking i should do some work with ben on the service-config.. else i can dig back into the debug-hook stuff
<niemeyer> hazmat: http://pastebin.ubuntu.com/602773/
<niemeyer> hazmat: Sounds good, it feels like he'd benefit from a hand on the hooks part
<hazmat> niemeyer, cool
<hazmat> re auth
<niemeyer> hazmat: Yeah, it turned out pretty neat.  This enables us to authenticate people in the repo
<niemeyer> hazmat: Sending them back and forth to Launchpad for authentication
<hazmat> kim0, they can add users as part of managing a service
<hazmat> kim0, for example someone doing a heroku clone, might deploy a service formula per user, and the individual units could create a new user representing that customer on a machine for launching the appserver...
<kim0> Yeah, got it .. so what's a good hello-world formula, that hopefully doesn't involve complex config files ..etc
<hazmat> kim0, one that prints 'hello world'.. really though if you want to just have it add a user its fine.. just overkill
<hazmat> kim0, i'd probably do one that uses a packaged service.. like take nginx or apache as an example.. as it provides enough functionality to grow the formula over multiple tutorial parts. 
<_mup_> Bug #776426 was filed: Add debug hook cli flag for deploy and add-unit. <Ensemble:New> < https://launchpad.net/bugs/776426 >
<jimbaker> i should be available when there, but just in case i'm silent: i need to take my toyota to the dealer
<hazmat> bcsaller, just sent you an email regarding the remaining stuff i could think of for service-config.. give me a ping when you've got a moment
<hazmat> bzr cli parsing does seem a bit nicer than argparse
<jimbaker> hazmat, i think subcommand support was never really completed in argparse. and that's where the comparison w/ bzr is pretty obvious
<hazmat> jimbaker, subcommands are fine, its the help/output formatting usage and customization that's atrocious imo
<jimbaker> hazmat, but that is part of what it should be - good help for subcommands
<jimbaker> makes sense that combining functionality - help + subcommands, for example - would show the most gaps in something like argparse
<bcsaller> hazmat: I realized last night that some of the files on that branch were not committed. Hopefully you were able to pull them when you had a look this morning?
<hazmat> bcsaller, i had a look and set up a pipeline, but there were a lot of conflicts with trunk
<hazmat> bcsaller, any chance you could resolve those and push an updated version? i'd like to use a clean branch point to start with
<bcsaller> I can try to remerge trunk in the last phase of the pipeline, sure. I didn't realize
<bcsaller> there is no conceptual conflict so it doesn't make sense 
<koolhead17> hi all
<hazmat> bcsaller, i've found typically its better to merge at the start of the pipeline (i use a trunk-merge pipe) and then pump the changes through
<bcsaller> yes, but the start of my pipeline was already merged 
<bcsaller> I don't know if thats the source of the issues 
<hazmat> bcsaller, you can add a secondary merge-trunk pipeline after that
<bcsaller> ahh
<bcsaller> I'm happy to try that 
<hazmat> bcsaller, out of curiosity what's the name of the hook (for config-changed)?
<bcsaller> config-changed
<bcsaller> later part of the spec has it
<hazmat> bcsaller, cool, i'm starting off with a fresh trunk branch for the moment
<hazmat> bcsaller, was that assessment of parts that are ready or almost ready accurate?
<bcsaller> minus any of the merge issues they are fine, I'll try merging trunk again and seeing if it breaks things or not
<_mup_> ensemble/service-config-unit-lifecycle r215 committed by kapil.thangavelu@canonical.com
<_mup_> unit lifecycle.configure method for processing service configuration changes.
<_mup_> ensemble/expose-provisioning r237 committed by jim.baker@canonical.com
<_mup_> Comments and method rename to indicate (somewhat more) private api
<hazmat> i'm not entirely clear what the error handling for configure errors should look like.
<hazmat> bcsaller, any thoughts on what that behavior should be?
<_mup_> ensemble/expose-provisioning r238 committed by jim.baker@canonical.com
<_mup_> Removed eventual consistency support for incorporation into a later branch
<bcsaller> let me see if I covered that in the spec, one sec
<bcsaller> " Errors in the config-changed hook force ensemble to
<bcsaller> assume the service is no longer properly configured. If the service is
<bcsaller> not already in a stopped state it will be stopped and taken out of
<bcsaller> service"
<hazmat> stopped and taken out of service are slightly different, but easy enough.. it will mean missed hook executions though for relations, if its brought back online
<_mup_> ensemble/expose-provisioning r239 committed by jim.baker@canonical.com
<_mup_> Test comments
<_mup_> ensemble/config-get r210 committed by bcsaller@gmail.com
<_mup_> better merge resolution, less juggling
<jimbaker> this branch is finally getting into a good state for review. i am relieved
<jimbaker> just need a few more tests...
<_mup_> ensemble/service-config-unit-lifecycle r216 committed by kapil.thangavelu@canonical.com
<_mup_> unit workflow support for service configuration
<_mup_> ensemble/config-set r213 committed by bcsaller@gmail.com
<_mup_> Better ptests in Makefile, sensitive to new files properly
<_mup_> Bug #776596 was filed: unit workflow and lifecycle must support service configuration <Ensemble:New for hazmat> < https://launchpad.net/bugs/776596 >
<hazmat> bcsaller, just pushed the lifecycle/workflow changes for review based on trunk, i'm going to switch tracks back to debug-hook, unless you'd like me to work on the watch_config_changed stuff?
<bcsaller> you're fast
<bcsaller> I'll look it over, but thats very helpful. Work on the debug hook
<hazmat> bcsaller, practice makes perfect.. there's some work to be done wrt to resolved, but that should probably land first
<_mup_> Bug #776605 was filed: Hooks need access to service options <Ensemble:New for bcsaller> < https://launchpad.net/bugs/776605 >
 * hazmat bangs head against merge
 * niemeyer switches to reviewing mode
<niemeyer> > +class ServiceUnitRelationResolvedAlreadyEnabled(StateError):
<niemeyer> > +    """The unit has already been marked resolved.
<niemeyer> > Documentation needs updating.
<niemeyer> how so?
<niemeyer> hazmat: s/unit/relation/
<niemeyer> hazmat:
<niemeyer> """
<niemeyer> It is actually making a request for the problem to be solved, not solving the 
<niemeyer> problem directly, as there are behavioral aspects to transitions that need be 
<niemeyer> done inside of the unit agent.
<niemeyer> """
<niemeyer> hazmat: That's not the case, last we spoke
<niemeyer> hazmat: resolved means "I have resolved the problem." (hence the 'd')
<niemeyer> hazmat: The problem as in "the reason why this hook failed"
<niemeyer_> Argh..
<niemeyer_> What was the last message I sent?
<hazmat> niemeyer, failed."
<niemeyer> hazmat: Ok, so it did go through, thanks
<niemeyer> hazmat: I ended up just providing better organized comments in the merge proposal
<niemeyer> hazmat: So feel free to ignore the IRC stuff
<hazmat> niemeyer, resolved is a request to make something happen.. 
<niemeyer> hazmat: Yes, to allow the relation/unit to go ahead
<hazmat> its marking the unit as resolved, but the actual resolution happens on the agent
<niemeyer> hazmat: It's a statement made by the user that the problem has been resolved
<niemeyer> hazmat: No..
<niemeyer> hazmat: The resolution has already happened
<niemeyer> hazmat: We're simply telling the agent "go on"
<hazmat> niemeyer, it hasn't till the agent processes the resolved request
<niemeyer> hazmat: I hope one day we can make agents which resolve problems, though ;-)
<niemeyer> hazmat: It has happened
<niemeyer> hazmat: That's why the command is "resolved" and not "resolve"
<hazmat> niemeyer, the request has been made, but not acted upon
<niemeyer> hazmat: The request to go on, yes
<hazmat> yes
<niemeyer> hazmat: Not the request to resolve the problem which caused the script/hook to fail
<hazmat> but till its processed by the agent, the goal is not accomplished.
<niemeyer> hazmat: I'm banging on that because that's the idea we have to provide to users
<niemeyer> hazmat: Agreed.. it's just terminology
<niemeyer> hazmat: The goal of letting the agent continue normal operation
<niemeyer> hazmat: Not the goal of resolving the problem.. that one must have been done already
<niemeyer> hazmat: (out of band)
<hazmat> right
<niemeyer> Is it just me or Launchpad fonts seem to be a bit less readable lately?
<niemeyer> Setting the font color to black and increasing them a bit does wonders
 * niemeyer <= getting old
<_mup_> ensemble/expose-provisioning r240 committed by jim.baker@canonical.com
<_mup_> Tests for watch_service_states
<_mup_> ensemble/unit-agent-resolved r269 committed by kapil.thangavelu@canonical.com
<_mup_> update unit tests to use relation workflow accessor on lifecycle.
<hazmat> niemeyer, the resolved branch is ready for review  (fixed the accessor issue)
<niemeyer> hazmat: Sweet, thanks
<niemeyer> Y.all("body")
<niemeyer> EWINDOW
<_mup_> ensemble/debug-hook-scope-guard r210 committed by kapil.thangavelu@canonical.com
<_mup_> merge trunk and resolve conflict.
<hazmat> niemeyer, regarding the relation resolved stuff, i realized that the retry flag argument to the unit relations doesn't have any real meaning, as it just places them back into an operational state, it doesn't re-execute a failed hook.
<niemeyer> hazmat: Hmmm, good point
<niemeyer> hazmat: Although, shouldn't it reexecute?
<hazmat> niemeyer, it doesn't at the moment, we don't save enough contextual information regarding the hook to do so, nor do we have a means to populate it artificially in the context.. we could if we want to enable it but i think its probably future work for such.. for now i'd just raise an error if retry is specified for a unit relation
<niemeyer> hazmat: Sounds good
<hazmat> niemeyer, also regarding the debug-hook stuff, i've looked at it and looked over the review, i tried out the screen user multiplexing but it would require suid, the sudo implementation works well
<hazmat> niemeyer, not sure that there's anything else to do for it.. 
<hazmat> niemeyer, https://code.launchpad.net/~hazmat/ensemble/debug-hook-scope-guard/+merge/54433
<hazmat> it will be nice to incorporate that functionality into other subcommands (deploy, add-unit, resolved --retry) with a --debug-hook option, also makes debugging install/start much easier.
<niemeyer> hazmat: I'll check it out
<niemeyer> bcsaller: I'm wondering a bit about the caching behavior of that get_config() method
<niemeyer> bcsaller: Is this really what we want to do?
<niemeyer> bcsaller: It means get_config() never updates the state read
<niemeyer> bcsaller: I wonder if we should simply return a new YAMLState on every get_config()
<niemeyer> bcsaller: How would that affect existing code?
<niemeyer> hazmat: ^ You may have some input on this as well?
<bcsaller> that was to provide a consistent view throughout hook execution
<niemeyer> bcsaller: The consistent view is more than throughout hook execution.. it never updates
<niemeyer> bcsaller: (remember, this is the ServiceState type)
<bcsaller> but the lifetime of the service manager is scoped to the hook, if thats not the case then it should change 
<hazmat> just stepping back in
<bcsaller> my understanding is these managers are created as needed
<niemeyer> bcsaller: So ServiceState.get_config() effectively returns the configuration of some arbitrary point in the past, when whoever first called it
<hazmat> so we want it to update, but be cached in the hook context for hook execution
<bcsaller> if the service manager always returns a new one thats fine, then the hook needs to cache it 
<niemeyer> bcsaller: Creating a whole new manager to update the state of the config feels a bit like things are reversed
<niemeyer> hazmat: Yeah, that's what it feels to me too
<niemeyer> bcsaller: +1
<niemeyer> bcsaller: Otherwise it really becomes random..
<hazmat> ServiceState.get_config should do no caching
<bcsaller> simple change
<hazmat> cool
<niemeyer> bcsaller: The hook itself can't know when it was first read
<hazmat> we don't have any caching in our state apis
<niemeyer> bcsaller: Maybe someone else called get_config() for whatever reason, and the "snapshot" becomes ancient even before it should
<hazmat> minus the hook context
<niemeyer> bcsaller: Please test that non-caching behavior
 * hazmat wonders what to work on
<niemeyer> bcsaller: +1, besides that
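A sketch of the behavior agreed on here, using stand-in classes rather than the real ensemble ones: the state layer never caches, and any snapshot consistency a hook needs comes from caching in the hook context for the lifetime of that one execution. The YAMLState constructor signature below is assumed for illustration:

    class YAMLState(object):
        """Trivial stand-in for the real YAMLState node wrapper."""
        def __init__(self, client, path):
            self.client, self.path = client, path

    class ServiceState(object):
        def __init__(self, client, path):
            self._client, self._path = client, path

        def get_config(self):
            # No caching at this layer: every call hands back a fresh
            # YAMLState bound to the config node, so readers always see
            # current data when they load it.
            return YAMLState(self._client, self._path)

    class HookContext(object):
        def __init__(self, service_state):
            self._service_state = service_state
            self._config = None

        def get_config(self):
            # Cache only within a single hook execution, so the hook sees
            # one consistent snapshot.
            if self._config is None:
                self._config = self._service_state.get_config()
            return self._config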
<niemeyer> hazmat: Oh ho ho.. I has ideas..
<niemeyer> hazmat: ;-)
<hazmat> niemeyer, i thought you might, just looking over the milestone bugs atm
<niemeyer> bcsaller: It feels like this branch is almost getting to a one-liner :-)
<hazmat> niemeyer, i'm probably going to have some more to do, after the reviews land... but i think i might manage another small task
<niemeyer> hazmat: Yeah, a small task might require some further thinking
<hazmat> i've got a couple from the top of the milestone, cli beautification and bootstrap/shutdown with environmental awareness
<hazmat> niemeyer, did you have one in particular in mind?
<niemeyer> hazmat: Nah.. we already talked about them.. they're too large for this week
<niemeyer> hazmat: Well.. hmmm
<hazmat> yeah.. those require some thinking, i could at least tackle it for the simple case
<niemeyer> hazmat: Here is an idea: implement an auto-dependency-resolution hack.
<hazmat> and leave the lifecycle.start nested case for  post budapest
<hazmat> niemeyer, i get to HACK?! :-)
<hazmat> cool
<niemeyer> hazmat: Yeah, like, totally simple and dirty
<hazmat> i'll see what i can do there
<niemeyer> hazmat: Something specific for UDS, which we remove afterwards
<hazmat> niemeyer, sounds good
<_mup_> ensemble/config-state-manager r209 committed by bcsaller@gmail.com
<_mup_> get_config shouldn't cache inside service layer
<_mup_> ensemble/config-get r212 committed by bcsaller@gmail.com
<_mup_> added test verify cached copy is returned on get_config in hook
<niemeyer> Hmmm
<niemeyer> I have to run an errand.. will do that now that there's still sunlight and will be back to reviewing in 1.5h or so.
<hazmat> niemeyer, cheers
<hazmat> i'm gonna roll out in a few as well for 1.5hr
<_mup_> ensemble/expose-provisioning r241 committed by jim.baker@canonical.com
<_mup_> Tests for ServiceState.watch_service_units
<hazmat> hasta luego
<niemeyer> That was faster than expected
<_mup_> ensemble/trunk r215 committed by bcsaller@gmail.com
<_mup_> Service state changes supporting configuration options [r=niemeyer] [f=746695]
<_mup_> Introduces get_config to ServiceState returning a YAMLState object.
<_mup_> ensemble/expose-provisioning r242 committed by jim.baker@canonical.com
<_mup_> Test get_opened_ports, watch_opened_ports
<hazmat> backs
<hazmat> this auto dep stuff has one unforeseen requirement.. i need to track the deps pairwise to do the relation work
#ubuntu-ensemble 2011-05-04
<niemeyer> hazmat: I don't follow
<niemeyer> But have to step out to pick up Ale now
<niemeyer> hazmat: Let's talk tomorrow
<_mup_> ensemble/expose-provisioning r243 committed by jim.baker@canonical.com
<_mup_> PEP8
<_mup_> ensemble/expose-provisioning r244 committed by jim.baker@canonical.com
<_mup_> More PEP8
<_mup_> ensemble/expose-provisioning r245 committed by jim.baker@canonical.com
<_mup_> Merge trunk and resolved conflicts
<jimbaker> hazmat, it does seem like we are increasingly having random failures due to watches
<hazmat> jimbaker, they're not random
<hazmat> they have causes, and tests need to account for it
<hazmat> jimbaker, if you're not clear on why it's happening, do a self.capture_output()
<jimbaker> sounds good. so what i'm seeing is mostly around cleanup. do you have any good suggestions for avoiding zookeeper.ClosingException: zookeeper is closing exceptions?
<hazmat> jimbaker, they show up randomly in tests, but a broken test in this context, is pretty much always broken.. 
<hazmat> jimbaker, it helps to understand which activities cause background operations that need to be synced
<hazmat> jimbaker, lifecycle.start() is the primary offender
<hazmat> jimbaker, you can do a yield self.sleep(0.1) or do it early in your test if it has any appreciable time.. or do a yield lifecycle.stop() before the end of the test
<hazmat> jimbaker, it depends on the context
<hazmat> jimbaker, on trunk i saw a particular test fail regularly from the endpoint stuff the slow watch callback
<jimbaker> hazmat, so i'm setting this in a variety of tests: test_watch_exposed_flag_waits_on_slow_callbacks, iirc test_watch_relations_may_defer
<hazmat> er.. not endpoint but exposed
<jimbaker> hazmat, so we agree on that
<jimbaker> hazmat, i have been adding sleeps to avoid, and it certainly helps
<jimbaker> just not 100%
<jimbaker> certainly it's much more likely with -u, that's a good way to see them
<jimbaker> but if sleep is our best solution for stuff not involved in lifecycles... is that really a solution?
<jimbaker> hazmat, seems like getting watches right would be very useful for next week
<jimbaker> hazmat, the other thing is that my current testing of service exposing may have some similarity to the lifecycle testing because it asserts complete cleanup at the end of each test, which seems to be hard with the current watch setup
<hazmat> jimbaker, there are lots of examples
<hazmat> jimbaker, i agree 
<hazmat> jimbaker, its not really a solution, stopping the lifecycle also works, or waiting on a hook execution, it really depends on the context
<jimbaker> hazmat, agreed on that
<_mup_> ensemble/expose-dummy-provider r214 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<hazmat> jimbaker, getting niemeyer to understand took much longer than i thought.. i'll see if i can take care of it while we're in budapest, and we can clean up tests incrementally
<jimbaker> hazmat, sounds like a good plan
<_mup_> ensemble/expose-dummy-provider r215 committed by jim.baker@canonical.com
<_mup_> Comments
<_mup_> ensemble/expose-dummy-provider r216 committed by jim.baker@canonical.com
<_mup_> Fix upstream changes
<_mup_> ensemble/expose-provisioning r246 committed by jim.baker@canonical.com
<_mup_> Merged trunk & resolved conflicts
<_mup_> ensemble/expose-provisioning r247 committed by jim.baker@canonical.com
<_mup_> PEP8
<_mup_> ensemble/expose-provisioning r248 committed by jim.baker@canonical.com
<_mup_> Cleanup
<hazmat> jimbaker, were you able to fix the tests that failed on trunk in one of those branches?
<jimbaker> hazmat, i was not
<jimbaker> hazmat, i do know if i increase the sleep time, i see less failures
<jimbaker> but having 2s sleep or whatever seems crazy
<hazmat> jimbaker, a better way is to find out what the background activity is, and create a test helper that allows you to observe/sync on some post condition of the background activity
<hazmat> jimbaker, what's it doing in the background in expose watch?
<jimbaker> hazmat, i have a pretty careful record of the background activity in terms of the logging
<jimbaker> in terms of removing service units for example, and how that cascades through
<jimbaker> and as i mentioned, it is more than this new code, it seems common across watches
<hazmat> jimbaker, try adding a yield callback at the bottom of watch_exposed_flag
<jimbaker> hazmat, that only makes it worse it seems
<hazmat> jimbaker, what branch?
<hazmat> and what test?
<jimbaker> lp:~jimbaker/ensemble/expose-provisioning, ensemble.agents.tests.test_provision
<jimbaker> hazmat, taking off for now, but definitely appreciate if you find anything. thanks!
<hazmat> jimbaker, cool have a good one, fwiw it helps to solve these problems in reverse, at the higher level you get multiple issues nesting, at the lower level you have some hope of sanity building on re-fortified substrates
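A hypothetical test helper along the lines hazmat suggests above: instead of a fixed sleep, poll for an observable post-condition of the background activity. The helper name, the polling approach, and the agent.opened_ports attribute in the usage comment are illustrative, not existing ensemble test code:

    from twisted.internet import defer, reactor

    @defer.inlineCallbacks
    def wait_for_condition(condition, timeout=5.0, interval=0.05):
        """Poll `condition` (a no-arg callable, possibly returning a
        deferred) until it is true or `timeout` seconds elapse."""
        elapsed = 0.0
        while elapsed < timeout:
            result = yield defer.maybeDeferred(condition)
            if result:
                defer.returnValue(True)
            pause = defer.Deferred()
            reactor.callLater(interval, pause.callback, None)
            yield pause
            elapsed += interval
        raise AssertionError("condition not reached within %.1fs" % timeout)

    # In a test, instead of yield self.sleep(2):
    #   yield wait_for_condition(lambda: 80 in agent.opened_ports)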
<_mup_> ensemble/auto-dependency-resolution r215 committed by kapil.thangavelu@canonical.com
<_mup_> auto dependency resolver, solves for dependent formulas to be deployed (taking into account what's available in the environment), and new relations that need to be created.
<_mup_> ensemble/config-get r213 committed by bcsaller@gmail.com
<_mup_> test config_get communications w and w/o option_name
<_mup_> ensemble/config-get r214 committed by bcsaller@gmail.com
<_mup_> cleanup config_set testing method and related config_get->config test streamlining
<_mup_> ensemble/config-get r215 committed by bcsaller@gmail.com
<_mup_> test for name/service lookup functions on hook
<_mup_> ensemble/config-get r216 committed by bcsaller@gmail.com
<_mup_> prune unused method and exception
<_mup_> ensemble/config-get r217 committed by bcsaller@gmail.com
<_mup_> pep8
<kim0> morning everyone
<kim0> hmm the joined hook is still not in docs ? http://people.canonical.com/~niemeyer/ensemble/formula.html#hooks
<hazmat> kim0, ugh.. it should be
<hazmat> everyone's been heads down implementing
<hazmat> niemeyer, got a decent auto resolver implementation working
<niemeyer> hazmat: Oh, sweet!
<hazmat> niemeyer, there are some broken tests in trunk.. which concern me though
<hazmat> ./test -u ensemble.state.tests.test_service.ServiceStateManagerTest.test_watch_exposed_flag_waits_on_slow_callbacks
<hazmat> will end up hanging a terminal hard for me
<hazmat> still looking
<niemeyer> Huh
<hazmat> hmmm.. actually most of the slow watch callbacks can do a hang
<_mup_> ensemble/auto-dependency-resolution r216 committed by kapil.thangavelu@canonical.com
<_mup_> test plan, better logging, return formulas objects instead of formula names.
<hazmat> niemeyer, if you want to have a look at auto resolve.. its pretty much contained to one file. http://bazaar.launchpad.net/~hazmat/ensemble/auto-dependency-resolution/view/head:/ensemble/formula/resolver.py
<niemeyer> hazmat: Reading
<niemeyer> hazmat: {
<niemeyer>             "required_by": [(formula_name, dep_name, service_name)],
<niemeyer>             "provided_by": None}
<niemeyer> hazmat: This should be a proper type
<hazmat> niemeyer, yeah.. there's a todo at the top
<hazmat> to use named tuples for all the internal data structs
<hazmat> niemeyer, its very much in an early state ;-)
<niemeyer> hazmat: Yeah.. :)
<niemeyer> hazmat: It looks pretty good
<niemeyer> hazmat: As a hack :)
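The "proper type" / named tuples idea might look something like the sketch below; the field names come from the pasted dict, but the type names and the example values are made up, not what landed in resolver.py:

    from collections import namedtuple

    # One record per unresolved dependency edge in the resolver's plan.
    Requirement = namedtuple(
        "Requirement", "formula_name dep_name service_name")
    Dependency = namedtuple("Dependency", "required_by provided_by")

    dep = Dependency(
        required_by=[Requirement("wordpress", "db", "my-blog")],
        provided_by=None)  # filled in once a providing service is found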
<hazmat> its nice to remember what it's like
<niemeyer> hazmat: Gives an idea of the kind of trouble we're getting into for the full blown implementation
<hazmat> niemeyer, indeed, its a very nice exercise for that.
<niemeyer> hazmat: "depth" provides the wrong idea there
<niemeyer> hazmat: This is generally used for recursive algorithms
<hazmat> niemeyer, its tree depth for the resolution
<hazmat> niemeyer, indeed.. most dep graph solving is done as a dag
<niemeyer> hazmat: and it's going backwards
<hazmat> niemeyer, how so?
<niemeyer>         depth -= 1
<niemeyer> :)
<hazmat> niemeyer, yeah.. that should be cleaned up.. probably just fix the condition
<hazmat> and the name
<niemeyer> hazmat: Nice work man
<niemeyer> hazmat: It's fantastic we'll have something like that in place for experimenting with
<niemeyer> hazmat: Does it work? :-)
<hazmat> niemeyer, indeed it will be fun to show
<hazmat> niemeyer, i have no idea.. its anti-tdd ;-)
<hazmat> tbd
<niemeyer> hazmat: :-)
<jimbaker> kim0, http://people.canonical.com/~niemeyer/ensemble/formula.html is not current against trunk
<niemeyer> jimbaker: It's not?
<jimbaker> niemeyer, it's not. the clue for me was seeing "monothonically", which while pythonic in sound, is not a word ;)
<jimbaker> (i fixed that typo a while ago)
<hazmat> yeah.. trunk is indeed different 
<hazmat> by like several weeks it looks
<kim0> This hasnt been merged http://bazaar.launchpad.net/~jimbaker/ensemble/sandbox-trunk-r200/revision/200
<niemeyer> jimbaker: Oops.. :)
<jimbaker> kim0, not certain what you mean by that... that's a sandbox
<niemeyer> let me check that
<kim0> mm .. then I'm misunderstanding
<jimbaker> specifically i just needed something i could deploy on aws, and not knowing how to specify a revision in the bzr url, i just did it that way
<jimbaker> kim0, i will delete it since we are no longer trying to figure out what happened when we could no longer deploy to aws
<niemeyer> Hmm.. so the branch is up-to-date
<niemeyer> Ugh.. the docs are clearly not
<kim0> niemeyer: would be great if you'd merge my user docs branch too :)
<niemeyer> kim0: Yeah, I plan to handle that today still
<niemeyer> kim0: Thanks for the changes, btw
<kim0> cool yw
<kim0> writing a contributing your first formula doc now
<kim0> is using 'echo' inside formulas an acceptable way to communicate info with user, or should ensemble-log almost always be used ?
<kim0> coz principia templates use echo, if we don't like that, I'll change them all to ensemble-log
<niemeyer> jimbaker, kim0: Updated
<niemeyer> Should continue to update automatically now
<niemeyer> I've changed it so that it kills the previous version, rather than trying to compile just the difference
<niemeyer> I think something around that wasn't working properly
<kim0> why do we have a broken relation, but not established
<niemeyer> kim0: The best answer is that we haven't missed it
<kim0> :)
<kim0> that I'm sure of hehe
<niemeyer> kim0: In the case of broken, there are obvious things we can do when the service goes disconnected that there's no other place to do
<niemeyer> kim0: In the case of established, we have good alternatives, such as start
<niemeyer> kim0: But there's no inherent reason why we shouldn't have it
<kim0> ok makes sense
<niemeyer> kim0: If someone comes up with "Oh, I'd like to have established to handle that specific use case."
<niemeyer> kim0: We can easily add it
 * kim0 nods
<hazmat> there were handshaking difficulties as i recall, and some questions as to the meaning/utility of established without a join.
<hazmat> ie. a one sided relation
<hazmat> which join necessarily imparts it's not, thus serving as a valid point of establishment for two services to communicate
<hazmat> it can be added though
<hazmat> trippy.. ensemble/mine/watching-godot$ make
<hazmat> You've just watched the fastest build on earth.
<niemeyer> :-)
<hazmat> so gdb shows the hard lock in zk close, looks like something for upstream
<niemeyer> Our docs have some issues building apparently..
<niemeyer> Let me look at that
<niemeyer> hazmat: Hm
<niemeyer> hazmat: That brings me memories
<niemeyer> hazmat: I think I've heard about a locked zk close somewhen
<niemeyer> Argh
<niemeyer> Sphinx is doing pretty weird things :(
<niemeyer> It breaks a line with "control-bucket", but not one with "default-instance-type"
<niemeyer> Ok, no warnings anymore
<kim0> Our meeting kicking in 2 hours .. 
 * kim0 rings a little shiny bell
<niemeyer> :)
 * koolhead17 pokes kim0
<kim0> koolhead17: hey o/
<kim0> koolhead17: you've been hiding lately huh
<koolhead17> kim0: howdy? 
<kim0> All going good 
<kim0> you all good ?
<koolhead17> kim0: am awesome!!
<kim0> great :)
<koolhead17> just trying to not get distracted from real work during working hours in office :D
<koolhead17> kim0: and yes working on increasing my launchpad karma!! :D
<kim0> koolhead17: hehe
<kim0> koolhead17: played with ensemble yet ?
<koolhead17> kim0: am too occupied with some other natty-related stuff, even after office hours. after hitting my head against the dhcp server for 48 hours i filed a bug against apparmor stopping execution of dhcpd 
<kim0> hehe
<kim0> bugs can sure be fun
<koolhead17> kim0: yeah when you are deploying something new and you have no documentation supporting you :)
<koolhead17> hi hazmat TeTeT niemeyer
<kim0> what were you deploying
<koolhead17> cobbler :P
<kim0> a ha 
<kim0> koolhead17: did that include manual steps ?
<koolhead17> well i'm at the halfway mark, able to get PXE install via cobbler
 * kim0 wondering if koolhead17 should wrap his knowledge into a cobbler ensemble formula :)
<koolhead17> a few things in default.ks are hardcoded so now i have to bang my head against the wall tomorrow figuring out if i have to manually set up a local repo and rsync it
<kim0> while cobbler is not really a cloud service, I can still see value
<koolhead17> kim0: cobbler is for cloud 4 sure with koan :D
<kim0> oh! cool :)
<kim0> at least when openstack is a deploy target for ensemble, it should definitely make sense IMO
<niemeyer> koolhead17: Hey!
<niemeyer> hazmat: Any luck on the lockup problem?
<koolhead17> kim0: you will be surprised. cobbler has an awesome GUI interface for everything but still i simply followed the command line option :D
<koolhead17> i found GUI too confusing :)
<kim0> koolhead17: yeah I'm kinda like that as well .. gui is for sissies :)
<koolhead17> hehe
<koolhead17> thaks to "zul"  blog
<koolhead17> *thanks
<kim0> koolhead17: I've written a largish user level doc for ensemble .. check it out (all the green block text at the end) https://code.launchpad.net/~kim0/ensemble/user-tutorial-and-FAQ/+merge/58861
<kim0> last time you wanted to get kickstarted I remember
<koolhead17> kim0: yup
<hazmat> niemeyer, no, got some lunch
<koolhead17> kim0: are you going for that developer summit?
<koolhead17> ubuntu
<niemeyer> hazmat: Sounds like a good plan, actually
<niemeyer> I'll get some food too
<koolhead17> kim0: will be back on this documentation in a few hours, need to learn some "expect" scripting
<kim0> koolhead17: yeah I'm going .. should meet Murthy
<koolhead17> that be great!!
<niemeyer> Ok, actually leaving for lunch now
<hazmat> interesting virtualenv seems to strip debugging symbols
<jimbaker> meeting in #ubuntu-cloud
<jimbaker> kim0, i assume we are kicking off now, right?
<kim0> yep
<niemeyer> hazmat: Do you have a moment for a chat?
<hazmat> niemeyer, sure
<niemeyer> Skype or mumble?
<jimbaker> bcsaller, do you want to join the weekly cloud meeting at #ubuntu-cloud ?
<jimbaker> just brought you up
<hazmat> niemeyer, skype
<niemeyer> hazmat: Ok
<niemeyer> hazmat: 
<niemeyer> +    def do_retry_start(self, fire_hooks=True):
<niemeyer> +        return self._invoke_lifecycle(
<niemeyer> +            self._lifecycle.start, fire_hooks=fire_hooks)
<hazmat> bcsaller, jimbaker standup?
<jimbaker> hazmat, i was just about to ask the same thing
<jimbaker> let me start up skype
<bcsaller> sure
<niemeyer> jimbaker: map["open"].append({"port": ...})
<niemeyer> jimbaker: ?
<niemeyer> jimbaker: 
<niemeyer> def expose_port(content, ...):
<niemeyer>     data = yaml.loads(content)
<niemeyer>     if not data:
<niemeyer>         data = {"open": []}
<niemeyer>     if port not in data["open"]:
<niemeyer>         data["open"].append(port)
<niemeyer>     return yaml.dumps(data)
<niemeyer> jimbaker: the beginning of that should handle empty content as well
<niemeyer> jimbaker: data = content and yaml.loads(content)
<niemeyer> jimbaker: That should handle it
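The snippet above is IRC pseudocode; a runnable version, under stated assumptions, might look like the sketch below. It assumes PyYAML (safe_load/safe_dump standing in for the shorthand yaml.loads/yaml.dumps) and adds the port parameter the pseudocode leaves implicit; the function name and node format are illustrative, not the final API.

    # Hedged sketch: PyYAML-based version of the pseudocode above.
    import yaml


    def expose_port(content, port):
        """Return new node content with `port` added to the open-ports list."""
        data = yaml.safe_load(content) if content else None
        if not data:
            data = {"open": []}
        if port not in data["open"]:
            data["open"].append(port)
        return yaml.safe_dump(data)


    # Empty/initial content is handled, and ports are not duplicated:
    print(expose_port("", 80))
    print(expose_port(expose_port("", 80), 80))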
<_mup_> ensemble/watching-godot r216 committed by kapil.thangavelu@canonical.com
<_mup_> reliable slow watch testing
<_mup_> ensemble/config-get r218 committed by bcsaller@gmail.com
<_mup_> docstring cleanups
<_mup_> ensemble/config-set r215 committed by bcsaller@gmail.com
<_mup_> resolve merge
<niemeyer> hazmat: I've added only that single item we discussed to the review of unit-agent-resolved
<niemeyer> hazmat: It turned out that all of the other issues I had (untested paths, docs missing) are likely going to be rendered irrelevant if you change that
<niemeyer> bcsaller: Is config-get up for review again, or did I forget to move it to WIP?
<bcsaller> gustavo: it's up again, the changes should be pushed
<niemeyer> bcsaller: Cool, thanks
<_mup_> ensemble/unit-agent-resolved r270 committed by kapil.thangavelu@canonical.com
<_mup_> remove passing action transition/state variables
<niemeyer> "Your trip to Budapest, Hungary is about to begin"
<niemeyer> TripIt is slightly nervous about trips I see
<_mup_> ensemble/service-config-unit-lifecycle r217 committed by kapil.thangavelu@canonical.com
<_mup_> fix up todo comments for post resolved merge work
<hazmat> so we need a newer version of txaws then is in natty it appears..
<hazmat> oh.. nm
<niemeyer> bcsaller: There are still missing tests in config-get
<niemeyer> bcsaller: E.g.
<niemeyer> +    def get_formula_state(self):
<niemeyer> bcsaller: TDD would be very helpful in avoiding that kind of problem
<bcsaller> yeah, I wrote the tests for the higher levels and then filled in methods to make them pass; I must not have filled in the missing ones. In reviewing the patch pre-push I didn't even notice though
<niemeyer> bcsaller: That's not quite TDD
<niemeyer> bcsaller: TDD is fine-grained.. if you have to write several public methods to make a single test pass, the test is probably not fine-grained enough
<bcsaller> good advice 
<niemeyer> bcsaller: Whenever you're writing something, and you figure you need support in another area of the application, that other area should *also* be done with TDD
<niemeyer> bcsaller: Then, before pushing it for review, it's generally good practice to be the reviewer of your own code
<niemeyer> bcsaller: Actively looking for leftovers, untested areas, etc
<niemeyer> bcsaller: +        config_state = yield self._setup_config_state()
<niemeyer> +        yield config_state.write()
<niemeyer> bcsaller: Untested as well
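For illustration only (this is not Ensemble code), the fine-grained cycle being described reads roughly as below: one small failing test first, then just enough implementation to make it pass, so an unimplemented method like get_formula_state cannot slip through unnoticed.

    import unittest


    class Counter(object):
        def __init__(self):
            self.value = 0

        def increment(self):
            # written only after test_increment_adds_one existed and failed
            self.value += 1


    class CounterTest(unittest.TestCase):
        def test_increment_adds_one(self):
            counter = Counter()
            counter.increment()
            self.assertEqual(counter.value, 1)


    if __name__ == "__main__":
        unittest.main()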
<_mup_> ensemble/trunk r217 committed by kapil.thangavelu@canonical.com
<_mup_> merge service-config-unit-lifecycle [r=niemeyer][f=776596]
<_mup_> unit lifecycle and workflow work to enable config-changed hooks.
<niemeyer> bcsaller: config-get reviewed.
<bcsaller> thanks, I'll look for those places and test them
<niemeyer> Quick break
<_mup_> ensemble/watching-godot r217 committed by kapil.thangavelu@canonical.com
<_mup_> revert changes to watch expose to yield to the first invocation, just fix the tests.
<koolhead17> kim0: ping
<_mup_> Bug #777421 was filed: slow watch tests are unreliable <Ensemble:New> < https://launchpad.net/bugs/777421 >
<hazmat> jimbaker, niemeyer a small branch fix for slow watcher trunk tests is available 
<niemeyer> hazmat: Super!
<hazmat> it was probably a trivial
<jimbaker>  hazmat, good to hear
<niemeyer> Time for a haircut
<_mup_> ensemble/unit-agent-resolved r271 committed by kapil.thangavelu@canonical.com
<_mup_> separate transitions for retry with hook
<jimbaker> niemeyer, now i recall more fully why we want this to be a YAMLState for open ports (and why it will take more time)
<jimbaker> the problem is that we want open-port/close-port to participate in the same flush as the overall HookContext
<jimbaker> this way we can ensure that hooks that open/close ports have all-or-nothing semantics corresponding to their exit status code
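A minimal sketch of the all-or-nothing idea described above, with hypothetical names (this is not the branch's implementation): open-port/close-port only record intent, and a single flush applies every recorded change, so a hook that fails before the flush leaves the ports untouched.

    class PortContext(object):
        """Hypothetical sketch: buffer port changes until a single flush."""

        def __init__(self):
            self._pending = []

        def open_port(self, port, proto="tcp"):
            self._pending.append(("open", port, proto))

        def close_port(self, port, proto="tcp"):
            self._pending.append(("close", port, proto))

        def flush(self, open_ports):
            """Apply all recorded changes to `open_ports` in one pass."""
            for action, port, proto in self._pending:
                entry = {"port": port, "proto": proto}
                if action == "open" and entry not in open_ports:
                    open_ports.append(entry)
                elif action == "close" and entry in open_ports:
                    open_ports.remove(entry)
            self._pending = []
            return open_ports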
<_mup_> ensemble/unit-agent-resolved r272 committed by kapil.thangavelu@canonical.com
<_mup_> merge trunk resolve conflict.
<jimbaker> hazmat, funny, re bug 777421, i was curious about the whole poke_zk and whether that would be useful or not. not, i guess.
<_mup_> Bug #777421: slow watch tests are unreliable <Ensemble:In Progress by hazmat> < https://launchpad.net/bugs/777421 >
<_mup_> ensemble/trunk r218 committed by kapil.thangavelu@canonical.com
<_mup_> merge watching-godo [r=niemeyer][f=777421]
<_mup_> trivial fix to slow watch callback tests of the service unit api in order
<_mup_> to reliably run when looped.
<_mup_> ensemble/trunk-merge r190 committed by kapil.thangavelu@canonical.com
<_mup_> merge trunk
<_mup_> ensemble/resolved-state-api r203 committed by kapil.thangavelu@canonical.com
<_mup_> merge trunk conflict
<_mup_> ensemble/unit-agent-resolved r273 committed by kapil.thangavelu@canonical.com
<_mup_> resolve merge conflict
<bcsaller> short break
<_mup_> ensemble/unit-agent-resolved r274 committed by kapil.thangavelu@canonical.com
<_mup_> expand out additional recovery transition actions
<_mup_> ensemble/expose-provisioning r249 committed by jim.baker@canonical.com
<_mup_> Do not observe private state directly in tests and fix bad yield
<_mup_> ensemble/expose-provisioning r250 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<jimbaker> hazmat, when is poke_zk still appropriate, given that it is still in the tests? certainly it looks innocuous
<hazmat> jimbaker, when a round trip to zk is all that's needed to process callbacks, if you have long  running background activity it won't be
<jimbaker> ok, so for ordinary watches, poke_zk is fine, just the specific slow watches cause problems here
<hazmat> ie. using principles of global ordering, we know that there shouldn't be any additional activity.. it might be useful to extend that to a more convoluted poke (watch/set/callback)
<hazmat> jimbaker, pretty much, but if the watch callback is doing additional zk interaction its dicey
<hazmat> if it's a test-constructed callback, it's generally fine
<jimbaker> hazmat, ahh, that certainly limits applicability in my recent work, good way to think about it
<jimbaker> cool, applicable in only one case in my expose-provisioning branch, but that does help
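A rough illustration of what a ZooKeeper "poke" amounts to, assuming a txzookeeper-style asynchronous client; this sketches the idea and is not necessarily the project's actual helper. The guideline from the discussion: one poke per piece of background zk activity (a get/get_children/exists that establishes a watch), and it is not sufficient when the watch callback itself performs further zk interaction.

    from twisted.internet.defer import inlineCallbacks


    @inlineCallbacks
    def poke_zk(client):
        # One full request/response round trip to the server. Replies and
        # watch events arrive in order on the client connection, so callbacks
        # for watches established before the poke have been delivered by the
        # time the exists() reply comes back.
        yield client.exists("/")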
<_mup_> ensemble/expose-provisioning r251 committed by jim.baker@canonical.com
<_mup_> Sleep instead of poke for slow watch on openeded ports test
<_mup_> ensemble/expose-provisioning r252 committed by jim.baker@canonical.com
<_mup_> Sleep adjustment fu
#ubuntu-ensemble 2011-05-05
<_mup_> ensemble/expose-provisioning r253 committed by jim.baker@canonical.com
<_mup_> Make explicit that a couple of methods are only to be used for testing
<_mup_> ensemble/unit-agent-resolved r275 committed by kapil.thangavelu@canonical.com
<_mup_> complete the change over to more transitions, fill out coverage
<_mup_> ensemble/unit-agent-resolved-part-two r274 committed by kapil.thangavelu@canonical.com
<_mup_> merge and resolve conflict
<_mup_> ensemble/unit-agent-resolved-part-two r274 committed by kapil.thangavelu@canonical.com
<_mup_> redo merge resolve conflict, grabbed the wrong resolved file
<niemeyer> Yo!
<niemeyer> hazmat: ping
<hazmat> niemeyer, pong
<niemeyer> hazmat: Yo
<niemeyer> hazmat: I'm having some trouble reviewing the new unit-agent-resolved, for lack of a good base to compare it with
<niemeyer> hazmat: ensemble-resolved, which is the pre-requisite, doesn't include the trunk yet
<niemeyer> hazmat: the trunk doesn't include ensemble-resolved
<niemeyer> hazmat: and unit-agent-resolved contains both
<hazmat> niemeyer, no.. it doesn't, unit-agent-resolved is after ensemble-resolved in my pipeline
<niemeyer> hazmat: So no matter what's the base, there are cross-changes
<niemeyer> hazmat: Yes it is, thus, it includes the changes from it
<hazmat> niemeyer, the base is ensemble-resolved
<niemeyer> hazmat: Yes it is
<hazmat> i need to push my latest which merges trunk
<hazmat> of ensemble-resolved
<hazmat> if you want to utilize it as a base
<niemeyer> hazmat: unit-agent-resolved contains both the latest from trunk and the latest from ensemble-resolved
<hazmat> niemeyer, yes
<niemeyer> hazmat: I can't compare it with either without getting irrelevant changes
<niemeyer> hazmat: (in the diff)
<hazmat> niemeyer, the diff to ensemble-resolved looked fine to me
<niemeyer> hazmat: The solution is to handle the ensemble-resolved merge on trunk, or to update it
<niemeyer> -html_static_path = ['_static']
<niemeyer> +#html_static_path = ['_static']
<niemeyer> hazmat: That's part of that diff
<niemeyer> hazmat: (and a lot of other things from trunk)
<hazmat> pushing the latest ensemble-resolved
<niemeyer> hazmat: That should handle it if you have trunk merged on it, thanks
<hazmat> niemeyer, let me know if that fixes the issue
<hazmat> its pushed
<hazmat> that's current with trunk
<niemeyer> hazmat: Yep, that did it, thank you
<hazmat> niemeyer, cool, np
<niemeyer> Phew, 2 to go
<niemeyer> Tomorrow, though
<_mup_> ensemble/config-set r216 committed by bcsaller@gmail.com
<_mup_> checkpoint
<_mup_> ensemble/auto-dependency-resolution r217 committed by kapil.thangavelu@canonical.com
<_mup_> flesh out some tests using a new test ease of use api.
<_mup_> ensemble/expose-provisioning r254 committed by jim.baker@canonical.com
<_mup_> Test added to verify opened port parsing
<kim0> morning folks
<_mup_> ensemble/config-get r219 committed by bcsaller@gmail.com
<_mup_> use get_formula_state in get_relation_endpoints, removes duplicate code
<_mup_> ensemble/config-get r220 committed by bcsaller@gmail.com
<_mup_> rename hook's local_* methods as get_local_*
<_mup_> expanded test coverage
<_mup_> ensemble/config-get r221 committed by bcsaller@gmail.com
<_mup_> merge trunk
<_mup_> ensemble/trunk r219 committed by bcsaller@gmail.com
<_mup_> Merge config-get [r=niemeyer] [f=776605]
<_mup_> Merge support for config-get cli tool for use in hooks. This makes local service options available within hooks.
<kim0> config-get lovely :)
<_mup_> Bug #777816 was filed: Ensemble should resolve dependencies upon deploy <Ensemble:In Progress by hazmat> < https://launchpad.net/bugs/777816 >
<niemeyer> hazmat: Morning!
<hazmat> niemeyer, morning
<kim0> niemeyer: hazmat morning 
<hazmat> kim0, top of the morning
<kim0> hehe
<hazmat> morning always seems a bit arbitrary given our globally distributed nature
<kim0> Yeah .. maybe we should get a bot for translating timezone salutations
<hazmat> from a symbolism perspective, morning is nice though, expressing hope and joy at the beginning of a new day
<hazmat> hmm.. puppet's changing licensing to apache
<kim0> so how do I have to bribe to merge my doc branches :)
<kim0> s/how/who/
<hazmat> kim0, i'll do it now
<hazmat> niemeyer, do you know if the bzr --author flag on commit takes  a full name or an lp name?
<niemeyer> hazmat: I suppose it takes a full name
<niemeyer> hazmat: Or, actually
<niemeyer> hazmat: I think this is just metadata
<hazmat> niemeyer, it is just metadata, but it's displayed on lp
<hazmat> niemeyer, the gopher ppa has the latest go packages?
<niemeyer> hazmat: Displayed as is, I think
<niemeyer> hazmat: It does
<niemeyer> hazmat: ppa:gophers/go
<hazmat> niemeyer, thanks
<niemeyer> hazmat: np
<niemeyer> Actually, I have to send those packages again.. some third-party libraries got included by mistake
<_mup_> ensemble/trunk r220 committed by kapil.thangavelu@canonical.com
<_mup_> merge kim0's new user guide and faq [a=kim0][r=niemeyer][f=734975]
<_mup_> Provides a tutorial within the documentation taking the user through
<_mup_> the process of getting started with Ensemble as well as an FAQ.
<hazmat> kim0, done, and you're noted as the author in lp and bzr history
<hazmat> kim0, thanks again, that was much needed
<_mup_> ensemble/auto-dependency-resolution r218 committed by kapil.thangavelu@canonical.com
<_mup_> additional test for solving deps from both environment and formula repos.
<hazmat> doing some bug milestone management, sorry about the noise
<_mup_> ensemble/resolved-state-api r204 committed by kapil.thangavelu@canonical.com
<_mup_> merge trunk
<hazmat> niemeyer, should i change the relation resolved api now (it doesn't need the retry flag).. or just wait, since the entire stack uses the api..
<kim0> hazmat: thanks man
<hazmat> kim0, np
<niemeyer> hazmat: Sorry, ECONTEXT
<niemeyer> hazmat: What's that about again?
<hazmat> niemeyer, sure.. the set_relation_resolved(relation_map) really just needs to be set_relation_resolved(relation_list) .. since retry hook has no meaning 
<hazmat> hmm.. i think i suggested that i just make the cli barf in this case
<hazmat> i guess that api has other pending cleanups as well though
<niemeyer> hazmat: Do we plan to make it retry in the future?
<hazmat> niemeyer, hmm.. good question
<hazmat> niemeyer, it's conceivable.. but not necessarily correct
<hazmat> a hook execution may be obsolete in a relation context 
<hazmat> by the time we want to retry it
<hazmat> i'll go ahead with the cli fix, and note it as something that needs adjustment as part of the state request protocol work
<hazmat> hmm.
<hazmat> i guess i can try it and see how much work it really is
<hazmat> niemeyer, thoughts on retry?
<hazmat> uds sessions are posted
<koolhead17> hi all
<kim0> hey
<koolhead17> kim0: am at https://code.launchpad.net/~kim0/ensemble/user-tutorial-and-FAQ/+merge/58861  will buzz you later :)
<kim0> koolhead17: it's been merged .. http://people.canonical.com/~niemeyer/ensemble/user-tutorial.html
<koolhead17> ok :)
<niemeyer> hazmat: Sorry, was answering that message in the cloud ML
<niemeyer> hazmat: Hmm
<niemeyer> hazmat: So, let's try to come up with a scenario
<niemeyer> hazmat: DNS goes off for whatever reason
<niemeyer> hazmat: db-relation-changed explodes
<niemeyer> hazmat: Intended change done by the other side of the relation isn't acted upon
<niemeyer> hazmat: No other changes are done
<niemeyer> hazmat: DNS goes live
<niemeyer> hazmat: ensemble resolved unit/0 db
<niemeyer> hazmat: Change is lost
<hazmat> niemeyer, yes
<hazmat> niemeyer, we could capture enough information to retry the hook
<hazmat> but then say unit/0 departs
<hazmat> should we be retrying a hook for something which doesn't exist anymore?
<niemeyer> hazmat: How would that take place?
<niemeyer> hazmat: resolved shouldn't work on a relation which doesn't exist (!?)
<hazmat> niemeyer, the relation exists but the remote peer that caused the change has departed
<hazmat> s/peer/unit
<niemeyer> hazmat: Hmm
<niemeyer> hazmat: Thinking
<hazmat> some of this might be clearer if we do on-disk persistence and recovery, since the merge of last-known to current state is applicable
<niemeyer> hazmat: Hmm.. maybe
<hazmat> niemeyer, i'll leave the api as is for now
<niemeyer> hazmat: Sounds good
<niemeyer> hazmat: Can you please [trivial] a comment explaining that debate?
<niemeyer> hazmat: Just enough to give us context for following up on that conversation at a later time
<hazmat> niemeyer, sure in the commit, merge proposal, bug report, code ? 
<niemeyer> hazmat: Code feels more visible.. next to the API which we should change
<hazmat> niemeyer, sounds good
<niemeyer> hazmat: Thanks
<_mup_> ensemble/expose-provisioning r255 committed by jim.baker@canonical.com
<_mup_> Merged trunk & resolved conflicts
<_mup_> ensemble/expose-dummy-provider r217 committed by jim.baker@canonical.com
<_mup_> Merged trunk
 * hazmat lunches
<_mup_> ensemble/expose-dummy-provider r218 committed by jim.baker@canonical.com
<_mup_> Use fail/succeed instead of inlineCallbacks for dummy methods w/o internal yield needs
<niemeyer> hazmat: Enjoy
<_mup_> ensemble/expose-provisioning r256 committed by jim.baker@canonical.com
<_mup_> Merged upstream
<niemeyer> One branch pending
<niemeyer> jimbaker: Will review your lunch after lunch
<niemeyer> Erm
<niemeyer> jimbaker: Will review your branch after lunch
<niemeyer> "Will review your lunch after branch" would be nice too
<jimbaker> niemeyer, i believe i'm to have some sort of chicken soup to deal with my cold symptoms... i will see what my mother-in-law prepares ;)
<niemeyer> jimbaker: ;-)
<_mup_> ensemble/trunk r221 committed by jim.baker@canonical.com
<_mup_> merge expose-dummy-provider [r=niemeyer][f=766241]
<_mup_> Adds dummy provider support for port opening/closing.
<_mup_> ensemble/expose-provisioning r257 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<_mup_> ensemble/expose-hook-commands r249 committed by jim.baker@canonical.com
<_mup_> CLI commands
<_mup_> ensemble/expose-hook-commands r250 committed by jim.baker@canonical.com
<_mup_> Merged upstream & resolved conflicts
<_mup_> ensemble/resolved-state-api r205 committed by kapil.thangavelu@canonical.com
<_mup_> document set_relation_resolved api todo discussion
<_mup_> ensemble/resolved-state-api r206 committed by kapil.thangavelu@canonical.com
<_mup_> doc string cleanups per review.
<_mup_> ensemble/trunk r222 committed by kapil.thangavelu@canonical.com
<_mup_> merge resolved-state-api [r=niemeyer][f=767762]
<_mup_> Provides a get/set/clear/watch state api for marking unit errors as resolved.
<_mup_> ensemble/trunk-merge r191 committed by kapil.thangavelu@canonical.com
<_mup_> merge trunk
<_mup_> ensemble/ensemble-resolved r259 committed by kapil.thangavelu@canonical.com
<_mup_> update code comments per review comment
<_mup_> ensemble/trunk r223 committed by kapil.thangavelu@canonical.com
<_mup_> merge ensemble-resolved [r=niemeyer][f=767948]
<_mup_> Provides a new ensemble resolved subcommand for marking unit errors
<_mup_> as resolved.
<niemeyer> Alright, last branch in review!
<hazmat> niemeyer, i've got some comments on the unit-agent-resolved review that i would like to discuss
<hazmat> already attached to the review
<niemeyer> hazmat: Ok.. I'll need some time to go over this review I'm doing
<_mup_> ensemble/unit-agent-resolved r279 committed by kapil.thangavelu@canonical.com
<_mup_> address review comments re formatting
<niemeyer> jimbaker: I really wish this branch had been pushed in a more fine-grained way
<niemeyer> I'm not sure I'll be able to get through it today
<jimbaker> niemeyer, there is an obvious split between the service changes and the specific support for provisioning
<jimbaker> although the service changes are all very much boilerplate cut & paste
<jimbaker> however, if you want me to split the branch in that way, i can certainly do so
<niemeyer> jimbaker: Yeah, hundreds of lines of boilerplate ;)
<niemeyer> jimbaker: That'd take even more time
<jimbaker> niemeyer, i don't like any of it, but hopefully post budapest it can be removed
<niemeyer> jimbaker: The idea is to not get into this state
<niemeyer> jimbaker: As you said, there are several split points..
<niemeyer> jimbaker: Pushing such a huge branch 3 days before UDS isn't good
<hazmat>  niemeyer, jimbaker, bcsaller test api https://pastebin.canonical.com/47175/
<hazmat> i'm implementing parts as part of the dependency resolver work
<hazmat> most of it is cribbed from other test helpers we have around
<_mup_> ensemble/trunk r224 committed by kapil.thangavelu@canonical.com
<_mup_> merge unit-agent-resolved [r=niemeyer][f=767961]
<_mup_> Provides a unit lifecycle support for processing resolved unit relations.
 * niemeyer => break
<_mup_> ensemble/unit-agent-resolved-part-two r279 committed by kapil.thangavelu@canonical.com
<_mup_> update unit agent to use correct transition depending on retry setting.
<_mup_> ensemble/trunk r225 committed by kapil.thangavelu@canonical.com
<_mup_> merge unit-agent-resolved-part-two [r=niemeyer][f=776014]
<_mup_> Provides unit agent support for transitioning resolved units and unit relations.
<_mup_> ensemble/debug-hook-scope-guard r211 committed by kapil.thangavelu@canonical.com
<_mup_> merge trunk
<_mup_> Bug #778134 was filed: debug hook seems to have lost hook cli context when using hook api <Ensemble:New> < https://launchpad.net/bugs/778134 >
<hazmat> we really do need to have ensemble changed in the environment
<hazmat> or use byobu.. it's hard to tell what the debug session is actively working on otherwise
<hazmat> sigh.. we need more release testing, or feature freezes before a release. so many new features this release, i'm seeing a few behavior regressions
<jimbaker> hazmat, agreed - i haven't fired up AWS recently since i'm working on stuff that just needs unit testing, *for now*
<jimbaker> but experience has shown repeatedly that we don't anticipate enough in our unit tests
<_mup_> ensemble/debug-hook-scope-guard r212 committed by kapil.thangavelu@canonical.com
<_mup_> switch to using byobu
 * niemeyer_break waves
<_mup_> ensemble/trunk r226 committed by kapil.thangavelu@canonical.com
<_mup_> merge debug-hook-scope-guard [r=niemeyer][f=776014]
<_mup_> Enables live hook debugging via screen/byobu, with better start/end condition handling.
<_mup_> ensemble/expose-hook-commands r251 committed by jim.baker@canonical.com
<_mup_> Hook command skeleton, plus parse_port_protocol for that arg
 * niemeyer => dinner
#ubuntu-ensemble 2011-05-06
<_mup_> ensemble/config-set r218 committed by bcsaller@gmail.com
<_mup_> options validation
<_mup_> ensemble/expose-provisioning r258 committed by jim.baker@canonical.com
<_mup_> Partial fix to remove more state in the provisioning agent in favor of just reading it from ZK
<_mup_> ensemble/config-set r219 committed by bcsaller@gmail.com
<_mup_> support optional config files
<_mup_> ensemble/expose-provisioning r259 committed by jim.baker@canonical.com
<_mup_> Handle scenario where services or service units disappear from topology while applying port changes
<hazmat> jimbaker, i'm working on cleaning up the watches on trunk now, fwiw, the best rule of thumb for poke_zk seems to be: use it when you know a watch is being established in the background
<jimbaker> hazmat, i was just playing with that
<jimbaker> it seemed like poke_zk + sleep was effective
<_mup_> ensemble/expose-provisioning r260 committed by jim.baker@canonical.com
<_mup_> Do not keep port policy in memory, read it from ZK every time
<hazmat> jimbaker, basically you can use one poke per other zk activity you know is happening in the background
<jimbaker> hazmat, makes sense
<hazmat> ie, one per client get/get_children or exists call, etc.
<_mup_> ensemble/config-set r220 committed by bcsaller@gmail.com
<_mup_> add get_serialization_data
<_mup_> ensemble/watching-godot-redux r227 committed by kapil.thangavelu@canonical.com
<_mup_> all the watch state protocol apis now wait on their initial callback before returning.
<_mup_> ensemble/watching-godot-redux r228 committed by kapil.thangavelu@canonical.com
<_mup_> apply the wait on initial invocation behavior to the debug log watch
<_mup_> ensemble/watching-godot-redux r229 committed by kapil.thangavelu@canonical.com
<_mup_> make lifecycle start fire after initial watch callbacks
<_mup_> ensemble/watching-godot-redux r230 committed by kapil.thangavelu@canonical.com
<_mup_> excise any magic waiting for lifecycle background activity
<_mup_> ensemble/config-set r221 committed by bcsaller@gmail.com
<_mup_> ConfigOptions available off formula directory/bundle
<_mup_> ensemble/watching-godot-redux r231 committed by kapil.thangavelu@canonical.com
<_mup_> excise extraneous sleeps from lifecycle tests.
<_mup_> ensemble/watching-godot-redux r232 committed by kapil.thangavelu@canonical.com
<_mup_> fix the test to account for valid sequence variation across multiple join events/
<_mup_> ensemble/watching-godot-redux r233 committed by kapil.thangavelu@canonical.com
<_mup_> excise the last sleep from the workflow tests
<_mup_> ensemble/watching-godot-redux r234 committed by kapil.thangavelu@canonical.com
<_mup_> excise last extraneous sleep from lifecycle tests.
<_mup_> ensemble/config-set r222 committed by bcsaller@gmail.com
<_mup_> ConfigOptions.load can also take a dict as source data allowing easier initialization in FormulaState
<_mup_> ensemble/config-set r223 committed by bcsaller@gmail.com
<_mup_> persistent watch on config state
<_mup_> ensemble/config-set r224 committed by bcsaller@gmail.com
<_mup_> typo
<_mup_> ensemble/trunk-2 r222 committed by bcsaller@gmail.com
<_mup_> merge trunk
<hazmat> g'morning
<niemeyer> hazmat: yo!
<hazmat> niemeyer, fixed most of the watches last night.. feels good to remove like 50 sleeps and artificial stops
<niemeyer> hazmat: Wow, seriously?
<hazmat> niemeyer, indeed
<hazmat> it's slow going though.. have to run about a hundred iterations per test to verify
<niemeyer> hazmat: So most of that crap was actually bound to the initial invocation derailing the deferred chain, rather than real background activity?
<hazmat> niemeyer, yup
<niemeyer> Aw
<hazmat> niemeyer, some are for legitimate background activity
<hazmat> like the watch firing on state change
<niemeyer> Right
<hazmat> hmm
<hazmat> niemeyer, there's an interesting new sequencing behavior with the new hook semantics.. re ordering.. currently we don't allow upgrades unless we're in a running state; if an upgrade flag is set when the unit is not running, we'll end up ignoring it. it looks like we need to check it again once we're in the running state.
<hazmat> not really about sequencing, just that made it more apparent
<niemeyer> hazmat: Ah, oops
<niemeyer> hazmat: But won't the upgrade fail?
<hazmat> niemeyer, the upgrade won't happen, it exits early.. but even though we might then have an upgrade flag set, we won't have another watch change triggering the upgrade behavior again when we're in the started state
<hazmat> ie. the unit would eventually be running with an upgrade flag set
<hazmat> currently we clear the flag if we're not in the running state
<niemeyer> hazmat: I don't think I get it still.. if we disallow the upgrade flag from being set when the unit is not running, how can the upgrade flag be set when the unit is not running?
<hazmat> but its not clear that's the right behavior
<hazmat> niemeyer, good point.. i was just considering it from the agent side, where really it should be prepared for the unit to be in any state
<niemeyer> hazmat: Admittedly, there's a race condition there, though
<niemeyer> hazmat: Between checking the state and setting the flag
<hazmat> niemeyer, and the flag being acted upon by the agent
<hazmat> which is why the agent checks separately
<hazmat> i'll go back to clearing it, its probably the only sane thing to do, and the user can request an upgrade again
<hazmat> in the future with protocol abstractions we could introduce other things to enable the flag persisting till a running state
<niemeyer> hazmat: Hmm.. I don't think there's a race about that
<hazmat> niemeyer, the upgrade flag is set, an unrelated hook fails, the unit needs resolved
<niemeyer> hazmat: Or maybe I don't see what you mean
<niemeyer> hazmat: Ah, ok.. yes
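A minimal sketch of the behaviour being settled on above, with hypothetical accessor and transition names: the agent acts on the upgrade flag only when the workflow is already running, and otherwise clears it so the user can request the upgrade again once the unit recovers.

    from twisted.internet.defer import inlineCallbacks


    @inlineCallbacks
    def process_upgrade_flag(unit_state, workflow):
        upgrade_requested = yield unit_state.get_upgrade_flag()   # hypothetical accessor
        if not upgrade_requested:
            return
        state = yield workflow.get_state()
        if state == "started":
            yield workflow.fire_transition("upgrade_formula")     # hypothetical transition
        else:
            yield unit_state.clear_upgrade_flag()                 # hypothetical accessor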
<niemeyer> Will get some lunch.. biab
<_mup_> ensemble/watching-godot-redux r235 committed by kapil.thangavelu@canonical.com
<_mup_> process upgrade in the correct sequence during unit lifecycle startup
<_mup_> ensemble/watching-godot-redux r236 committed by kapil.thangavelu@canonical.com
<_mup_> cleanup pep8 violations from trunk
<_mup_> ensemble/watching-godot-redux r237 committed by kapil.thangavelu@canonical.com
<_mup_> reintroduce some sleeps that where part of error checking (giving a chance for an error to occur).
<hazmat> hmm.. changing the topology watches breaks a lot of test assumptions
<_mup_> ensemble/watching-godot-redux r238 committed by kapil.thangavelu@canonical.com
<_mup_> update machine manager watch machine states to wait till current state is processed.
<_mup_> ensemble/watching-godot-redux r239 committed by kapil.thangavelu@canonical.com
<_mup_> test confirming environment watch processes initial environment as part of watch creation method.
<niemeyer> hazmat: You mean with the fixes for them to not go off alone?
<hazmat> niemeyer, econtext
<niemeyer> hazmat: Re. your previous message above
<hazmat> niemeyer, i'm also implementing this for the topology watches, some of them are already structured so they require a change
<hazmat> before they fire the callback
<hazmat> i'm just writing tests that verify, in the same way as for the other watches, that the current state has been processed before the watch method returns.
<hazmat> oh.. i was wrong about breaking assumptions
<hazmat> they're fine.. i had an unrelated issue
<niemeyer> Ah, ok
<_mup_> ensemble/watching-godot-redux r240 committed by kapil.thangavelu@canonical.com
<_mup_> fix pep8 violations from trunk
<_mup_> ensemble/watching-godot-redux r241 committed by kapil.thangavelu@canonical.com
<_mup_> test current state processing test for machine.watch_service_units
<_mup_> ensemble/watching-godot-redux r242 committed by kapil.thangavelu@canonical.com
<_mup_> test verifying current state processing of service.watch_relation_states
<niemeyer> "preferred_email_address_link":"https://api.launchpad.net/1.0/~user/+email/the@email.net"
<niemeyer> Interesting way to present emails in the API
<niemeyer> Object: <canonical.launchpad.systemhomes.WebServiceApplication object at 0x7adc110>, name: u'+me'
<niemeyer> Interesting data to return in the body of an API request too :)
<_mup_> ensemble/watching-godot-redux r243 committed by kapil.thangavelu@canonical.com
<_mup_> test for watch upgrade proceses current state
<jimbaker> hazmat, so for the provisioning agent test, i added superpoke, which by default, pokes zk 10 times, sleeps 0.1s, pokes zk 10 more times
<hazmat> jimbaker, ugh.
<jimbaker> there's probably a more minimal version as you illustrated w/ your godot-redux
<jimbaker> but it works well now - i'm getting a reliable looped test at last
<hazmat> jimbaker, in the current godot-redux every sleep is documented with why it's there.
<hazmat> jimbaker, cool
<jimbaker> hazmat, it certainly sounds like the right way to do it, although it's not clear to me how this can be done w/o being whitebox about knowing what the watch setup is, which could be quite complex in the case i have
<jimbaker> since the watches are definitely timing dependent
<jimbaker> to know more, i have to look at the logs more closely to see what ordering they have
<hazmat> jimbaker, i'll have to take a look
<hazmat> i'll search for the superpokes
<hazmat> but not this week
<hazmat> it will be nice to have the existing watches and tests on trunk fixed
<jimbaker> hazmat, yeah, i think it's ok to start with superpoke, then make it better
<hazmat> there's alot of watch apis though
<jimbaker> the other thing i needed to do was to stop InternalTopologyErrors from leaking through
<hazmat> jimbaker, indeed, you have a good understanding now though of what the sleeps are for?
<jimbaker> right now, i'm doing that in the provisioning agent itself, but it looks like they are confined in two methods, so we can push back into the topology itself
<hazmat> hmmm.. internaltopologyerrors? shouldn't really ever be seen
<jimbaker> hazmat, i think i have a good working idea
<hazmat> jimbaker, cool
<jimbaker> seen here in http://pastebin.ubuntu.com/604149/, note the "badness for wordpress" log line (obviously i don't expect it to be there for any length of time)
<jimbaker> hazmat, that probably occurs about 20% of the time
<jimbaker> much rarer (i have now run this looped 140+ times) is ServiceUnitState.get_assigned_machine_id surfacing InternalTopologyError
<_mup_> ensemble/watching-godot-redux r244 committed by kapil.thangavelu@canonical.com
<_mup_> current state processing tests for watch debug hook, watch resolved, and watch relation resolved.
<_mup_> ensemble/expose-provisioning r261 committed by jim.baker@canonical.com
<_mup_> Use 'superpoke' in port provisioning tests, which now loop 200+ times
<_mup_> ensemble/watching-godot-redux r245 committed by kapil.thangavelu@canonical.com
<_mup_> unit agent test resolved while stoppped, switch out the sleep for a poke, change the wait condition to a state change from a hook execution
<niemeyer> <jimbaker> hazmat, so for the provisioning agent test, i added superpoke, which by default, pokes zk 10 times, sleeps 0.1s, pokes zk 10 more times
<niemeyer> Double ugh :)
<niemeyer> ... and it only works if you jump three times on the left foot, and two times on the right one, in that order!
<_mup_> ensemble/watching-godot-redux r246 committed by kapil.thangavelu@canonical.com
<_mup_> cleanup the unit agent resolved watch tests.
<jimbaker> niemeyer, i share your dislike... the name superpoke only helps suggest that it is not desirable
<niemeyer> jimbaker: It's not just a dislike.. we can't have something like this.
<jimbaker> niemeyer, exactly
<jimbaker> so we need to fix it
<jimbaker> one step at a time... possibly with a jump three times, i guess
<niemeyer> jimbaker: Yeah.. let's start by removing it :-)
<jimbaker> niemeyer, don't worry, i won't suggest a proposal until it's gone
<hazmat> jimbaker, when you're sleeping that long, it generally means you need some sort of observation api so you can sync on a known point
<niemeyer> jimbaker: It makes as much sense to be repeatedly poking and sleeping as it makes sense to press an elevator button repeatedly
<hazmat> jimbaker, like wait_on_state or wait_on_hook
<jimbaker> hazmat, niemeyer agreed, agreed :)
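A minimal sketch of the "sync on a known point" idea, assuming Twisted deferreds; the set_observer hook and the state names are hypothetical stand-ins, not the real workflow API.

    from twisted.internet.defer import Deferred


    def wait_on_state(workflow, target_state):
        """Return a deferred firing the first time `workflow` enters `target_state`."""
        done = Deferred()

        def observer(old_state, new_state):
            if new_state == target_state and not done.called:
                done.callback(new_state)

        workflow.set_observer(observer)   # hypothetical observation hook
        return done


    # In a test, instead of sleeping:
    #     yield wait_on_state(unit_workflow, "started")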
<jimbaker> niemeyer, the more interesting thing is that code only looks at ZK for its state
<jimbaker> in doing so, i have apparently found at least two bugs in our state API
<niemeyer> jimbaker: Please submit fixes for these bugs in an isolated branch
<jimbaker> niemeyer, that's my intent :)
<niemeyer> jimbaker: Thanks
<_mup_> Bug #778628 was filed: Watches should return only after processing current state <Ensemble:New for hazmat> < https://launchpad.net/bugs/778628 >
<jimbaker> i will also need an added method. right now, it's part of ServiceStateManager in my branch, get_service_units_for_machine(self, machine_state)
<jimbaker> i put it there so as to avoid circular import dependencies, and there seems to be some precedent for doing that in ensemble.state.service
<hazmat> niemeyer, watch fix branch in review now fwiw
<niemeyer> jimbaker: It should be machine_state.get_service_unit_states()
<jimbaker> if that makes sense, i will also suggest a separate branch for that too
<jimbaker> niemeyer, that would be ideal
<niemeyer> jimbaker: So let's do it :-0
<niemeyer> :-)
<jimbaker> niemeyer, so what do we do to avoid circular imports?
<niemeyer> jimbaker: Hmm.. the usual thing we do with Python
<jimbaker> then i can realize that ideal
<hazmat> that's odd lp says the diff is 1300 lines..
<hazmat> lp is confused
<jimbaker> niemeyer, please elaborate your thought here, because python gives us some tools, but not a clear path
<niemeyer> jimbaker: How do you usually handle circular imports with Python?
<jimbaker> niemeyer, i probably would defer the import
<niemeyer> jimbaker: Bingo
<niemeyer> jimbaker: Simply import it within the function
<niemeyer> s/function/method
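A minimal sketch of the deferred-import approach just described; the constructor arguments and attribute names are assumptions for illustration, not the actual state classes.

    class MachineState(object):

        def __init__(self, client, machine_id):
            self._client = client
            self._machine_id = machine_id

        def get_service_unit_states(self):
            # Imported inside the method so the machine and service state
            # modules can refer to each other without a module-level cycle.
            from ensemble.state.service import ServiceStateManager
            manager = ServiceStateManager(self._client)
            return manager.get_service_units_for_machine(self)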
<hazmat> there are a few places that do deferred imports in the appropriate function in the code base.. jim, i'm curious which example you thought was precedent for moving the function?
<jimbaker> hazmat, you're asking me on the spot
<jimbaker> there does seem to be somewhat more code in ensemble.state.service that knows about our types of state
<hazmat> jimbaker, no worries, i've got some extra time now, if you could point me to one of the branches using superpoke.
<jimbaker> hazmat, sure you can see the current ugliness (don't let niemeyer look at it, he will gag - lp:~jimbaker/ensemble/expose-provisioning)
<hazmat> jimbaker, ;-) 
<jimbaker> i on the other hand celebrate a modicum of progress
<hazmat> jimbaker, this needs an observer api on the agent, probably best to pull it from the agent into a separate object (proto protocol, ala formula upgrade)
<hazmat> jimbaker, we need to be able to know as an observer when the ports have changed
<hazmat> there are half a dozen sleep calls intermixed in these tests, all waiting for background watch activity
<hazmat> there's nothing wrong with them except the need for better sync points
<hazmat> jimbaker, more concretely, adding a set_port_observer with a callback that's invoked when the ports are modified would allow for solving this
<hazmat> if you ever find yourself using a lot of sleeps, this is the solution
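A minimal sketch (hypothetical names, not the branch's API) of the observer hook proposed above: the agent notifies a registered callback after it has applied port changes, and a test waits on a deferred fired from that callback instead of stacking pokes and sleeps.

    from twisted.internet.defer import maybeDeferred, succeed


    class PortObserverMixin(object):
        """Agent-side sketch; method and attribute names are illustrative."""

        _port_observer = None

        def set_port_observer(self, callback):
            """Register callback(machine_id, opened, closed), fired after port changes."""
            self._port_observer = callback

        def _ports_changed(self, machine_id, opened, closed):
            # called by the agent once the provider-side changes are applied
            if self._port_observer is None:
                return succeed(None)
            return maybeDeferred(self._port_observer, machine_id, opened, closed)


    # In a test:
    #     ports_changed = Deferred()
    #     agent.set_port_observer(lambda *args: ports_changed.callback(args))
    #     ...trigger the watch...
    #     yield ports_changed   # resumes exactly when the ports were modified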
<niemeyer> Dude!  This thing is starting to work for real
<niemeyer> Sorry.. this thing == launchpad integration, for those watching ;-)
<hazmat> niemeyer, can you sniff all the emails of launchpad members yet?
<niemeyer> hazmat: Yep, almost there!
<niemeyer> hazmat: Hmmm.. it should actually work already
<niemeyer> hazmat: Will just implement some handy methods and paste an example
<hazmat> jimbaker, out of curiosity do you use emacs?
<jimbaker>  hazmat, yes i do emacs
<jimbaker> but more and more naively every year
<hazmat> jimbaker, me too re naively ;-),  but flymake mode with pep8 integration is da bomb
<jimbaker> i used to know my way around emacs lisp, then just dot files, for customization
<jimbaker> now i just do the out-of-the-box experience
<hazmat> jimbaker, so do my comments above make sense as a way out of superpoke ?
<jimbaker> hazmat, sorry, i just saw that line...
<jimbaker> hazmat, that makes perfect sense as an integration point
<jimbaker> so basically one place to sync on
<niemeyer> hazmat: Yeah, I'm happy you've shown me how much flymake helps
<niemeyer> hazmat: Probably one of the best productivity gains I got with Python in the last year
<jimbaker> hazmat, the good thing about budapest is i will get the live demo - i know you pointed me to a dot file before for this setup, but i managed to lose the link along the way
<hazmat> http://kapilt.com/files/emacs.tgz
<hazmat> ;-)
<hazmat> sounds good
<jimbaker> that was before i started comprehensively logging irc, using bouncers, etc
<jimbaker> hazmat, thanks, i will give it a whirl
<hazmat> setting up the irc proxy was probably one of my best gains of the last year
<jimbaker> hazmat, so i think we need to set up an observer api like you suggest on each of the state variables that is being monitored
<jimbaker> eg, watched_services
<jimbaker> because some of the tests don't actually look at the ports that end up getting set up, they just ensure the watch cascade happens properly
<jimbaker> hazmat, re irc proxies, yeah i saw my fellow canonical employees doing some cool stuff
<jimbaker> actually one goal for me at budapest is to learn effective byobu
<_mup_> ensemble/config-set r226 committed by bcsaller@gmail.com
<_mup_> fix ensemble set validation with tests
<_mup_> additional testing
<_mup_> ensemble/config-set r227 committed by bcsaller@gmail.com
<_mup_> test for missing/bad service in ensemble set
<_mup_> ensemble/config-set r228 committed by bcsaller@gmail.com
<_mup_> checkin missing config.yaml
<_mup_> Bug #778685 was filed: ensemble must provide a `set` command for config options <Ensemble:New for bcsaller> < https://launchpad.net/bugs/778685 >
<niemeyer> We have to sit down at UDS and talk through some of the repository stuff
<jimbaker> cool, my laptop fits in its new protective neoprene sleeve in my bike's pannier bag
<niemeyer> jimbaker: You're brave :-)
<hazmat> niemeyer, sounds good, we also need to discuss monitoring
<_mup_> ensemble/auto-dependency-resolution r219 committed by kapil.thangavelu@canonical.com
<_mup_> new test-api base class, and resolver tests based on.
<niemeyer> Hmmm
<niemeyer> Food needed
<_mup_> ensemble/auto-dependency-resolution r220 committed by kapil.thangavelu@canonical.com
<_mup_> resolver works to resolve dependencies from formula repos and environment.
<hazmat> it lives
<hazmat> niemeyer, to answer the question from yesterday.. yes it does work ;-)
<hazmat> the weather in budapest looks great
#ubuntu-ensemble 2011-05-07
<niemeyer> hazmat: Cool :-)
<niemeyer> hazmat: Certainly good to know :)
<hazmat> i'm out to pick up a new pair of glasses, see you guys in budapest, safe travels
<hazmat> i'll be back in a bit
<niemeyer> hazmat: Thanks, you too!
<niemeyer> I'll grab dinner for real now
<_mup_> ensemble/auto-dependency-resolution r221 committed by kapil.thangavelu@canonical.com
<_mup_> more dep resolution tests
