#ubuntu-ensemble 2011-07-25
<SpamapS> hazmat: I'm not certain that the error I was getting was wrong.. need to fiddle with it more.
<kim0> Morning everyone o/
<hazmat> kim0, g'morning
<kim0> hazmat: morning :)
<niemeyer> Good morning!
<hazmat> niemeyer, g'morning
<kim0> niemeyer: Morning
<niemeyer> How're things going there?
<kim0> there where :) I'm back home
<fwereade> hey all
<fwereade> any particular reason we're not using FileStorage in LoadState/SaveState for EC2?
<niemeyer> fwereade: Hmm, not sure
<hazmat> still trying to come to grips with getting my new laptop set up
<fwereade> it just seems a bit odd
<hazmat> fwereade, we're not?
<hazmat> fwereade, oh..  i guess it predated filestorage
<fwereade> LoadState and SaveState would be completely generic if they just used provider.get_file_storage for everything
<fwereade> any objections to making it so?
<hazmat> fwereade, sounds good
<fwereade> hazmat: cool, cheers
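For illustration, here is a minimal sketch of what a fully generic SaveState/LoadState could look like if written only against the provider's file storage, as fwereade proposes. All names here (MemoryStorage, Provider, the "provider-state" key) are hypothetical stand-ins, not the real Ensemble API:

```python
import io
import json

class MemoryStorage:
    """Hypothetical in-memory stand-in for a provider's file storage."""
    def __init__(self):
        self._files = {}

    def put(self, path, data):
        self._files[path] = data

    def get(self, path):
        return io.BytesIO(self._files[path])

class Provider:
    """Sketch: save/load state written only against get_file_storage,
    so nothing in these two methods is EC2-specific."""
    def __init__(self):
        self._storage = MemoryStorage()

    def get_file_storage(self):
        return self._storage

    def save_state(self, state):
        data = json.dumps(state).encode("utf-8")
        self.get_file_storage().put("provider-state", data)

    def load_state(self):
        raw = self.get_file_storage().get("provider-state").read()
        return json.loads(raw)
```

Any provider that supplies a file storage object gets state persistence for free; only the storage backend (S3 for EC2, a local directory for local dev) varies.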
<fwereade> on a related note
<fwereade> actually forget I said anything, thoughts need marshalling a mo
<fwereade> ok
<fwereade> FileStorage interface asymmetry
<fwereade> get returns a file handle
<fwereade> put takes a local file path
<fwereade> I favour making these consistent
<fwereade> is there some important feature of this interface that I'm missing?
<fwereade> (seems a good time to tidy that up, since a new FileStorage class is on its way)
<hazmat> fwereade, the get returns a file handle because in some cases it's backed by a temp file
<hazmat> fwereade, if we returned the temp path, the cleanup becomes ambiguous
<hazmat> for some remote file storages (ec2 is the only extant one atm) we spool the file locally to a temp file
<fwereade> hazmat: I tend to favour using file handles over paths anyway
<fwereade> hazmat: so, let me try again: is there any reason not to pass a file handle to put, rather than a local file path?
<fwereade> hazmat: if it's just an accident of convenience, I'd rather make it consistent now than entrench the accident by writing a second conforming implementation ;)
<hazmat> fwereade, re put taking a file handle,  that sounds good
<niemeyer> fwereade: There are reasons for the current interface, yes.. give me a moment and I'll be with you
<hazmat> fwereade, so remote_path, local_file ?
<hazmat> as args
<fwereade> hazmat: yep
<fwereade> hazmat: to go with local_file (*) remote_path
<fwereade> hazmat: as it were
<fwereade> niemeyer: cool
<hazmat> also as per mail on list, the kanban view is still borked, reviewers should look at https://code.launchpad.net/ensemble/+activereviews 
<niemeyer> fwereade: Ok, so..
<niemeyer> fwereade: It's not a big deal in the case of put, but note that the interface is symmetric
<niemeyer> fwereade: get() results in a path, put() provides a path
<niemeyer> fwereade: Or, takes a path
<niemeyer> fwereade: There's no greater reason for the latter, besides symmetry
<niemeyer> fwereade: For the former, it's actually more convenient to have it this way, so that we don't have to worry about the file size nor deal with full buffers and whatnot
<fwereade> niemeyer: well, that's what's documented, but the actual code in ec2's FileStorage appears to return a file handle
<fwereade> niemeyer: http://paste.ubuntu.com/651774/
<niemeyer> fwereade: Indeed, and that's pretty weird
<niemeyer> fwereade: and it's not just a file handle..
<fwereade> niemeyer: if the docs are the SPOT, I'll fix that as I go
<niemeyer> fwereade: It's a hack that works around the issue I just pointed out
<fwereade> niemeyer: I have a preference for things I can read/write in interfaces, as opposed to paths, but it's not a big deal
<niemeyer> fwereade: fh is a file object
<niemeyer> fwereade: Well, if you want to go through the trouble of converting it, it's not such a big deal to me, as long as you maintain the properties just mentioned
<fwereade> niemeyer: quite so, sorry poor terminology
<fwereade> niemeyer: I guess the other possibility is just to get/put the actual content
<niemeyer> fwereade: Yes, that's one of the explicit things we want to _avoid_ there
<fwereade> niemeyer: but I guess at times that will be unhelpfully large, so scratch that
<fwereade> niemeyer: cool
<fwereade> niemeyer: I'll go with local paths throughout then
<niemeyer> fwereade: If you want to pass a file object in and out, that's fine to me
<fwereade> niemeyer: it's a smaller change and it matches the docs ;)
<niemeyer> fwereade: There's a disadvantage to it, though
<niemeyer> fwereade: Which is likely why whoever wrote this logic ended up choosing this path
<niemeyer> fwereade: closing the NamedTemporary will remove the file
<fwereade> niemeyer: ...heh
<fwereade> niemeyer: in that case, file objects both ways would seem to me to be the ideal
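The interface the channel converges on ("file objects both ways") can be sketched in a few lines. This is a hedged illustration only; the class name and in-memory backend are hypothetical, not Ensemble's actual FileStorage:

```python
import io
import tempfile

class SymmetricFileStorage:
    """Sketch of the agreed interface: file objects in and out."""
    def __init__(self):
        self._blobs = {}

    def put(self, remote_path, file_object):
        # Accept any readable file-like object, not a local path.
        self._blobs[remote_path] = file_object.read()

    def get(self, remote_path):
        # Spool to a temp file so callers never need the whole blob in RAM.
        # Returning the *open handle* rather than its path matters: with
        # the default delete=True, the backing file is removed as soon as
        # the handle is closed -- the NamedTemporaryFile gotcha niemeyer
        # points out above.
        spool = tempfile.NamedTemporaryFile()
        spool.write(self._blobs[remote_path])
        spool.seek(0)
        return spool
```

A caller round-trips data without ever touching a local path: `put` a BytesIO, `get` back a readable handle.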
<niemeyer> fwereade: Ok.. sounds good.. please push this change independently from the rest
<fwereade> niemeyer: yep, np
<fwereade> niemeyer: I thought of that myself this time :)
<niemeyer> fwereade: Aha, conditioning! ;-D
<fwereade> niemeyer: don't stop reminding me though, I don't think it's ingrained yet ;)
<niemeyer> hazmat: Do you know what's the reason why the kanban is breaking?
<hazmat> niemeyer, i was just playing around with that
<hazmat> niemeyer, i originally suspected it had something to do with the lp oops on some of my branches in review
<hazmat> niemeyer, but i'm not so sure anymore, i get unauthorized errors trying to use kanban atm against ensemble (fresh kanban checkout and oauth)
<niemeyer> hazmat: Ok, if you're already investigating it, just let me know what conclusion you get to please
<hazmat> niemeyer, is there something more to setting up kanban than doing the lp login.. i get errors anytime i try to use it (http unauthorized, Unknown consumer in body)
<niemeyer> hazmat: Not in general, IIRC
<_mup_> ensemble/robust-hook-exit r284 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<daker> kim0, can you test this https://code.launchpad.net/~daker/ensemble/small-fix ?
<daker> https://code.launchpad.net/~daker/ensemble/small-fix/+merge/68038
<kim0> daker: I'm preparing my session in an hour .. I can test it afterwards
<daker> ty
<hazmat> niemeyer, it looks like kanban is using a deprecated api, i tried updating the api usage, but ended up just going with anonymous login which fixed the problem with the generation (reports work fine). it's a one-line patch to kanban
<hazmat> niemeyer, http://pastebin.ubuntu.com/651803/
<hazmat> should probably get cleaned up if its going upstream
<niemeyer> hazmat: Just generated a kanban locally.. apparently it worked
<hazmat> niemeyer, without the patch?
<niemeyer> hazmat: Yeah
<niemeyer> hazmat: Just ran the code I had lying on my disk, without doing anything else
<hallyn> SpamapS: huh, today 'ensemble add-relation jenkins jenkins-slave' works.  (on my other laptop)
<kim0> Howdy folks, Ubuntu cloud days starting in #ubuntu-classroom on the hour .. see you there
<niemeyer> hazmat: Kanban looks good apparently
 * niemeyer => lunch
<SpamapS> hallyn: weird
<hazmat> jimbaker, do you know what value for ZOOKEEPER_PATH is used for a deb installation of zookeeper?
<niemeyer> kim0: Thanks a lot for the class there
<niemeyer> hazmat: None, supposedly
<kim0> niemeyer: oh cool .. glad it went well
<hazmat> niemeyer, cool, that works, thanks
<niemeyer> hazmat: np
<kim0> niemeyer: smoser + roaksoax are doing the one after the current, on orchestra+ensemble integration 
<kim0> should be cool :)
<niemeyer> Wow, neat
<niemeyer> fwereade: That may be nice to watch out too ^
 * SpamapS will tune in
<RoAkSoAx> kim0: should be but afaik, the installer is broken or the kernel or something so we won't be able to demo it
<RoAkSoAx> :(
<kim0> :/
<kim0> Should be ok .. really wanted to see it 
<kim0> but no worries
<kim0> RoAkSoAx: an in-depth explanation should be great though :)
<kim0> I want to understand all about it :)
<kim0> thanks a lot for the sessions
<RoAkSoAx> hehe :)
<RoAkSoAx> soon will fully work
<noodles775> Hi people. I'm installing libapache2-mod-wsgi as part of an install script for a formula, but it keeps failing, and when I debug the hook, I see http://paste.ubuntu.com/651735/
<noodles775> afaict, it's related to this package requiring both python 2.6 and 2.7, even though I don't need 2.6 (https://groups.google.com/forum/#!topic/modwsgi/M1AZ5HHb3rY)
<noodles775> I realise it's not strictly an ensemble question, but was wondering if anyone else has hit this when deploying?
<niemeyer> noodles775: Hmm
<niemeyer> noodles775: That's super strange indeed
<niemeyer> noodles775: It's worse than a package requiring both of these versions.. apparently Python 2.6's UserDict is managing to import Python 2.7
<niemeyer> noodles775: I've never seen this happening.. must be a bad environment somehow
<noodles775> Hrm... it's reproducible (you can see the WIP recipe here: https://code.launchpad.net/~michael.nelson/open-goal-tracker/ensemble_deploy/+merge/69078 ). 'tis strange, though I was hoping if I could somehow disable python2.6 I could avoid it :/
<noodles775> s/recipe/formula.
<noodles775> (without having to repackage libapache2-mod-wsgi into a ppa or similar - afaics, it's the only package requiring 2.6)
<niemeyer> noodles775: Checking that out
 * noodles775 tries cutting down the recipe to just install libapache2-mod-wsgi to verify the cause...
<noodles775> thanks niemeyer 
<SpamapS> Seems like that would be a bug in python2.6 or python 2.7 if they were interfering with one another
<fwereade> gtg for the day, *might* be able to pop on later
<noodles775> niemeyer: fwiw, I can reproduce it with a minimal install script (just installs apache2 libapache2-mod-wsgi): http://paste.ubuntu.com/651927/
<niemeyer> noodles775: Can you find abc.py under lib/python2.6?
<noodles775> gar, I've just apt-get upgraded (in case newer packages help), but will check when it finishes.
<niemeyer> noodles775: Cool, I also suggest attempting to install python2.6 in isolation beforehand
<niemeyer> noodles775: There may be some relations borked up
<niemeyer> noodles775: Note this:
<niemeyer>  libpython2.6 depends on python2.6 (= 2.6.6-6ubuntu7); however:
<niemeyer>   Package python2.6 is not configured yet.
<niemeyer> and dpkg: error processing python2.6 (--configure):
<niemeyer>  dependency problems - leaving unconfigured
<noodles775> After upgrading, my install hook runs without issues.
<niemeyer> noodles775: I'm pretty sure these packages are in a bad state
<noodles775> Yeah, it sounds like it.
<niemeyer> noodles775: Or were, anyway
 * noodles775 updates formula to apt-get upgrade before install.
<_mup_> ensemble/security-acl r292 committed by kapil.thangavelu@canonical.com
<_mup_> node acl abstractions, reworked to utilize a retry_change like pattern for concurrent updates
<noodles775> niemeyer: oh well :/. It actually fails with the same error during apt-get upgrade :/. http://paste.ubuntu.com/651938/ I'll have to leave it there for now, thanks for your help.
<niemeyer> noodles775: This is really messed up :(
<niemeyer> noodles775: Do you have anything about PYTHONPATH in the environment?
<niemeyer> noodles775: It shouldn't be the case, but I guess it's the only other reason besides a problem in the package itself
<noodles775> niemeyer: nope, that install hook is doing apt-get update followed by apt-get upgrade before anything else.
<niemeyer> noodles775: Yeah, I'm just curious if Ensemble itself might be setting these vars
<noodles775> ah, checking.
<hazmat> hmm.. it might be
<noodles785> niemeyer: http://paste.ubuntu.com/651941/
<niemeyer> noodles785: Try unsetting PYTHONPATH
<niemeyer> noodles785: At the top of the script
<niemeyer> noodles785: I bet that's the error
<noodles775> niemeyer: it certainly enables apt-get upgrade to complete without errors. Updating the formula to retry it from scratch.
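The fix the thread lands on (unsetting PYTHONPATH before the package tooling runs) can be sketched as a small environment-sanitizing helper. The function name and usage line are hypothetical illustrations, not Ensemble's actual hook runner:

```python
import os

def hook_env():
    """Return a copy of the environment safe for package hooks.

    Sketch of the fix discussed above: a PYTHONPATH inherited from the
    agent leaks into package maintainer scripts and can make python2.6's
    configure step import 2.7 modules; dropping the variable isolates
    the hook from the agent's Python setup.
    """
    env = dict(os.environ)
    env.pop("PYTHONPATH", None)
    return env

# Hypothetical usage inside an install hook runner:
# subprocess.check_call(["apt-get", "-y", "upgrade"], env=hook_env())
```

Passing a sanitized copy via `env=` leaves the agent's own environment untouched, which is safer than mutating `os.environ` in place.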
<_mup_> ensemble/security-acl r293 committed by kapil.thangavelu@canonical.com
<_mup_> if principal not found in token db, raise a principalnotfound error instead of a keyerror.
<m_3> hazmat: you mentioned you have a mongodb formula?
<niemeyer> OMG, have three different events to claim expenses for
<_mup_> ensemble/security-acl r294 committed by kapil.thangavelu@canonical.com
<_mup_> remove ACL.update_grant, doesn't have a use case atm.
<hazmat> m_3, http://kapilt.com/files/mongodb-replicaset-formula.tgz
<m_3> hazmat: awesome, thanks!
<hazmat> m_3, i haven't worked on it since the mongodb conference, i was structuring the sharding and router as separate formulas
<hazmat> niemeyer, speaking of which how was the mongodb conference in São Paulo?
<niemeyer> hazmat: It was fantastic
<hazmat> niemeyer, good attendance? any talk highlights?
<niemeyer> hazmat: Great meeting some of the 10gen engineers, good talks overall
<hazmat> niemeyer, nice, yeah.. i think talking to the 10gen guys definitely made the dc one worthwhile for me.
<niemeyer> hazmat: One talk on the internals was interesting for me in terms of having new information.. other talks were interesting in the sense of getting to know some of the people using it
<niemeyer> hazmat: a pretty large local TV station picked it for high-performance tasks, for instance
<niemeyer> hazmat: also got a nice quote from one of the engineers
<niemeyer> hazmat: http://labix.org/mgo
<_mup_> Bug #816108 was filed: Ensemble needs a high level ACL api/abstraction <security> <Ensemble:New> < https://launchpad.net/bugs/816108 >
<SpamapS> bcsaller: have you merged in the deploys tuff for configs yet? I want to make sure it gets into tomorrow's build so the weekly Oneiric upload has configs... definitely by Thursday since I'm including mention of the feature in my talk.
 * SpamapS REALLY needs to finish these slides with more pictures. :-P
<noodles775> Is there a reason why some instances are incredibly slow? When deploying this last time, it took ~10mins for the state to update from null->started, and it's been a while since then... still no install hook :/
<SpamapS> noodles775: are you using m1.small or t1.micro?
<SpamapS> noodles775: I've found that t1.micro are unbelievably unreliable w.r.t. speed. m1.small's also disappoint quite a bit, especially w/ anything CPU hungry at all.
<noodles775> SpamapS: just the default m1.small, but I've not changed it... I'm just comparing to previous runs, so yep, perhaps it's a reliability issue?
<SpamapS> noodles775: I think it's actually luck of the draw.. my theory is that sometimes you get machines without adequate memory bandwidth.. but it's just a theory. :-P
<hazmat> SpamapS, it's merged i believe, re deploy with config
<statik> oi
<statik> are there directions written down somewhere for getting an ensemble developer environment set up? I was thinking about having a stab at a bite size bug
<jimbaker> statik, check out https://ensemble.ubuntu.com/docs/drafts/developer-install.html
<statik> jimbaker, thanks!
<statik> jimbaker, are the versions of txaws and txzookeeper in natty new enough, or is trunk definitely needed for those?
<_mup_> ensemble/robust-hook-exit r285 committed by jim.baker@canonical.com
<_mup_> Initial mock test for hanging process
<_mup_> ensemble/robust-hook-exit r286 committed by jim.baker@canonical.com
<_mup_> Initial mock test for hanging process
<jimbaker> statik, i'm not certain about their update schedule... for those dependencies, i'm also running trunk
<SpamapS> statik: ppa:ensemble/ppa ftw
<SpamapS> you don't need trunk
<statik> SpamapS, perfect, I see that has the versions from oneiric backported
<SpamapS> and natty txaws is fine as long as you don't want to use Eucalyptus/Openstack
<SpamapS> txzookeeper has problems in natty
<SpamapS> statik: thats a daily build PPA .. but trunk has been quite well cared for thus far. :)
<statik> cool
<jimbaker> SpamapS, seems like a reasonable plan - to use the PPA for the dependencies
<SpamapS> statik: oneiric has a weekly upload of ensemble .. in theory ;)
<SpamapS> jimbaker: yeah, thats a pretty common scenario
<SpamapS> jimbaker: drizzle developers even had a package.. drizzle-dev .. that just pulled in all the build deps
<SpamapS> anyway, dentist time.. :-P
<jimbaker> SpamapS, my favorite part of going to the dentist is that i always schedule it at the same time slot w/ my kids
<jimbaker> my son enjoys looking in my mouth for some reason ;)
<hazmat> hmm.. it looks like when the unit tests use a deb installed zk, they don't reset state directories properly
<hazmat> nevermind, looks like i had a background zk from the pkg install
<hazmat> killing that and it works properly
<niemeyer> noodles775: Did it work?
<niemeyer> hazmat: Ugh
<hazmat> niemeyer, yeah.. that's going to bite folks in the future
<niemeyer> hazmat: We have to dump something to avoid having it on
<niemeyer> I'm breaking off for a moment.. will be back later today to finish the expense reporting drama and unblocking for more useful stuff tomorrow
<_mup_> ensemble/security-otp-principal r291 committed by kapil.thangavelu@canonical.com
<_mup_> use a class method to specify test ace, allows for better test usage when OTPPrincipal used by other components.
<_mup_> ensemble/robust-hook-exit r287 committed by jim.baker@canonical.com
<_mup_> Robust test of reaping a hanging process
<_mup_> ensemble/security-acl r296 committed by kapil.thangavelu@canonical.com
<_mup_> remove some debugging statements, update tests to specialized exceptions
<hazmat> niemeyer, when you have time (perhaps tomorrow) i'd like to discuss some of the lxc work, i don't really see how we're getting around the network issues of multi-unit machines on ec2
<hazmat> afaics doing the machine provider for lxc as local dev is the appropriate local dev solution, with specialization of serviceunit deployment by provider type
<hazmat> setting up a vpn for the container doesn't really address the usability issue
<niemeyer> hazmat: Ok.. what's up
<hazmat> we can separately address using openvswitch or other tunnels, but that's not core, nor is it incompatible with delivering the local dev story via an lxc machine provider
<niemeyer> hazmat: They're two different code paths, with two different deployment models, incompatible with each other
<niemeyer> hazmat: What's the actual issue you're seeing with deploying locally as we discussed?
<hazmat> niemeyer, i'm trying to understand two things.. one) what are we trying to deliver for the next release.. per my understanding that's a local dev story
<niemeyer> hazmat: Yes, we're trying to deliver local development
<hazmat> two) how are we overcoming the networking issues with ec2 and multi-unit container based machines.. and is it even worth doing in the ec2 provider
<niemeyer> hazmat: We're not doing EC2
<bcsaller> SpamapS: yes, its merged, was at lunch, sorry
<niemeyer> hazmat: we're addressing (one) for now
<SpamapS> I was thinking about this and I think we can do it with the same code path. The provider should actually determine what the best way to setup a "container" is inside of one of its machines. In this manner, we can have the local provider say "use the LXC container for my units" and the ec2 one can say "use the noop container for my units" .. then when we figure out how best to do multiple units per machines on ec2, we can make LXC the default.
<niemeyer> SpamapS: Exactly!
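SpamapS's proposal (each provider picks its containment method through a shared code path) can be sketched in a few lines. Everything here is a hypothetical illustration of the idea under discussion, not code from Ensemble:

```python
class NoopContainer:
    """Run the unit directly on the machine (what EC2 does today)."""
    def deploy(self, unit_name):
        return "exec:" + unit_name

class LXCContainer:
    """Isolate the unit in an LXC container (the local dev case)."""
    def deploy(self, unit_name):
        return "lxc:" + unit_name

# Each provider type names its default containment.  Once the EC2
# networking questions are resolved, switching its default to
# LXCContainer is a one-line change -- the deployment path is shared.
CONTAINER_BY_PROVIDER = {
    "local": LXCContainer,
    "ec2": NoopContainer,
}

def deploy_unit(provider_type, unit_name):
    container = CONTAINER_BY_PROVIDER[provider_type]()
    return container.deploy(unit_name)
```

The point of the sketch is the single `deploy_unit` entry point: the agent code never branches on provider type, only the containment table does.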
<hazmat> niemeyer, the base layer for both is an lxc abstraction which is common, the differentiator is whether we want a machine provider or is this an encapsulation with the machine agent as it deploys units
<hazmat> the latter allows for a common code path, but leaves the ec2 problem unresolved
<SpamapS> indeed it does
<niemeyer> hazmat: Indeed..
<SpamapS> but allows flexibility
<SpamapS> without compromising simplicity
<niemeyer> hazmat: Imagine we had IPv6 working.. it'd be trivial, right?
<hazmat> niemeyer, that's not entirely clear to me, we'll need 6to4 for common accessibility by remote consumers
<niemeyer> hazmat: No.. imagine we had *IPv6* working..
<hazmat> niemeyer, we'd still have to bridge to something that's providing the address 
<niemeyer> hazmat: Why?
<hazmat> we need routable addresses
<hazmat> or maybe i'm unclear when you say *ipv6* working, is this the future perfect world where external clients are consuming ipv6 services?
<niemeyer> hazmat: Yes, and IPv6 offers that..
<SpamapS> If amazon allowed multiple IPv6 addresses per machine, you'd just forward the packets that weren't for you on to the container virtual network. Right?
<niemeyer> hazmat: In an IPv6 world, each machine generally gets a block rather than an individual address
<hazmat> niemeyer, it still needs an ipv6 nat afaics, if the provider is only giving out one address... is there some documentation for that?
<hazmat> my understanding is that you can trade in ipv4 addresses for an ipv6 address block but that's a policy allocation, not a device address notion
<SpamapS> I'd imagine that, as niemeyer is suggesting, when Amazon rolls out IPv6, they'll do away with the internet/external distinction they use now, and just meter traffic as it traverses their borders.
<SpamapS> Given that, they'll most likely also give the VM a block of 8 or 16 or maybe even 255 IPv6 addresses.
<hazmat> i guess i need to do some ipv6 research
<SpamapS> hazmat: I don't think its all that different. Just more available addresses so NAT is no longer a good idea or needed.
<niemeyer> hazmat: http://en.wikipedia.org/wiki/IPv6_address
<SpamapS> The detail that is more pressing is simply how to make sure Ensemble doesn't need refactoring when that magic day arrives where all endpoints on the net are IPv6 capable.
<niemeyer> hazmat: E.g.
<niemeyer> hazmat: "
<niemeyer> Each RIR can divide each of its multiple /23 blocks into 512 /32 blocks, typically one for each ISP; an ISP can divide its /32 block into 65536 /48 blocks, typically one for each customer;[17] customers can create 65536 /64 networks from their assigned /48 block, each having a number of addresses that is the square of the number of addresses of the entire IPv4 address space, which only supports 232 or about 4.
<niemeyer> 3Ã109 addresses.
<niemeyer> "
<SpamapS> Who is to say how far out that day is? Exhaustion is estimated between 2 and 7 years away from the articles I've read.. depending on whose stats and projections you believe.
<hazmat> i thought we had already hit the tipping point on exhaustion
<hazmat> http://en.wikipedia.org/wiki/IPv4_address_exhaustion
<niemeyer> SpamapS: Right, agreed..
<niemeyer> SpamapS: Yeah, it's over already.
<SpamapS> The exhaustion that has happened is merely that IANA has assigned all ASN's to regional bodies.
<niemeyer> SpamapS: Agreed on the fact we have to make sure Ensemble doesn't need significant refactoring by then
<hazmat> the refactoring we're talking about is minimal
<SpamapS> There's still hundreds of millions of addresses, and many many thousands of blocks to dole out, before they start culling unused blocks and clamping down on ISPs that waste them.
<niemeyer> hazmat: Yes, we can use the tunneling on EC2 for when we actually look at this feature, while keeping the real end goal in mind the whole time
<hazmat> its not even refactoring, its implementing the feature
<niemeyer> hazmat: It's kind of dropping code
<niemeyer> hazmat: If we make it work with tunneling, one day we can just drop the logic which sets the tunneling in place
<niemeyer> The tunneling is somewhat generic, so it shouldn't even be too much work
<SpamapS> niemeyer: Make no mistake though, if you make people use tunneling, they will *reject* it hard. If you offer it as an option.. that may be a different story.
<niemeyer> SpamapS: It will be an option, and right now it will be non-existent, so don't worry yet. :)
<SpamapS> I did yoga this morning.. my worry is at least stayed until tomorrow. :)
<hazmat> the emails that have been exchanged on this are larger than the work by an order of magnitude imo
 * SpamapS searches for funny pictures to depict "The Cathedral and The Bazaar" for his presentation..
<hazmat> i think the contortions we're going through for policy assignments with a machine provider (local) with a single machine are worse than those that come out from specializing unit deployment by provider.. which we have to do anyways!
<hazmat> so it's not clear why we're investing time discussing a future solution in a future world, that doesn't address the current problem we have to solve, and whose implementation differential is minimal
<hazmat> we're not going to use ipv6 on a local host provider.. we just want to enable the local development story.. so we're not tunneling or natting for local dev, so i'm missing the value of not just treating the containers like machines
<hazmat> this all sounds like the perfect is the enemy of the good
<niemeyer> hazmat: We're on equal ground.. I also don't understand why you want to do work that is not necessary
<niemeyer> hazmat: rather than doing work that is needed anyway
<niemeyer> hazmat: Being able to choose between deploying in LXC or without LXC, as suggested by SpamapS, is needed for all the deployment methods
<hazmat> niemeyer, because what it is needed for isn't decided
<niemeyer> hazmat: Except the local one
<niemeyer> hazmat: I miss what you mean with that
<hazmat> we haven't resolved how this mechanism can be used in ec2
<niemeyer> hazmat: What do you want to know about this?
<niemeyer> hazmat: I'd rather not have to debate about details we won't be implementing right now, but I see that for some reason this is very important
<hazmat> and it implies just shifting the maintenance burden onto another shared component, the unit placement algorithm, instead of localizing it to the machine provider
<hazmat> since now we'll have a provider with exactly one machine.
<niemeyer> hazmat: A provider that has to know all the details of using LXC isn't used in any other deployment method besides the local one
<niemeyer> hazmat: An agent that is able to deploy units within LXC is useful in all deployment methods
<niemeyer> hazmat: That's exactly the debate that created the huge thread which you said wasn't necessary
<hazmat> niemeyer, i didn't say the thread wasn't necessary, just that the implementation that's being discussed could have fit in a small portion of it
<hazmat> it's good to have the discussion, but i'm unclear that we have resolution, as i'm approaching implementing it
<niemeyer> hazmat: well.. and the proportion increases as we speak :)
<niemeyer> hazmat: It's pretty clear in my mind, at least.   Naturally, nothing is settled on stone and it's just my opinion, but I don't see any arguments that contradict these.
<SpamapS> I would say might be.. ;)
<SpamapS> err
<SpamapS> oops I scrolled back
<SpamapS> hazmat: to get down to brass tacks, what I suggest is that each provider chooses how to contain service units and how many service units per machine are allowed. It's more invasive than the provider method..
<SpamapS> hazmat: but if we decide chroots are a better choice for EC2 .. it enables that quite nicely
<SpamapS> That said, there is only one containment method known and desired at the moment.. and I'm loath to add abstraction layers for what-ifs.
<hazmat> as i said we have two approaches: a local machine provider that uses containers as machines, and a second one where the local provider has exactly one machine and the machine agent deploys service units as containers. both solutions need a local machine provider; the second also needs service unit deployment specialization, and creates additional machine-unit placement constraints that seem rather arbitrary, like a provider with a max of one machine.
<hazmat> the real question of how to realize the benefits of all this seems to revolve around future notions of introducing additional network layers and complexity.. i'd argue that simplicity should win, and we should use the right tool for the job at hand, rather than trying to achieve some perfect symmetry that is illusory imo anyways
<niemeyer> SpamapS: They are orthogonal.. it's not the provider that should decide, it's the user
<niemeyer> SpamapS: I mean, the capacity of deploying in a chroot or in an LXC should be at the user's disposal
<SpamapS> Sure, if we make it configurable thats fine, though I'd suggest we just be opinionated about what the default containment method and capacity for machine-splitting is for each provider.
<SpamapS> If somebody comes up with a good one for EC2, then we can make it the default.
<niemeyer> hazmat: Heh
<SpamapS> And if somebody comes up with a narrow-use-case-satisfying version, we can make it usable in that case when the user decides.
<SpamapS> BUT
<niemeyer> hazmat: Complaining that what I say is "perfect symmetry that is illusory imo anyways" is non-constructive FUD
<SpamapS> I'm suggesting that we are just yak shaving at this point.
<niemeyer> hazmat: I'm providing details about why that is the case..
<SpamapS> I like flacoste's suggestion that we just do both, and learn from it. :-P
<niemeyer> hazmat: Knowledge about LXC is involved.. all the details of how the machine communicates with it (start, stop, boot, kill, restart, etc.) must exist
<niemeyer> hazmat: If you stuff that knowledge into a local provider that is only concerned about deploying in a local machine as if they were EC2 instances, that's not useful anywhere else
<niemeyer> hazmat: If we teach the agent to deploy units within LXC and maintain them, that's useful across the board, *even* if there is additional logic that must be implemented for routing packets properly
<SpamapS> niemeyer: Oh but it is. :) Its already split out into an 'LXCControl' class, which makes functional testing easier.
<niemeyer> hazmat: If you disagree, please provide some details that may help me understand why that is so, rather than calling it illusory.
<SpamapS> niemeyer: create a machine with this cloud init.. start it, stop it, destroy it .. list running machines .. thats useful in all contexts that you need to use LXC
<niemeyer> SpamapS: Yes, it is.. the command line tools are also shared.. the question is how much is being shared.
<SpamapS> Heh.. its just an adapter. Very little code.
<niemeyer> SpamapS: We shouldn't go down a route because it's worse..
<SpamapS> I did sit down to do it at the machine agent/unit agent spawn level.. will be a lot more code than the provider was.
<niemeyer> SpamapS: We'll have to implement either approach, one of them allows more sharing, more symmetry on deployments, etc.
<niemeyer> SpamapS: If it doesn't work because it's too involved, or whatever, we can rethink.  But we shouldn't go down that path because it's "only slightly worse"..
<SpamapS> Have to write new cloud-init stanzas that we never needed before.
<SpamapS> No it will work.
<SpamapS> For the local case, its not that different in results.
<SpamapS> For the EC2 case, we'll need to add NAT to make them addressable, or tunneling, or something.
<niemeyer> Yes, we'll have to add something.. and reuse what will hopefully already work by then.
<niemeyer> Rather than reimplementing it.
<SpamapS> Of course, as hazmat is arguing.. you can reuse what already works now and tackle that use case when you get there, and as Francis suggested, you'll have more info, so you can embrace LXC more completely when the time comes.
<niemeyer> SpamapS: There's no doubts about what we'll have to do by then.. we'll have to enable Ensemble to deploy unit agents in LXC.
<niemeyer> SpamapS: I'm suggesting we do research now to know in detail what that means, and if we have to fall back we take an informed decision.
<niemeyer> SpamapS: I don't think I'm asking too much, honestly.
<SpamapS> Yeah, lets just implement this and be done with it.
<niemeyer> SpamapS: We have to do A or B.. A may be easier, but B is needed anyway.  If we have to do A, it must be for a good reason which needs to be understood, because we'll be wasting time doing B anyway.
<niemeyer> SpamapS: I offered to do that work myself.
<SpamapS> As long as EC2 uses the "exec" container method until we do that research, I really don't care. Just give me local dev, and give it to me soon. I spent $300 last month on EC2 because of forgotten instances and 15+ node demos.
<SpamapS> Also launchpad is chomping at the bit.. lifeless is trying to put together launchpad formulas and unwilling to deploy LP into EC2 over and over again.
<SpamapS> Another thing came up with containers btw.. what about services that are memory intensive? Can LXC actually limit memory usage and does it expose that through /proc/meminfo ?
<SpamapS> memcached comes to mind.. it really should use almost all the RAM on the box... we could arguably have it default to do just that in the formula.
<niemeyer> SpamapS: Not sure, but it's worth considering it indeed
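To SpamapS's question above: LXC can cap memory through the memory cgroup controller, but the answer is a qualified yes-and-no. A minimal sketch of the relevant container config line (path and container name illustrative):

```
# /var/lib/lxc/mycontainer/config   (container name illustrative)
lxc.cgroup.memory.limit_in_bytes = 536870912   # hard cap at 512 MB
```

The limit is enforced by the kernel, but /proc/meminfo inside the container still reports the host's totals, so a formula like memcached's that sizes itself from /proc/meminfo would overestimate what it can use.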
<hazmat> my network disconnected, not sure what got lost..
<hazmat> niemeyer, if you want to do that experimentation, i'm happy to not do the lxc stuff, and instead do some pending refactorings  (the protocol stuff) and bugfixes in the codebase
<hazmat> but as far as implementing lxc and what we need today, i don't see the justification for doing things in such a way that we're going to ripple change costs throughout the system.. be it as simple as getting useful output of ensemble status, or dealing with unit machine placement, when we have unresolved questions of how its going to be used or implemented
<niemeyer> hazmat: We'll have to debate this again.. it has changed several times, and I'm not sure that's feasible now.
<niemeyer> hazmat: Someone will have to do the repository work.. do you want to do that instead?
<niemeyer> hazmat: If you have unresolved questions, please ask them.
<niemeyer> hazmat: I've been presenting why and how for some time now..
<niemeyer> hazmat: and in fact we had a call last week where we discussed exactly this
<niemeyer> hazmat: I'm really expecting a little bit of more insightful feedback now.
<hazmat> niemeyer, indeed, i've been thinking about since
<niemeyer> hazmat: What has to be changed then?  Do you have that written down?
<niemeyer> hazmat: This is not a fight, we just have to think through..
<niemeyer> hazmat: If we can't do it, let's not.. but we can't decide things by arguing over and over without some insight into the problem.
<hazmat> the fundamental of how we get around the network issue for ec2 providers seems to be unresolved. the things proposed, tunneling, vpn, are about adding complexity vs. just utilizing the api that the provider gives us. be it ec2, where atm we only have one public address per machine, or where the openstack guys are digging into openswitch etc, ensemble isn't a provider, it's not clear we should be setting up secondary networking infrastructure on virtualized systems with poor i/o throughput
<niemeyer> hazmat: Forget the EC2 providers..
<niemeyer> hazmat: We're not implementing them now.
<niemeyer> hazmat: We want to implement local support through a mechanism that will be useful in the future in EC2.
<niemeyer> hazmat: That's not the same as implementing support in EC2 now.
<SpamapS> hazmat has the same concern that I do. That we won't ever do it.
<niemeyer> hazmat: We don't need to implement tunneling
<niemeyer> SpamapS: That's fine.. maybe we'll never do.. but that's how developing large software works.
<SpamapS> To allay that concern, I'd say if we don't do LXC .. we'd do something else like chroot with collision avoidance mechanisms.
<SpamapS> I'm not sure large is desirable.
<niemeyer> SpamapS: Heh.. any other bikesheds for us to get into?
<niemeyer> I'm honestly trying to be positive and helpful in that debate
<niemeyer> But we can't just argue across each other, or it'll be hard to be effective
<hazmat> niemeyer, the same underlying abstraction (lxccontrol) done today for an lxc machine provider, makes the lxc service unit deployment  in the future fairly trivial compared to the costs of implementing these network solutions afaics
<SpamapS> I think we only differ on the perspectives. In my perspective, something that is so far off shouldn't be worried about in coding now. From yours, it must be worried about in coding now. I get the position, but I stand in another one.
<niemeyer> hazmat: What is the cost of implementing an agent able to deploy LXC as an option, _today_? 
<niemeyer> hazmat: Please answer that question so that we can move forward.
<hazmat> niemeyer, we need an lxc control abstraction for start/stop/exec command.. but it also ripples into things like ensemble status and machine-unit mapping.
<niemeyer> hazmat: Ok, please provide an actual written down version of this with details.
<hazmat> we'll also need service unit deployment specialization by provider type to avoid using this for certain providers
<SpamapS> niemeyer: I made a meager attempt to ballpark that. You have to create cloud-init rules that will start the unit agent the same way the machine agent is started, and then factor in a way to have it work different for local vs. ec2 provider.
<niemeyer> hazmat: Yeah, we'll have a base class which is pretty dumb
<niemeyer> hazmat: Just opening the formula onto the directory
<niemeyer> hazmat: We can, at some point, have other options such as plain chroots, etc
<SpamapS> The former is fairly independent code and will be useful in any container/vm strategy used. The latter is invasive at the provider level.
<niemeyer> hazmat: and maybe that's even a way for us to start
<niemeyer> SpamapS: No, there's no point in using cloud-init.. that's why I didn't take the idea seriously
<SpamapS> There's another option which is to let lxc just start the unit agent directly, but I don't like that one as it won't actually "boot" the container.
<niemeyer> SpamapS: Agreed, ideally we'll boot it
<SpamapS> So you'll end up with a machine that is not actually in the same state as a non container machine. Networking needs configuring, services starting, etc.
<hazmat> SpamapS, the unit agent could just be a upstart rule at that point
<SpamapS> Yeah you can drop it on the disk too, thats fine. The point is that code is sort of easy and independent.
<SpamapS> The refactoring of providers needs some thought, so that each provider knows what type of container it should provide.
<SpamapS> Thats as far as I got.
<SpamapS> negronjl: hey, didn't you create a mongodb formula?
<negronjl> hi SpamapS:  I did.  let me find out where I put it.
<SpamapS> negronjl: I want to mention it in my talk on Thursday.. just that it exists.
<negronjl> SpamapS:  not in bzr yet.  still refining it.
<SpamapS> negronjl: ok, no worries I'll leave it off
<hazmat> i dropped a dump of mine over at http://kapilt.com/files/mongodb-replicaset-formula.tgz
<SpamapS> hazmat: didn't you have a riak formula too?
<hazmat> SpamapS, nope. my riak one is a skeleton
<SpamapS> ah ok
<adam_g> SpamapS: there is a rabbitmq-server formula incoming today or tomorrow
<SpamapS> adam_g: w00t.. yeah I already threw their logo up. :)
<adam_g> SpamapS: is it okay for me to just push that directly to principia or file a merge bug?
<SpamapS> adam_g: just push it in as lp:principia/rabbitmq-server .. :)
#ubuntu-ensemble 2011-07-26
<_mup_> ensemble/robust-hook-exit r288 committed by jim.baker@canonical.com
<_mup_> Comments and better MATCH for mocking
<_mup_> ensemble/robust-hook-exit r289 committed by jim.baker@canonical.com
<_mup_> PEP8 and better test name
<jimbaker> adam_g, did the robust-hook-exit branch work for you?
<jimbaker> in any event, i have pushed up a version of this branch that i'm going to now submit for review, now that it has comprehensive testing
<jimbaker> (unless of course it doesn't actually work ;) )
<jimbaker> adam_g, i did find one bug in the reaping mechanism, but i suspect it would not have actually impacted your use, just a resource exhaustion problem for long-term running in the machine agent
<jimbaker> and again fixed in this most recent push
<adam_g> jimbaker: sorry. yes, i did test it on friday and it worked just fine. 
<jimbaker> adam_g, cool. if you could please try the latest version of the branch, which i just pushed for review, that would be very helpful
<jimbaker> i believe in a nutshell we can model such bad hooks as something like this: sleep 0.05 && echo "Slept for 50ms" && sleep 1 && echo "Slept for 1s" && sleep 1000000 &
<jimbaker> the & at the end being the operative part: it forks, but does not close file descriptors in the child
<SpamapS> hmm.. toying with a Venn diagram explaining why service orchestration is not config management, but encompasses some of it..
<SpamapS> http://spamaps.org/files/venn-config-service.png
<SpamapS> The suggestion there is that the blue stuff on the right is what is being bolted on to config management..
<adam_g> SpamapS: what do you mean 'bolted on'?
<SpamapS> adam_g: meaning they're being added as afterthoughts.
<SpamapS> adam_g: I don't even think, actually, that any of the free cfg mgmt solutions have a built in coordination method.. they just fake it by running themselves over and over.
<SpamapS> adam_g: and the elasticity comes from that paradigm too
<adam_g> SpamapS: i dont know about chef, but the way puppet traditionally worked was, you install a central puppet master that hosts the configuration, and an agent sleeps on client machines, awakens at interval and to see if they need updates
<adam_g> err, checks in to see..
<SpamapS> Right, and there's no telling who will see the changes first unless you force a run, right?
<adam_g> right
<adam_g> well
<adam_g> using mcollective, now, though
<adam_g> you essentially "turn off" puppet on the clients
<SpamapS> right, that new bolt on that is, IMO, a healthy competitor to ensemble. :)
<adam_g> and dispatch puppet runs via mcollective, which is running on the clients
<SpamapS> because a lot of the time you need to upgrade clients first, then update the server to use some new feature..
<adam_g> right
<SpamapS> I've felt for a while that puppet is nothing like Ensemble, and mcollective is a little like ensemble. :P
<adam_g> what puppet realy lacks IMHO is coordination between systems, especially for encforcing ordering dependencies with regards to state changes
 * adam_g agrees 100% 
<adam_g> :)
<SpamapS> Ensemble ends up having that by the nature of doing things in a service oriented fashion.. but the upgrade-formula hook is, IMO, a bit weak for ongoing coordination.
<SpamapS> I really need to be able to enumerate and act on relations in upgrade-formula.. so I can wake up the related pairs and tell them about any new necessary info.
<SpamapS> This is where mcollective's simpler approach wins out.. because all it has to do is dispatch puppet.
<_mup_> Bug #816264 was filed: PYTHONPATH causing corrupt environment <Ensemble:New> < https://launchpad.net/bugs/816264 >
<kim0> oh we now have config-set \o/ yaay
<fwereade> hey all
<fwereade> is there a way to give a merge proposal more than one prerequisite branch?
<hazmat> fwereade, no..
<fwereade> hazmat: does that imply I'm Doing It Wrong? ;)
<hazmat> fwereade, logically no.. typically i use a straight line of deps for merges.. but that's tool limitation imo.
<fwereade> hazmat: I resisted the urge to just keep everything in one straight line
<fwereade> hazmat: the thing that's bugging me is it's only going to get worse
<hazmat> fwereade, :-)  bzr-pipeline is a nice bzr plugin that can automate some of the tedium of managing a straight-line of dependent branches
<fwereade> hazmat: the next thing I want to do wants to build on the results of 2 independent pipelines
<fwereade> hazmat: mreging trunk updates through the whole pipeline and suchlike?
<hazmat> fwereade, i'm using it for the security work right now, which is like 7 branches atm, pipeline helps me automate pushing changes through, as well keeping a single checkout workspace for the work on all the branches 
<hazmat> fwereade, yup
<fwereade> hazmat: cool, I'll check it out
<fwereade> hazmat: can I retrofit it onto an existing pipeline or does it need to remember its own state?
<fwereade> hazmat: actually, now I think of it, I can suspend work on that feature and go back to the last one I suspended because it was breaking my brain (I think my brain has recovered a bit now ;))
<hazmat> fwereade, you can retrofit an existing *branch* with just bzr reconfigure-pipeline, you can retrofit an existing pipeline but its effectively just adding a new branch and merging the existing branch to it..
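For reference, the bzr-pipeline workflow hazmat describes looks roughly like this (a sketch, not a full tutorial; the branch name is hypothetical, and the commands assume the plugin is installed):

```
bzr reconfigure-pipeline        # retrofit the current branch into a pipeline
bzr add-pipe my-next-feature    # append a new dependent branch after this one
bzr pump                        # merge each pipe forward into its successor
bzr show-pipeline               # list the pipes, marking the current one
```

The single-workspace model both developers mention comes from the pipeline sharing one working tree across all the pipes, so switching branches does not mean switching directories.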
<hazmat> i also appreciate the single workspace model for working with multiple branches
<fwereade> heh, I think I'd go even more insane without separate workspaces
 * hazmat looks around for pre-review coffee
<hazmat> fwereade, out of curiosity what editor do you favor?
<fwereade> hazmat: vim
<fwereade> hazmat: and yourself?
<hazmat> fwereade, right on.. i'm an emacs guy.. i believe we now have achieved 50/50 parity on the dev team ;-)
<fwereade> hazmat: haha, nice
<fwereade> hazmat: I had a bad early impression of emacs... a colleague used it and we seemed to spend about 50% of our pairing time fiddling with the settings
<fwereade> hazmat: not really a fair test ofc ;)
<hazmat> fwereade, yeah.. i think i try to keep my settings fiddling to once a year... i know incredibly little about mucking with emacs actually.
<fwereade> hazmat: the bad early impression of vim came from trying to type in command mode, ofc... that was exciting, but at least it wastes minimal time
<hazmat> fwereade, yeah.. my first vim experience was modifying some oracle setup on an ancient solaris, i ended up tracking down a sysadmin for a cheat sheet... i'm not sure that i ever fully recovered ;-)
<hazmat> although i've grown to like solaris since
<fwereade> hazmat: heh
<fwereade> hazmat: I think having someone who knows the tool to hold your hand is probably the critical factor
<niemeyer> GOod mornings!
<niemeyer> What a beautiful review queue
<fwereade> niemeyer: heh :)
<fwereade> niemeyer: good morning
<niemeyer> fwereade: No irony.. it's awesome to come back to a review queue like that.. it'd be better to not have it because everything was already merged, but it's much more pleasant than coming back to nothing
<fwereade> niemeyer: cool :)
<Aram> moin.
<Aram> heh, editor discussions.
<niemeyer> Hey Aram!
<Aram> anyone else using acme or sam? :-).
<Aram> hi!
<niemeyer> Aram: Was just checking out your PDP11 emulator.. crazy stuff :-)
<niemeyer> Aram: I think you'll find mostly emacs/vim users around here
<Aram> yeah, I imagine :-).
<Aram> I actually read all the PDP-11/40 manual and really liked it. very concise and sane compared to 4833 pages of intel manual.
<niemeyer> Aram: How are the Intel manuals when compared to ARM's?
<Aram> never read the ARM manuals. I assume they are at least smaller, as ARM is a very small arch and doesn't have 30+ years of legacy cruft.
<niemeyer> Aram: Yeah, they feel quite pleasant overall
<niemeyer> Aram: I know ARM itself (not the SOC makers) actually sells exactly that kind of thing, so it's somewhat expected that they'd do a good job on it
<niemeyer> Aram: I'm curious if Intel had similar material
<Aram> I would add ARM support in my emulator at some point to test the power of Go inferred interfaces.
<Aram> yeah, Intel has tons of material. very very very very detailed.
<Aram> also, guides for kernel developers and compiler writers.
<Aram> guides for optimizing etc.
<Aram> the only problem is that x86 itself is insane.
<Aram> the manuals are good. very sterile, but good.
<Aram> (though I like the docs coming from AMD more).
<niemeyer> Aram: Nice/not nice :)
<niemeyer> Aram: I note you don't have many tests on the emulator yet (hint!)
<Aram> yeah... that's true.
<Aram> I'm reading about the Go test framework now.
<niemeyer> Aram: It's quite neat, even if a bit slim to my taste
<niemeyer> Aram: I came up with a slightly more comprehensive version when I started playing with it, following more well-known xunit styles (http://labix.org/gocheck)
<niemeyer> Aram: It builds on the standard one, though, rather than replacing it entirely
<Aram> aha, nice. I'll take a look into it.
<wrtp> Aram: i use acme
<Aram> :-).
<wrtp> smiley face every time i use it indeed :-)
<hazmat> niemeyer, g'morning! re review queue, it looks like  kanban generation has gone stale again
<niemeyer> hazmat: Hmm, indeed
<wrtp> PS hi gustavo!
<niemeyer> wrtp: Rog!
<niemeyer> wrtp: How're things going there?
<wrtp> niemeyer: pretty good thanks; just hacking up a website for our wedding guests and realising how terrible i am at html/css...
<wrtp> so many rules, so little clarity :-(
<wrtp> still, it's fun doing it in Go
<niemeyer> wrtp: Hah, yeah.. coming back to HTML is always tricky after getting used to the level of cleanliness/power of  a CLI
<wrtp> niemeyer: more like the first time. CSS didn't exist the last time i wrote a web site :-)
<wrtp> and some people [ahem, looks over shoulder] don't like the "vanilla" look...
<niemeyer> wrtp: Wow, ok, that's been a _while_ :-)
<wrtp> yeah, probably coming up for 15 years :-)
<niemeyer> Kanban is unblocked
<niemeyer> jimbaker: ping
<niemeyer> fwereade: ping
 * hazmat checks out the new golang course
<jcastro> robbiew: ok I am relocated and back 100% now. Went over the slides with bacon and he gave me some stuff to fix.
<robbiew> cool
<fwereade> niemeyer: heyhey
<niemeyer> fwereade: yo
<niemeyer> fwereade: Would you mind to review this branch: https://code.launchpad.net/~jimbaker/ensemble/expose-provider-ec2/+merge/68478
<fwereade> niemeyer: a pleasure :)
<niemeyer> fwereade: This is touching the EC2 provider API, so might be nice to have your view on it since you've been so close to it
<niemeyer> fwereade: Thanks!
<_mup_> ensemble/security-agents-with-identity r297 committed by kapil.thangavelu@canonical.com
<_mup_> hierarchy initialization includes otp container.
<jcastro> any sexy new formulas that people are tackling this week? (already got cloud foundry)
<jimbaker> niemeyer, hi
<jimbaker> fwereade, thanks for looking at the expose-provider-ec2
<fwereade> jimbaker: np at all
<fwereade> jimbaker: I'm happy to see the ugliest code in ec2 is gone now :)
<jimbaker> fwereade, you mean the use of all of those callbacks + authz?
<fwereade> jimbaker: I think we're talking about the same thing, yeah
<fwereade> jimbaker: for some reason it really stuck out ;)
<jimbaker> fwereade, i couldn't really read that code, so i just had to refactor it first. it does feel cleaner
<niemeyer> jimbaker: Hello
<jimbaker> niemeyer, hi
<niemeyer> jimbaker: I have one feedback on this too
<jimbaker> niemeyer, ?
<niemeyer> jimbaker: It doesn't feel like there's a reason to be using classes and inheritance there
<jimbaker> niemeyer, you mean the use of EC2PortOperation for get_machine_group_name?
<niemeyer> jimbaker: Yeah, and also the classes themselves
<niemeyer> jimbaker: Foo(x).do_bar() can easily be do_bar()
<jimbaker> niemeyer, oh sure, that's just following the existing convention. but i agree about it being unnecessary. if i were to do it again, i would not group it this way for the provider ops as a whole
<niemeyer> jimbaker: Conventions are nice until they are not
<jimbaker> niemeyer, absolutely
<niemeyer> jimbaker: I think this should be fixed.. let's not grow up classes which are really just functions
<jimbaker> niemeyer, so in my case, i'm reading through these operations (hey! command pattern. or something like that) trying to think, what the heck is the reason for them
<jimbaker> because generally in python if we have classes they mean something
<jimbaker> but in this case, even the fact that some params go in __init__, others in run, made no sense
<niemeyer> jimbaker: Overengineering in some cases, caused by us trying to figure a good way to organize code there
<niemeyer> jimbaker: Would you mind to reorganize it as functions and see how it goes?
<jimbaker> so it was definitely unhelpful noise because it made it harder to start working with the code
<jimbaker> niemeyer, sounds good, i can definitely pilot what it could look like
<jimbaker> niemeyer, in general i think the way it's done in the dummy provider is nicer
<smoser> niemeyer, its not in go, but https://gist.github.com/1100458 is a "get the right ubuntu ami" tool
<jimbaker> the other thing that i found annoying was the use of __init__.py when we don't use that convention elsewhere in the code
<niemeyer> smoser: Hah, sweet!
<niemeyer> smoser: I'll dump that in my bin and start playing with it
<niemeyer> smoser: any chance of an entry in the contest for that? ;-)
<smoser> maybe
<smoser> is there an sh2go converter ;-)
<smoser> or maybe i'll just write that instead
<ahasenack> smoser: it's in awk, you mean, not shell
<ahasenack> ;)
<smoser> well, yeah, sh2go would have to include implementations of sed and awk and cat and perl and wget....
<smoser> i leave that as an exercise to you
<fwereade> jimbaker: I'm a bit confused about the machine_id thing: I understand we can't use the instance ID, but is there really no way to figure out the machine ID given a ProviderMachine?
<jimbaker> fwereade, perhaps the better api would be to have the ProviderMachine know its logical machine id
<fwereade> fwereade: I would personally strongly favour that, but not enough to actually disapprove of the branch ;)
<fwereade> er, I have no idea where I got this habit of talking to myself :/
<jimbaker> fwereade, it looks like i'm going to do more refactoring anyway in light of the conversation i just had w/ niemeyer 
<fwereade> jimbaker: I saw that
<jimbaker> it would definitely clean things up, so that's a good thing
<fwereade> jimbaker: I'm not sure the scope we're talking about though
<jimbaker> fwereade, i don't believe the scope is very much at all, fortunately
<fwereade> jimbaker: cool, as long as we're not planning to change every single operation from a class to a function
<fwereade> because that feels a bit drastic to me
<jimbaker> fwereade, i'm not going to do a general refactoring of the code, but a small one to try out just using functions for the securitygroup instead of java-like commands
<fwereade> jimbaker: cool
<jimbaker> fwereade, if that looks good, the next step is to have a refactoring branch that can apply it to the rest of the code in ec2 provider
<fwereade> jimbaker: sounds good to me
<jimbaker> fwereade, maybe that branch can remove having both MachineProvider and ProviderMachine ;)
<fwereade> jimbaker: <3, that gives me a headache
<jimbaker> fwereade, indeed. i know why that came about, but it must die ;)
<fwereade> jimbaker: anyway, I'd like the machine_id thing to go, but it feels separate to me -- we'll need some independent code to make machine_id gettable given a machine
<fwereade> jimbaker: so I don't think it's a reason to disapprove
<fwereade> jimbaker: but I'm not sure how to track my concern -- just file another bug, and point out that it dirties up the port interface?
<jimbaker> fwereade, i think in general we would use "approve" for a branch at this point if the remaining changes are obvious and agreed upon. but it probably is best to just say "needs fixing"
<fwereade> jimbaker: cool, thanks
<jimbaker> fwereade, so no more bug, just point it out in the review
<RoAkSoAx> fwereade: i.
<RoAkSoAx> fwereade: o/
<fwereade> RoAkSoAx: er, what?
<RoAkSoAx> fwereade: was just saying hi (o/)
<fwereade> RoAkSoAx: heyhey :)
<RoAkSoAx> fwereade: heh how's it going
<fwereade> RoAkSoAx: I ought to recognise translated gestures ;)
<fwereade> RoAkSoAx: not too shabby; yourself?
<RoAkSoAx> fwereade: good good
<RoAkSoAx> fwereade: bug squashing day for me
<RoAkSoAx> fwereade: any updates on the key related stuff?
<fwereade_> RoAkSoAx: hey again :)
<RoAkSoAx> fwereade_: hehe troubles with xchat?
<fwereade_> RoAkSoAx: yeah, today seems to be a stuff-breaking-randomly day
<fwereade_> RoAkSoAx: seem to have VMs actually installing though which is reassuring
<RoAkSoAx> fwereade_: yeah either installer or mirror/proxy issue
<RoAkSoAx> fwereade_: any luck on the key thingy?
<fwereade_> RoAkSoAx: niemeyer explained how config stuff gets to other machines, if I think a mo I might find it again ;)
<RoAkSoAx> fwereade_: cool
<RoAkSoAx> fwereade_: no I think that by the time of the sprint, we should have everything merged in trunk, or, at least have a common branch
<fwereade_> RoAkSoAx: yep, at least there's a way we can do it ;)
<RoAkSoAx> fwereade_: cool, let's coordinate that tomorrow as today I'll be working on unrelated stuff
 * hazmat heads out to lunch
<kim0> TeTeT: glad to see you o/ :)
<SpamapS> m_3: hey, how's that NFS formula looking?
<TeTeT> kim0: the class starts in one hour, right?
<kim0> TeTeT: yes
<kim0> thanks man
<SpamapS> RoAkSoAx: hey, how did the cloud days presentation go? I missed it.
<RoAkSoAx> SpamapS: i think it went well
<kim0> m_3: please ping me when u're back :)
<RoAkSoAx> SpamapS: we couldn't demo anything as there were installation/installer issues
<SpamapS> RoAkSoAx: thats pretty understandable given the number of moving parts involved in bare metal deployment.
<RoAkSoAx> SpamapS: indeed
<adam_g> jimbaker: i used your branch pretty extensively last night while working on some formulas, works well!
<jimbaker> adam_g, glad to hear that!
<niemeyer> Lunch time here
<niemeyer> Back for more reviews in a moment
<SpamapS> Hrm.. so I am running into an interesting situation that I think we need an answer to.
<SpamapS> Basically, I have a service config that affects the relationships a service has. I want to be able to set the database that a service uses.
<SpamapS> adam_g: did you see that service configs have landed?
<adam_g> SpamapS: i haven't but thats cool, show me how to use it over beer :)
<SpamapS> adam_g: Indeed. :) It should let you get rid of any hard codes you have in the openstack stuff
<SpamapS> adam_g: can I say that there are formulas for openstack in my talk?
<adam_g> SpamapS: cool, maintaining that has bitten me a bunch 
<adam_g> SpamapS: yah, definitely! i wanted to actually "release" those formulas out of my +junk branches but a bunch of openstack bugs were introduced last week that broke everything
 * SpamapS is almost done w/ slides.. just need to add a few more lolcatz
<SpamapS> adam_g: son of a! ;)
<SpamapS> hrmph.. 13 slides.. 40 minute presentation. I don't know if 3 minutes per slide is a good average
<m_3> SpamapS: sorry, been head-down on a stack for ubuntu-classroom
<SpamapS> m_3: no worries. :)
<m_3> ec2 picks now to be dog-slow
<kim0> m_3: if that demo is broken, feel free to replace with any formula that works :)
<m_3> kim0: thanks
<m_3> kim0: still testing
<kim0> rock n roll
<m_3> kim0: you know how to get the screen bigger in byobu classroom?
<m_3> kim0: it's maxing out at half the screen
<kirkland> m_3: the host session needs to be a bigger terminal
<kirkland> m_3: resize the terminal running the host session, and press F5 in byobu
<m_3> kim0: perfect... thanks!
<SpamapS> m_3: are you practicing or is this going now?
<m_3> kim0: scratch that
<m_3> kirkland: perfect... thanks!
<m_3> SpamapS: at 1800UTC
<m_3> think I'll just set up the stack and talk about it rather than wait for it
<kirkland> m_3: which reminds me, i need to test your updates to byobu-classroom and propose it for principia
<kirkland> m_3: i'll try to do that today
<SpamapS> kirkland: you're a composer, just push :)
<m_3> kirkland: simple update... oneliner
<kirkland> SpamapS: k
<kirkland> m_3: kewl, just want to retest
<fwereade_> must dash, later all
<SpamapS> http://spamaps.org/files/ensemble-oscon-2011.odp
<SpamapS> Note that it doesn't make much sense w/o the notes
<niemeyer> SpamapS: Is it done, or is it to-do?
<SpamapS> niemeyer: Thursday afternoon.
<SpamapS> to-do
<SpamapS> http://www.oscon.com/oscon2011/public/schedule/detail/18367
<niemeyer> SpamapS: Awesome!
<niemeyer> SpamapS: Re. config vs. relations, yeah, we'll need dynamic relations at some point
<SpamapS> niemeyer: I think its a corner case that can be dodged right now
<SpamapS> niemeyer: but one that will come up as diagrams get bigger and bigger
<SpamapS> I wonder if there couldn't just be a command... refresh-relations .. that just tells the local unit agent to re-run all the hooks as if the relation data had changed.
<niemeyer> SpamapS: Absolutely.. I don't have a good design in mind for it yet, to be honest, but I think we can do something good
<niemeyer> SpamapS: It's a bit tricky because we have to take into account dependency resolution too
<niemeyer> SpamapS: So in terms of ordering, we should get dependency resolution in first, so that we can implement this in light of the understanding that dep-resolv will provide
<SpamapS> Yeah.. I'm less excited about dependency resolution than you guys are. :)
<SpamapS> I can't really articulate why tho
<niemeyer> SpamapS: Well, it's not that I'm terribly excited either.  I'm much more excited about stacks
<niemeyer> SpamapS: But feels like a very nice convenience we want to get in place
<SpamapS> Yeah, stacks!
<hazmat> i think dynamic relations i think config based dependencies for generic formulas, that run apps, and have app spec'ified deps
<niemeyer> hazmat: parsing failure :)
<SpamapS> I think hazmat was just using lossy compression
<SpamapS> up the bitrate a bit
<hazmat> connection loss
<hazmat> but just as a common example a rails or wsgi app that checkouts from vcs, and deploys the app.. but it has to assume currently a set of optional/required deps representing app infrastructure choices, does it use memcached or postgres v. mysql v. nosql.
<hazmat> really they're runtime dynamic deps, and what we capture are static ones
<niemeyer> hazmat: Right.. that's exactly the context
<hazmat> niemeyer, effectively we're another cli api.. for declaring additional deps.. although we lack an execution context that's service local rather than unit local.
<niemeyer> hazmat: We don't declare additional deps.. the dependencies can be pre-defined, since they are always fixed
<niemeyer> hazmat: Unless there's some genetic voodoo within the formula :-)
<niemeyer> hazmat: E.g. the hooks will be pre-defined
<niemeyer> There's very cool stuff we can do around this..
<niemeyer> Hopefully before 2017
<hazmat> :-)
<_mup_> Bug #816581 was filed: Ensemble needs easy end user initial configuration <Ensemble:New> < https://launchpad.net/bugs/816581 >
<_mup_> ensemble/expose-hooks r282 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<niemeyer> jimbaker: robust-hook-exit looks nice in general, but I'm missing some of the context about why it's needed
<niemeyer> jimbaker: I've posted some comments there
<jimbaker> niemeyer, ok. i will take a look at your comments
<niemeyer> jimbaker: Feel free to bring any of them up for debate here, please
<jimbaker> niemeyer, yes, i assume you read the corresponding bug?
<niemeyer> jimbaker: I've read your merge proposal description, the code, and the test
<jimbaker> niemeyer, re the refactoring, sounds good to me, again i was following the convention in this code
<niemeyer> jimbaker: Which convention?
<jimbaker> niemeyer, so does it make sense the scenario being described, that a hook script may exit, but w/o its file descriptors being closed?
<jimbaker> niemeyer, the convention of the existing file
<niemeyer> jimbaker: Which convention, specifically?
<jimbaker> niemeyer, the usage of the nested functions cleanup_process, which i followed in doing cleanup_ended
<niemeyer> jimbaker: Ok, I see
<niemeyer> jimbaker: When introducing new logic, some reevaluation is needed to ensure the style used still makes sense
<jimbaker> niemeyer, sure, we just had that discussion earlier ;)
<niemeyer> jimbaker: It might not be a big deal with a small closure.. when you have more than a full screen of closure which doesn't really use the fact it's a closure in a good way, something isn't right
<jimbaker> niemeyer, correct neither nested function closes over variables
<niemeyer> Right, anyway.. mionr
<niemeyer> mionr
<niemeyer> miNNOr
<niemeyer> :-0
<niemeyer> jimbaker: So
<niemeyer> jimbaker: On the actual meaning of the branch
<jimbaker> niemeyer, i think it is captured succinctly in a hook like this one:
<niemeyer> jimbaker: I don't understand the scenario.. a hook exiting without fds being closed?  How's that a problem?
<jimbaker> sleep 0.05 && echo "Slept for 50ms" && sleep 1 && echo "Slept for 1s" && sleep 1000000 &
<jimbaker> some computation, some output
<jimbaker> and of course the very relevant & at the end to fork it in the background
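To make the failure mode concrete, here is a minimal stdlib-only sketch (my own illustration, not Ensemble or Twisted code) of why such a hook "exits" long before it "ends": the backgrounded child inherits the hook's stdout, so the pipe never reaches EOF while the child lives.

```python
import os
import select
import subprocess

# The hook exits promptly, but the backgrounded sleep inherits its
# stdout, so the pipe does not reach EOF while that child is alive.
hook = 'echo "hook output"; sleep 10 &'

proc = subprocess.Popen(["sh", "-c", hook], stdout=subprocess.PIPE)
proc.wait()                                  # "exited": the hook returned
select.select([proc.stdout], [], [], 5.0)    # wait for the hook's output
os.read(proc.stdout.fileno(), 1024)          # drain it

# "ended" additionally requires EOF on stdout; poll briefly instead of
# blocking, since the backgrounded sleep still holds the write end open.
readable, _, _ = select.select([proc.stdout], [], [], 0.5)
at_eof = bool(readable) and os.read(proc.stdout.fileno(), 1) == b""
print(at_eof)   # False: pipe still open even though the hook has exited
proc.stdout.close()
```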
<hazmat> jimbaker, but how is that an exceptional condition outside of it taking a long time
<hazmat> :-)
<niemeyer> jimbaker: Ok.. sorry, I'm not trying to be difficult in any way.  How's that an issue?
<jimbaker> hazmat, niemeyer - this arises in certain formulas where we have badly behaving code
<niemeyer> jimbaker: This feels pretty normal?
<hazmat> rabbitmq specifically per the bug
<jimbaker> so in particular adam_g  brought it up
<hazmat> niemeyer, it shouldn't be normal but it exists
<niemeyer> What exists?
<jimbaker> i worked on the branch because i made the original change from exited to ended semantics
<niemeyer> hazmat: It's a problem, and it exists.. ookaaay :-)
<hazmat> and this adds robustness to how we handle hooks, we had switched in the past to get better testing output
<jimbaker> thomas herve diagnosed it
<niemeyer> jimbaker: What was the problem he diagnosed?
<niemeyer> jimbaker: and how is the branch fixing it?
<hazmat> indeed.. that was very helpful.. he pointed out the difference between processEnded vs processExited on the process protocol
<jimbaker> that child processes were not properly closing file descriptors because of how they were redirecting things
<hazmat> in the context of a rabbitmq, which had some odd behavior per its start scripts
<jimbaker> hazmat, correct
<jimbaker> and apparently this is not so uncommon, and there's an easy fix for us
<niemeyer> jimbaker: How does not closing file descriptors affect the way process ending happens?
<hazmat> niemeyer, its waiting for the file descriptors to be closed
<hazmat> vs waiting for the process to exit
<jimbaker> niemeyer, waiting on process ending means waiting for the FDs to be closed
<jimbaker> niemeyer, so this is generally the right thing to do because it means we fully capture useful stdout/stderr for logging
<jimbaker> niemeyer, i had changed the semantics from exited to ended when i standardized the logging
<jimbaker> niemeyer, as you may recall, we were doing some odd things there that ultimately just made the tests *usually* wait long enough for the logs to be complete
<niemeyer> jimbaker: No, it doesn't mean that in general.. if you fork a process, and wait for it to exit with normal system calls, you don't wait for FDs to be closed
<jimbaker> niemeyer, correct
<jimbaker> niemeyer, so that's the distinction between waiting on process exited vs process ended
<jimbaker> niemeyer, the twisted process protocol supports both, since they both have some nice qualities
<SpamapS> m_3: is your mongo formula in principia yet?
<niemeyer> jimbaker: Which fd is processEnded holding out on?  stdin?
<jimbaker> niemeyer, the FD is going to be stdout or stderr
<jimbaker> or both
<jimbaker> niemeyer, in this particular test hook that i just pasted here, it's stdout
<m_3> SpamapS: nope, I just got hazmat's and stripped it of replication
<m_3> SpamapS: negronjl has one inbound that I expect will be nice
<jimbaker> niemeyer, so because we capture these streams for logging, ideally we work with them
<niemeyer> jimbaker: There's some detail missing still.. if a process exits, you get a SIGPIPE
<jimbaker> niemeyer, in general this doesn't impact hook code - the process exits, it's done for the invoker. however if there's still more output, these are still written to logs
<negronjl> m_3:  I'll push it soon .. 
<niemeyer> jimbaker: and the process ended
<niemeyer> jimbaker: Your example shows a sleep going into background, and never returning
<jimbaker> niemeyer, correct - a child process hanging around, just like what we see in real cases
<jimbaker> niemeyer, who knows when it returns? 
<niemeyer> jimbaker: Hmm, ok, I think I start to see what you mean
<jimbaker> niemeyer, in any event, in that case we reap them because we don't want processes like that around indefinitely
<niemeyer> jimbaker: There's a child process, started by the hook, and we want it to continue running
<niemeyer> jimbaker: Is that the case?
<jimbaker> niemeyer, i chose 5 seconds because it's much larger than reasonable for good children, but doesn't matter in practice even w/ lots of bad children
<niemeyer> jimbaker: Uh.. lost again
<jimbaker> niemeyer, only for a little bit, just to let its logs settle
<jimbaker> the actual reaper time probably would be fine w/ something like 200 ms
<SpamapS> jimbaker: there are daemons that take a long time to start up .. but usually they don't let the parent die until they're ready for it to die.
<jimbaker> SpamapS, correct - this doesn't impact such "good" children
<jimbaker> niemeyer, so we are not trying to solve a larger problem of having time limits on hooks
<SpamapS> In rabbit's case, it seems like we should report a bug in the init script that it needs to run w/ nohup
<jimbaker> SpamapS, correct, that would be the normal simple way to get the desired behavior
<SpamapS> which turns bad children into good children by first redirecting its own fds to dev null or a log
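SpamapS's fix can be sketched the same way (again my own stdlib illustration, with an assumed /dev/null redirect): once the background job redirects its own output, the hook's pipe reaches EOF the moment the hook itself exits.

```python
import subprocess
import time

# The "good child" fix: the background job redirects its own stdout and
# stderr (here to /dev/null), so it no longer holds the hook's pipe
# open. Contrast with 'sleep 30 & echo done', which would keep the pipe
# open for the full 30 seconds.
good_hook = 'sleep 30 >/dev/null 2>&1 & echo done'

proc = subprocess.Popen(["sh", "-c", good_hook], stdout=subprocess.PIPE)
start = time.time()
output = proc.stdout.read()     # reaches EOF as soon as the hook exits
elapsed = time.time() - start
proc.wait()
print(output.strip(), elapsed < 5)
```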
<jimbaker> SpamapS, :)
<m_3> like long-term care insurance
<jimbaker> the analogies just pour in
<SpamapS> m_3: Great example node.js app btw..
<SpamapS> m_3: you may not have gotten much uptake on that... today is node.js day at OSCON ;)
<m_3> cool thanks man
<hazmat> jimbaker, we are and there is a bug open for that, but that has other things it needs to take into account like respecting debug hooks which will break time limits.. a standard hook timeout though needs to be quite large (large installs come to mind).
<m_3> didn't know if I was going into too much detail
<m_3> lots of zzzz's I think
<jimbaker> hazmat, correct, a much larger problem
<SpamapS> m_3: The log of the session may prove useful to someone. :)
<m_3> wish there was a way to get the ajaxterm session too
<niemeyer> jimbaker: Ok, I think I understand, and the approach looks good
<m_3> should create a screencast I guess
<niemeyer> jimbaker: It just needs proper documentation
<jimbaker> hazmat, i think such a solution should allow for measuring *useful* progress back to the agent
<niemeyer> jimbaker: I'll read the handling of processEnded, just to make sure I'm indeed getting what happens there
<jimbaker> niemeyer, cool, i will work on that
<niemeyer> jimbaker: Thanks for the patience explaining it
<jimbaker> niemeyer, np
<jimbaker> SpamapS, speaking of node.js, seeing any golang interest at oscon?
<SpamapS> jimbaker: I'm not there yet. :-/
<SpamapS> trying to keep up with keynotes and goings on so I won't be lost when I arrive on Thursday
<jimbaker> SpamapS, cool, makes sense
<niemeyer> jimbaker: Are you able to reproduce the problem easily?
<adam_g> jimbaker: google people led a 3.5 hour go tutorial this morning
<adam_g> niemeyer: the problem is easily reproduced deploying lp:principia/rabbitmq-server
<SpamapS> Hey guys, I could use feedback on this.. please view w/ the notes .. http://spamaps.org/files/ensemble-oscon-2011.odp
<niemeyer> adam_g: Yeah, by easily I was wondering about a trivial test case
<niemeyer> I suspect the real issue here is that Python is eating SIGCHLD
<niemeyer> which is the standard way by which a parent gets to know it child has died
<jimbaker> niemeyer, pretty certain that the ProcessProtocol stuff is quite robust with respect to that
<jcastro> SpamapS: too many lolcats, that's like 2 years ago
<jimbaker> niemeyer, i can readily see the child process not completing its writes to output before process exit, simply by looping test_invoker
<jimbaker> niemeyer, on this laptop maybe occurs 50% of time (don't recall specifics!) if i don't have the yields on process ended
<jimbaker> niemeyer, so that's often enough that one really doesn't need to loop ;)
<niemeyer> jimbaker: I think I get it.. I'm wondering mostly about this sentence from therve now:
<niemeyer> """I don't think that bug is related to what's done during the install hook, fwiw. It happened to me with an empty install script."""
<niemeyer> jimbaker: There's no "sleep" in this case, as per your example
<niemeyer> jimbaker: So why would it remain un-ended
<ahs3> probably a dumb n00b question...is there a way to have ensemble install a service on one or more existing machines (real ones, not EC2 instances)?  i'm not finding it in the docs, if there is...
<jimbaker> niemeyer, i'm reading that a bit differently - there's some actual hook being executed, just not an install one
<jimbaker> or at least just a trivial "empty" install one
<jimbaker> niemeyer, so in particular i'm looking at reply #9 where we start to move into actual diagnosis
<niemeyer> jimbaker: Ah, interesting, could be
<niemeyer> jimbaker: That'd make more sense
<hazmat> ahs3, that's a good question.. not at the moment for a standalone machine; the work being done today on physical machine integration is in the context of using something like orchestra (cobbler)
<jimbaker> niemeyer, yeah, i'd be very concerned if a empty hook would act like an infinite sleep ;)
<hazmat> for setting up ensemble deployment with a physical machine data center
<niemeyer> jimbaker: Exactly
<jimbaker> niemeyer, but again we would fix that in the other one, bug 705433
<_mup_> Bug #705433: Hooks need to have an enforceable timeout. <hooks> <Ensemble:New> < https://launchpad.net/bugs/705433 >
<niemeyer>     def maybeCallProcessEnded(self):
<niemeyer>         # we don't call ProcessProtocol.processEnded until:
<niemeyer>         #  the child has terminated, AND
<niemeyer>         #  all writers have indicated an error status, AND
<niemeyer>         #  all readers have indicated EOF
<niemeyer>         # This insures that we've gathered all output from the process.
<hazmat> jimbaker, well an empty hook and a timeout are different, the question is really do we want to use processExited instead
<ahs3> hazmat: hrm.  thx.  i thought that might be the case.  what i would especially like is something that assumes a machine is already up, running and available, and then just puts the service on it.
<jimbaker> hazmat, to clarify: this branch now uses processExited
<hazmat> jimbaker, indeed it does both
<niemeyer> jimbaker: Beautiful.. all looks great
<jimbaker> hazmat, the only thing it uses processEnded for is to do the loseConnection handshake. unless the time between processExited and processEnded is larger than a ludicrous amount (5 s), in which case it simply kills the process with loseConnection
<hazmat> jimbaker, yeah.. the branch looked good to me
<niemeyer> jimbaker: Just a sentence or two highlighting the distinction between processEnded and Exited, given that these names are entirely unhelpful
<jimbaker> i expect closing < 50 ms. 5 seconds makes the probability of cutting off a normal close exceedingly low, which seems a reasonable cost
<niemeyer> jimbaker: This, specifically, will bite us when we read it again:
<niemeyer> +        # The corresponding process has ended, so output streams have
<niemeyer> jimbaker: The process has already ended  by the time processExited was called
<niemeyer> jimbaker: (in unix/non-twisted lingo)
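The wait-for-ended-but-reap-stragglers behaviour jimbaker describes can be sketched with stdlib asyncio (a hypothetical illustration, not the Twisted-based Ensemble code; the 2-second grace period stands in for the branch's 5 seconds):

```python
import asyncio

# Wait for "ended" (EOF on the output stream), but only for a grace
# period after "exited"; then give up on descendants still holding the
# pipe. Names and numbers are illustrative only.
GRACE = 2.0   # the branch uses 5 seconds; shortened for this demo

async def run_hook(command):
    proc = await asyncio.create_subprocess_shell(
        command, stdout=asyncio.subprocess.PIPE)
    await proc.wait()                          # "exited": hook returned
    try:
        # "ended": stdout reaches EOF only once every process holding
        # the write end (including backgrounded children) is done
        tail = await asyncio.wait_for(proc.stdout.read(), GRACE)
    except asyncio.TimeoutError:
        tail = None                            # descendants still attached
    return proc.returncode, tail

rc, tail = asyncio.run(run_hook('echo ok; sleep 30 &'))
print(rc, tail is None)   # exited fine, but never "ended" within GRACE
```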
<hazmat> jimbaker, the long inline functions seem a little strange stylistically, but that's minor
<jimbaker> hazmat, yeah, no worries, they are moving to our established convention of _ methods since they are not actual closures (in the sense of actually closing over a non-empty set of variables)
<jimbaker> hazmat, i was simply following the convention in that particular file, but it
<jimbaker> represents somewhat earlier code for us
<niemeyer> jimbaker: The convention is still the same.. we often put short local snippets in-context, but for 40+ lines the convention changes
<hazmat> indeed that was so last year ;-)
<niemeyer> as hazmat says, no big deal.. that's what reviews are for
<jimbaker> niemeyer, oh sure, i think we all understand what's what
<niemeyer> jimbaker: Cool, thanks again for the explanation
<niemeyer> jimbaker: Had no idea about the processEnded concept used by Twisted
<jimbaker> niemeyer, until this came up, neither did i, that's for certain
<niemeyer> hazmat: have you looked over the branch?
<jimbaker> but it makes perfect sense in the overall context of daemonization
<hazmat> niemeyer, i have
<niemeyer> jimbaker: Not sure it does.. the only use case I've seen so far was a bug. ;-D
<hazmat> i think i looked at it when i implemented it, and used exited for the initial implementation, but i definitely understood and agreed why it was switched, and this additional robustness around usage looks good to me
<niemeyer> hazmat: Cool, would you mind putting your concerns in the MP if you have any, or giving it a +1?
<hazmat> niemeyer, sure
<niemeyer> hazmat: Thanks
<niemeyer> hazmat: This MP is empty: https://code.launchpad.net/~hazmat/ensemble/security-connection/+merge/68621
<niemeyer> hazmat: Is it missing a push?
<hazmat> perhaps.. let me check
<hazmat> i think that was one of the borked branches
<hazmat> niemeyer, it says it's up to date at revno 285
<hazmat> niemeyer, to answer the mp question lp is on crack for that branch
<_mup_> Bug #816621 was filed: Ensemble doesn't appear to set up a complete environment while running the installation scripts and hooks <Ensemble:New> < https://launchpad.net/bugs/816621 >
<niemeyer> hazmat: Cool
<niemeyer> I'll step out for a coffee break.. 
<hazmat> negronjl, re cloud foundry bug above, did you  have ruby's eventmachine installed via package locally?
<hazmat> negronjl, i'm wondering if the issue is dep not specified in packaging
<SpamapS> jcastro: I need a new meme. :)
<hazmat> negronjl, the package software is depending on eventmachine, and its not being installed via package per the hook logs, unless there's a private copy to cloudfoundry, it definitely seems like a missing dep issue.
<SpamapS> jcastro: should I go with fuuuuuuuuuuu instead?
<jcastro> SpamapS: you started with a terminator-skynet meme
<jcastro> SpamapS: "ensemble with me if you want to live."
<jcastro> SpamapS: deployment day = judgement day. I mean, there's so many places to go with this
<SpamapS> jcastro: my reason for lolcats was to say that ensemble was invented primarily to help process all of the pictures of cats incoming from smart phone users.. but it never materialized
<SpamapS> jcastro: This is true.. and open source can be John Connor's rebels. ;)
<jcastro> SpamapS: use the fuuuuuu-maker thing from reddit
<jcastro> wait, no, that's a bad idea, that will just lead to hilarity
<niemeyer> hazmat: The contributors page is up-to-date already
<hazmat> niemeyer, url?
<niemeyer> hazmat: http://www.canonical.com/contributors
<hazmat> niemeyer, cool, thanks
<SpamapS> jcastro: I want the last 20 minutes back... distractions... http://spamaps.org/files/hadoop-rage.png
<hazmat> :-)
<jcastro> I knew you wouldn't let me down
<SpamapS> jcastro: Not sure thats hilarity.. its a little too commercial for me. ;)
<negronjl> hazmat:  All the deps are satisfied.  If you run the script manually, everything works just fine.
<negronjl> hazmat:  It only breaks when run via ensemble
<hazmat> SpamapS, slides look nice
<niemeyer> hazmat: lp is seriously wedged on that branch 
<niemeyer> % bzr branch lp:~hazmat/ensemble/security-connection
<niemeyer> bzr: ERROR: Not a branch: "/home/kapil/canonical/ensemble/mine/security/.bzr/pipes/security-connection/".
<niemeyer> hazmat: !!!
<niemeyer> SpamapS: Good stuff in the slides indeed
<hazmat> niemeyer, ugh
<hazmat> niemeyer, locally it looks fine.. trying with push --overwrite
<hazmat> yeah.. still nothing
<niemeyer> hazmat: Trying branching to the side, maybe?
<niemeyer> hazmat: Worth saving the whole tree somewhere else first!
<niemeyer> hazmat: To avoid losing any work
<hazmat> niemeyer, it's the middle branch of a pipeline already, additional branches have pushed successfully
<_mup_> ensemble/trunk-merge r273 committed by kapil.thangavelu@canonical.com
<_mup_> merge trunk
<hazmat> niemeyer, redid the merge proposal with a copy of the branch
<niemeyer> hazmat: Phew, glad it worked
<niemeyer> That was weird
<kirkland> m_3: what's your LP id?
<kirkland> m_3: https://launchpad.net/~mark-mims ?
<kirkland> m_3: okay, I'm sufficiently convinced that's you :-P
<m_3> kirkland: yes, ~mark-mims
<RoAkSoAx> lol
 * RoAkSoAx once again remembers the pains of having a nickname different from my LP id... :)
<kirkland> RoAkSoAx: for the love, yes, standards!
<RoAkSoAx> kirkland: tried to change my nick to lp id but it was simply better to keep it as is cause nobody knew me with the new nickname :)
<niemeyer> hazmat is the review queue lord
<hazmat> niemeyer, slave? ;-)
<niemeyer> hazmat: Nah, I'm the slave
<niemeyer> :-)
<lynxman> hey guys, just wanted to share the news
<lynxman> Ensemble is already in Macports
<lynxman> lynxman@kreuzberg:~ $ sudo port search ensemble
<lynxman> ensemble @0.5 (python, net) Ensemble is a next generation service orchestration framework.
<lynxman> err false alarm, I'm too sleepy :)
<lynxman> sorry
<lynxman> :D
<SpamapS> well, cool anyway ;)
<m_3> +1
<niemeyer> lynxman: neat :-)
<lynxman> thought the package was just added to the repo, as soon as it's there I'll let you know :)
<niemeyer> jimbaker: Finally got to review your debug-log-relations branch
<niemeyer> jimbaker: Some comments, but good stuff overall
<niemeyer> jimbaker: Sorry for the delay
<daker> hazmat, sorry to disturb you, on this page http://www.canonical.com/contributors there is no mention of "ensemble"
<niemeyer> daker: Indeed.. feel free to send it to kapil himself
<niemeyer> daker: In addition to the email there
<daker> niemeyer, i send what? don't tell me i need to download the pdf, fill it in with my name, then scan it.
<niemeyer> daker: Ok, I won't tell it.. :-)
<hazmat> daker, afaik, that's what needs to be done for contribution to any canonical sponsored software project, i guess a photo would suffice in lieu of a scan
<daker> that sucks :/
<niemeyer> daker: Sorry for the boilerplate.. note this is somewhat normal on contributions 
<SpamapS> Heh.. so.. I'm preparing to upload trunk to Ubuntu .. and I've started the process of running the test suite when the package builds.
<SpamapS> It doesn't fail the build..
<SpamapS> But it will record the results in /usr/share/doc/ensemble/test-results.txt ... so we can start the process of fixing bugs where tests fail on the buildd.
<_mup_> Bug #816736 was filed: debian dir gets in the way of packaging and is no longer necessary <Ensemble:New> < https://launchpad.net/bugs/816736 >
#ubuntu-ensemble 2011-07-27
<_mup_> ensemble/robust-hook-exit r290 committed by jim.baker@canonical.com
<_mup_> Refactoring, doc strings, and better comments wrt review
<_mup_> ensemble/debug-log-relation-settings-changes r274 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<kim0> Morning all
<jamespage> morning
<jamespage> what do I need to do to get the jenkins and jenkins-slave formulas into principia? (see bug 793735)
<_mup_> Bug #793735: import jenkins formula <new-formula> <Ensemble Formulas:In Progress> < https://launchpad.net/bugs/793735 >
<jamespage> SpamapS: still around?
<SpamapS> jamespage: insomnia seems to have caught me tonight. whats up?
<jamespage> SpamapS: hey - I wanted to move forwards with the jenkins* formulas - wondered what needed to be done to get them accepted into principia?
<SpamapS> Ok, I went ahead and approved you for ensemble-composers
<SpamapS> bzr push lp:principia/jenkins will work.. but bzr push lp:principia/jenkins-slave will run into the dreaded "no package exists" ugh..
<jamespage> SpamapS: thanks - I'll fake something in a PPA to work around that :-)
<jamespage> cheers
<SpamapS> sweeeet
<jamespage> SpamapS: both branches now pushed...
<SpamapS> jamespage: fantastic
<jamespage> I'll prob do some further work on them when jenkins lands in Oneiric
<SpamapS> I saw that you are just blocked on NEW! :)
<jamespage> yep
<jamespage> 6 packages pending and then it's in
<adam_g> SpamapS: any specific reason why default_storage_engine is set to innodb on install of mysql via the formula?
<SpamapS> adam_g: because MyISAM is a piece of S*** that should die an unholy death. :)
<SpamapS> Err, I'll rephrase ina positive light
<SpamapS> Because every time an alter table converts to InnoDB from MyISAM, an angel gets his wings.
<adam_g> hehe
<adam_g> wondering if it might be better to install defaults via install hooks, and let users tweak via their formula configs
<SpamapS> Yeah its probably a good idea to just make it the default
<SpamapS> I did it at first when I was doing the master/slave stuff to make the snapshot simpler.
<adam_g> ah
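For reference, the setting under discussion is a one-line server default; the config file path below is illustrative, not necessarily where the formula writes it:

```ini
# /etc/mysql/conf.d/ensemble-defaults.cnf  (illustrative path)
[mysqld]
default_storage_engine = InnoDB
```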
<SpamapS> wtf are you doing awake?
 * SpamapS should be sleeping too.. :-P
<niemeyer> Good morning all
<wrtp> hiya
<niemeyer> wrtp: Hey!
<wrtp> niemeyer: i was looking at laptops to run ubuntu on. any good recommendations?
<wrtp> (i saw the canonical recommended laptop list)
<niemeyer> wrtp: I like the thinkpad series, and it's fairly common within Canonical
<wrtp> i've still got an old thinkpad which i really liked
<niemeyer> wrtp: Got the T410 at the moment
<wrtp> (T21 possibly)
<wrtp> do they still have three mouse buttons and a nipple?
<niemeyer> wrtp: That's the second one.. my last one lasted 4+ years
<wrtp> mine *should* be still working except the display went dodgy
<niemeyer> wrtp: The model I use has both the nipple and the trackpad
<niemeyer> wrtp: I tend to pop the nipple out
<wrtp> which is a pity. i ran plan 9 on it for years.
<niemeyer> (context is everything! ;-)
<wrtp> i like the nipple much better than the trackpad. good for chording.
<wrtp> :-)
<wrtp> i liked the fact that the display was 1400x1050
<wrtp> but i doubt i'd get one similar now
<niemeyer> Yeah, they're pretty good laptops overall
<wrtp> battery life?
<niemeyer> wrtp: I'm pretty sure they still exist
<niemeyer> wrtp: Long.. still get 5h+ nowadays
<niemeyer> wrtp: Got an SSD as well
<wrtp> not bad. and what about the X series? are they much smaller?
<wrtp> SSD only? or SSD + hard drive?
<niemeyer> wrtp: SSD only
<niemeyer> wrtp: They are generally smaller, IIRC
<wrtp> i bet that makes things fast.
<niemeyer> wrtp: Like 13" or less
<niemeyer> wrtp: It does.. boot times are unbelievable nowadays
<wrtp> ok. i think T series it is
<jcastro> wrtp: the T410's and 420's seems to be popular at Canonical
<wrtp> jcastro: thanks. BTW what's the difference between T410 and T420? the lenovo web site does not seem to want to talk about the T410
<jcastro> wrtp: the 420 replaces the 410
<wrtp> oh, that's easy then :-)
<wrtp> i can't believe that displays have got *smaller* since i last bought a thinkpad in 2003!
<kim0> SpamapS: hmm we're being compared to cloudformations .. http://cloud.ubuntu.com/2011/06/so-what-is-ensemble-anyway/#comment-1635 
<kim0> I think cloudformations only launches a collection of machines and that's it .. it doesn't really manage them afterwards ?!
<jcastro> wrtp: they make "s" series with higher res screens, they are hard to find and more expensive, search for 410s and 420s.
<jcastro> kim0: wow, some of the comments in that article, heh
<niemeyer> wrtp: Yeah, I'm not a big fan of the widescreen laptops either..
<niemeyer> wrtp: But I guess everyone else has won. ;-)
<hallyn> niemeyer: :(
<wrtp> jcastro: s series are still only 900 vertical pixels. my old one was 1050...
<wrtp> niemeyer: yeah. boo hiss. i like my vertical space. still i'll survive!
<jcastro> yeah but it's like they all stopped making 4:3
<hazmat> wrtp, the x220 is doing pretty nice by me, although i think there are some sandy bridge graphics updates that need oneiric, i'm using classic no effects.. the x220 can take two ssd drives (msata, and regular), although the regular has to be a 7mm device. speed and battery life are both very nice
<hazmat> the ips screen rocks as well
<hazmat> they also sell an optional battery slice for the x220, which can extend battery life to the 20+ hr mark in addition to a 9-cell
<hazmat> the 's' identifier on a series identifies switchable/discrete graphics afaik.. they do sell high quality ips screens on some models without the 's'
<jcastro> hazmat: yeah I have an X220 and it's pretty awesome
<hazmat> niemeyer, ping
<niemeyer> hazmat: Yo
<hazmat> i took a step back and looked at what's needed to finish up security. there's a question i'd like to get your feedback on.. do you have time for a voice chat?
<niemeyer> hazmat: Not right now, I'm about to leave for lunch
<niemeyer> hazmat: and some friends just arrived to have lunch with us (which is why it took a moment, sorry)
<niemeyer> hazmat: But will be happy to talk after lunch
<hazmat> niemeyer, nevermind, there really is only one way to look at the problem.. i think i've got it worked out, thanks and enjoy
<hazmat> i'll paste my internal dialogue later ;-)
<niemeyer> hazmat: Cool.. I've started looking at your new security branch already, btw
<niemeyer> hazmat: It worked this time
<hazmat> niemeyer, re the question.. https://pastebin.canonical.com/50336/
<hazmat> niemeyer, i'm going to add to the add_machine_state, add_unit_state to create otp principals and store the token directly on the relevant states
<niemeyer> hazmat: Cool, I'll put some thinking on that after lunch.. now I really need to step out!
<niemeyer> biab
<_mup_> ensemble/expose-provider-ec2 r301 committed by jim.baker@canonical.com
<_mup_> Assign the provider machine its machine id in the provisioning agent to simplify provider APIs
<SpamapS> negronjl: ping?
<robbiew> m_3: ping
<_mup_> ensemble/expose-provision-machines r292 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<_mup_> ensemble/expose-provision-machines-reexpose r301 committed by jim.baker@canonical.com
<_mup_> Merged upstream branch expose-provision-machines
<negronjl> SpamapS: pong
<SpamapS> negronjl: wondering about your gem issue.
<negronjl> SpamapS: testing a workaround now.  I'll let you know
<negronjl> SpamapS:  successfully worked around the issue by packaging bundler into a .deb ( so no gem install anymore ).
<negronjl> SpamapS:  however the issue with ensemble still persists.  we just found a fix around the issue.
<_mup_> ensemble/expose-provider-ec2 r302 committed by jim.baker@canonical.com
<_mup_> Remove direct passing of machine_id for EC2 provider implementation and fix incorrect usage of Instance instead of ProviderMachine
<SpamapS> negronjl: heh.. it really is a bug in gem... it shouldn't expect a login environment.
<robbiew> SpamapS: hey...do you have the link to that google doc we had in dublin, listing all the projects we wanted to work with, e.g. HotandHairy ;)
<SpamapS> robbiew: I think jcastro and m_3 have it
<robbiew> jcastro: ^^...can you share that with me?
<jcastro> on it
<wrtp> hazmat: thanks, that's useful
<jcastro> robbiew: doc title is "Ensemble & Principia"
<jcastro> I don't own the doc so it won't let me reshare it with you
<robbiew> jcastro: thnx
<jcastro> I can jet you a copy via mail though if you just want to read it
<jcastro> also, all the hot and hairy ones I filed, and have a "hot" tag in lp
<robbiew> jcastro: who owns the doc, sabdfl?
<m_3> robbiew: hey
<jcastro> robbiew: jono
<jcastro> robbiew: he likely owns all the other messaging ones too, I'll send him a note to add you as an owner to all of them
<robbiew> jcastro: ffs
<robbiew> jcastro: send me the link
<robbiew> I should be able to view it...not showing up in a search though
<robbiew> m_3: nevermind...jcastro responded ;)
<m_3> cool... yeah, I'm not the owner either
<jamespage> jcastro: ep-lite rocks BTW
<jamespage> sooo much lighter than etherpad - it even runs in a t1.micro
<m_3> jamespage: are you using that standalone or as part of some other "conference" stack?
<jamespage> m_3: so we setup etherpad for UDS - but it was hard and the package is pretty ugly
<jamespage> I just tried out ep-lite - http://ec2-46-137-10-1.eu-west-1.compute.amazonaws.com:9001/p/pad-with-daviey
<jamespage> really small footprint compared to etherpad
<jamespage> really small
<m_3> ep-lite should just require node and npm
<jamespage> m_3: yep - that was pretty much it
<m_3> awesome
<jamespage> npm pulled the rest of the deps
<jamespage> it can backend to sqlite or mysql
<m_3> did you do a formula?  that's on my list for node apps
<SpamapS> jamespage: what services does it use for data storage?
<SpamapS> oh
<SpamapS> should read the whole conversation before joining
<jamespage> m_3: not yet
<jamespage> m_3: would you mind if I had a go at one?
<m_3> jamespage: sure man... go ahead
<jamespage> m_3: ta
<m_3> jamespage: I've got some node/npm boilerplate you're welcome to
<jamespage> m_3: point me at it :-)
<m_3> lemme get it into lp (in github at the moment)
<jcastro> jamespage: oooh, very nice.
<jcastro> jamespage: yeah the page made a bunch of performance claims, but I wasn't ready to believe it until someone tried it. Great to hear it's slimmed down though
<jcastro> it'd be a heck of a nice formula to have handy for conferences, LUGs, etc.
<jcastro> and would also be a nice demo one too, since you could just fire it up, give people  the url, and then people could play with it right there
<m_3> jamespage: lp:~mark-mims/+junk/nodeapp
<jamespage> m_3: ta
<niemeyer> Yo!
<m_3> niemeyer: hey
<niemeyer> m_3: Hey Mark
<niemeyer> hazmat: SO, in the paste you provided earlier, it's not clear to me what you mean with "OTP serialization directly to the relevant nodes"
<hazmat> niemeyer, basically the otp principal creates a permanent named principal with credentials stored to the otp node, along with an acl using the otp credentials; the otp principal can be serialized to a path:otp_user:otp_password triple that can be used to retrieve the permanent credentials
<hazmat> niemeyer, in particular i'm saying that machine_state and unit states would have that otp serialization directly as part of their contents
<niemeyer> hazmat: That's the bit I don't get
<hazmat> this allows say the  provisioning agent to access it from the machine state and then launch the machine agent with it
<niemeyer> hazmat: Wasn't the plan to have a location where these OTP details are managed?
<niemeyer> hazmat: It sounds like you're saying that OTP will be spread around, but I'm not sure if that's the case.  Is it?
<hazmat> niemeyer, right but the otp credentials themselves need to be passed between multiple processes. i thought it was just a one-time transfer from the launching process to the launched process, but really it's needed much earlier (at cli time) so we can associate the proper acl with the named principal on created nodes
<niemeyer> hazmat: The OTP principle and the timing sounds fine.. I'm wondering about the organization of nodes
<hazmat> niemeyer, the permanent principal that the otp stores is stored in only one place, the otp serialization (access to that permanent principal) is stored on the relevant domain object for an agent, so that the process launching the domain object can access it to pass along to the domain agent
<niemeyer> hazmat: Hmm.. it sounded to me like having a location where these exchanges happen would be cleaner
<niemeyer> hazmat: Otherwise we can't really tell at any given point in time which OTPs are unclaimed, for instance
<niemeyer> hazmat: Without having to dig through node content
<hazmat> niemeyer, the otp is gone after use, and the launching process should remove it from the state when launching the domain agent
<niemeyer> hazmat: Yeah, the above comment takes that into account
<hazmat> niemeyer, on phone with isp.. trying to resolve internet issues.. bbiam
 * hazmat is over his isp.. time for a new one.. 
<hazmat> niemeyer, the unclaimed otps are those that still exist; the otp serialization on the domain state is removed prior to the launch of the domain agent
<hazmat> there is a defined location for the exchange, the otp node itself, but that has acl protection to only allow access to someone with the otp credentials
<niemeyer> hazmat: Ok, so maybe I misunderstood what you mean there
<niemeyer> hazmat: This, for instance, makes it sound otherwise: "the otp serialization (access to that permanent principal) is stored on the relevant domain object for an agent"
<hazmat> the otp credentials are stored transiently on the domain state, so we can allow for the indirection necessary to reference the named principal during domain node creation, and so the process that will create the corresponding domain state agent can access it to pass along
<hazmat> i originally was going to forgo this and just have the otp created by the launching process, but its needed much earlier by the cli to associate the acls
<hazmat> to domain objects created by the cli
<niemeyer> hazmat: Sorry.. I still don't understand
<niemeyer> hazmat: What I mean is this..
<niemeyer> There's a machine node
<niemeyer> /machines/machine-0
 * hazmat nods
<niemeyer> This node is protected by an ACL so that it can only be read by relevant parties
<hazmat> and a corresponding /otp/otp-xyz node
<niemeyer> Yes..
<niemeyer> When that gets put in place, which protection is /machines/machine-0 taking, and what's inside /otp/otp-xyz?
<niemeyer> hazmat: Lacking some interactivity.. maybe we should video?
<hazmat> niemeyer, i disconnected again..  could you repeat last line?
<niemeyer> <niemeyer> When that gets put in place, which protection is /machines/machine-0 taking, and what's inside /otp/otp-xyz?
<niemeyer> <niemeyer> hazmat: Lacking some interactivity.. maybe we should video?
<hazmat> niemeyer, these are my last comments not sure what got missed https://pastebin.canonical.com/50354/
<hazmat> niemeyer, sure
<niemeyer> hazmat: Ok, I think we're on the same page
<niemeyer> hazmat: The point I was missing was really about the "the otp serialization (access to that permanent principal) is stored on the relevant domain object for an agent" comment
<niemeyer> hazmat: It felt like the domain object itself (e.g. /machines/machine-0) was protected by the OTP, and the real password was within it
<hazmat> niemeyer, okay.. so that gets removed effectively when the domain agent is launched.
<niemeyer> hazmat: But the description in the paste clearly states otherwise, so it's all good
<hazmat> niemeyer, cool
<niemeyer> hazmat: What gets removed when the domain agent is launched?
<hazmat> niemeyer, the otp serialization (otp_node_path:otp_user:otp_pass) on the domain state
<niemeyer> hazmat: Ok, we're still out of sync apparently.. you mean the domain object (/machines/machine-0) is protected by the OTP?
<hazmat> niemeyer, no.. but the domain object has the otp serialization in it, after the launch of the domain agent, that data is stale
<niemeyer> hazmat: Why does it need it?
<hazmat> niemeyer, the domain agent doesn't need the otp serialization in the state, but the process that launches the domain agent does so it can pass it to the domain agent
<niemeyer> hazmat: Ahh, I think I get it, ok
<niemeyer> hazmat: Nice, makes sense
<niemeyer> hazmat: That was the confusion.. it sounded like the OTP was *protecting* the domain node
<hazmat> niemeyer, yeah.. after i thought about it there really weren't any options to work around it.. we need the named principals referenceable from the cli at state creation time; by the time the system is getting around to launching the domain agent, it's too late for most of the acl grants
<hazmat> niemeyer, ah.. yeah. the otp is serialized in the domain node, and only protects the named principal in the otp node
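[Editor's note: the colon-joined OTP serialization mentioned above (`otp_node_path:otp_user:otp_pass`) could be handled by helpers like these. This is a hypothetical sketch, not the actual Ensemble code; the function names are invented.]

```python
# Hypothetical helpers for the "otp_node_path:otp_user:otp_pass"
# serialization stored transiently on the domain node.

def serialize_otp(otp_node_path, otp_user, otp_pass):
    """Pack the OTP reference into the single string kept on the
    domain state until the domain agent is launched."""
    return ":".join([otp_node_path, otp_user, otp_pass])

def parse_otp(data):
    """Recover the three fields; only the first two colons delimit,
    so the password itself may contain colons (the node path is
    assumed not to)."""
    otp_node_path, otp_user, otp_pass = data.split(":", 2)
    return otp_node_path, otp_user, otp_pass
```

The launching process would read this off the domain state, remove it, and hand the credentials to the domain agent it starts.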
<hazmat> niemeyer, so i took a step back yesterday to try and figure out how many more branches and work are needed to get both identity and security policies activated.. being realistic its probably about 1.5 weeks.
<niemeyer> hazmat: That's fine
<niemeyer> hazmat: Things are looking good.. and I won't do anything else today before killing that review queue :)
<hazmat> niemeyer, cool
<niemeyer> hazmat: First review on https://code.launchpad.net/~hazmat/ensemble/security-connection-redux/+merge/69369 delivered
<niemeyer> hazmat: We'll probably need a voice conversation on some of the topics there
<_mup_> ensemble/expose-provider-ec2 r301 committed by jim.baker@canonical.com
<_mup_> Recover from bzr error
<_mup_> ensemble/expose-provider-ec2 r302 committed by jim.baker@canonical.com
<_mup_> Merged trunk and resolved conflicts.
<jimbaker> So much for that attempt... argh, the only thing worse than a complicated conflict is bzr breaking down
<jimbaker> (in addition to the conflict!)
<niemeyer> jimbaker: How's it breaking down?
<jimbaker> niemeyer, lost some of its metadata
<niemeyer> Huh
<jimbaker> i can recover the diff, but this is the context of merging upstream and resolving all the conflicts that was generated with the recent provider refactoring
<niemeyer> hazmat: ping
<niemeyer> hazmat: Not sure if your connection is flaky still, or if you're off
<hazmat> niemeyer, back, its been flaky still, should have a new isp on monday though
<niemeyer> hazmat: Cool.. not sure if you got this:
<niemeyer> <niemeyer> hazmat: First review on https://code.launchpad.net/~hazmat/ensemble/security-connection-redux/+merge/69369 delivered
<niemeyer> <niemeyer> hazmat: We'll probably need a voice conversation on some of the topics there
<hazmat> niemeyer, nope i didn't thanks for the replay
<niemeyer> hazmat: Oh my..
<jimbaker> i hate to do this, but i'm going to break the commit into two pieces this time (the merge AND the conflicts)
<niemeyer> hazmat: I think I messed up something
<niemeyer> hazmat: Feels like I already reviewed security-policy-rules
<niemeyer> hazmat: Hmm.. the connection-redux branch probably had it embedded
<hazmat> niemeyer, yeah.. there are pre-requisites on all the merge proposals
<niemeyer> hazmat: Ok.. I'll copy over the review..
<niemeyer> hazmat: Feel free to address specific points on the respective branches.
<niemeyer> hazmat: This way you can merge stuff faster
<_mup_> ensemble/expose-provider-ec2 r301 committed by jim.baker@canonical.com
<_mup_> Merged upstream expose-provision-machines-reexpose (conflicts later)
<niemeyer> hazmat: Alright, I think I fixed the mess
<hazmat> niemeyer, your point #8 on security-connection bears some thinking about
<hazmat> that's definitely got some implications for the rest of the approach
<niemeyer> hazmat: security-policy-rules is the one that requires conversations
<hazmat> niemeyer, yup
<niemeyer> hazmat: security-connection-redux is back in review with a +1
<hazmat> niemeyer, cool
<niemeyer> hazmat: Only minor points there
<hazmat> niemeyer, re why #2 on connection, the reason for the mixin is to ease testing; the ssh conn requires the ability to ssh into localhost non-interactively.
<hazmat> to test based on usage
<niemeyer> hazmat: It can be a base then, and live within the same package
<niemeyer> hazmat: Multiple-inheritance, separate packages, mixins.. feels like a lot for overloading a method
<hazmat> niemeyer, sounds good, i'll probably end up with a second connection class using the mixin, to have policies active before we have transport level security for intra-environment communications
<niemeyer> hazmat: You don't need to have a second class using the mixin.. Just use that same base
<hazmat> niemeyer, ah.. base, right, yeah.. that makes more sense
<niemeyer> hazmat: Yeah ZK => SZK => SSHSZK
<hazmat> the SZK name is a bit of a misnomer.. since it's not transport level.. if you have an idea on a rename, i'm all ears. i was thinking PolicyConnection.. but that sounds strange as well.
<_mup_> ensemble/expose-provider-ec2 r302 committed by jim.baker@canonical.com
<_mup_> Resolved text conflicts, but some merge problems remain
<niemeyer> hazmat: That wasn't a naming suggestion :-)
<niemeyer> hazmat: Just illustrating the inheritance
<niemeyer> hazmat: Hmm
<hazmat> niemeyer, understood, just making a naming request additional to that
<niemeyer> hazmat: PolicyConnection?
<hazmat> niemeyer, yeah.. i guess that makes as much sense.. it just felt strange
<niemeyer> hazmat: RuledConnection :-)
<hazmat> TheOneRing with s/connect/wear, doc string.. "And in the darkness bind them"  ;-)
<hazmat> yeah.. policyconn sounds good
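[Editor's note: the single-inheritance layering agreed above (plain connection, policy-aware base, SSH variant) might look roughly like this. Only the PolicyConnection name comes from the discussion; the other class names and the string-tagged `connect` bodies are invented for illustration.]

```python
# Sketch of the ZK => SZK => SSHSZK chain niemeyer illustrates,
# using a base class rather than a mixin to overload one method.

class Connection:
    def connect(self):
        return "plain"

class PolicyConnection(Connection):
    # Activates the security policy on top of the plain connection.
    def connect(self):
        return "policy+" + super().connect()

class SSHPolicyConnection(PolicyConnection):
    # Further wraps the policy-aware connection in an SSH tunnel.
    def connect(self):
        return "ssh+" + super().connect()
```

A single inheritance chain avoids the multiple-inheritance/mixin machinery while still letting each layer extend `connect`.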
<_mup_> ensemble/states-with-principals r298 committed by kapil.thangavelu@canonical.com
<_mup_> a new method to OTPPrincipal, that creates a principal, adds it to token db, and returns the otp serialized data in one shot, also a general reuse test function to enable otp cleanup from tests that may encounter them.
<_mup_> ensemble/expose-provider-ec2 r303 committed by jim.baker@canonical.com
<_mup_> Added missing machine_id settings to mocks in test_launch
<hazmat> niemeyer, the only place we really need topology data for things that might not be in the topology is the unit, since we need to retrieve the service
<hazmat> niemeyer, we could remedy by just putting the service in the initial data for the unit node
<hazmat> although that would change the rule interface to make them content based..
<_mup_> ensemble/expose-provider-ec2 r304 committed by jim.baker@canonical.com
<_mup_> Fix mock tests around the merged ProviderMachine
<hazmat> perhaps a slippery slope
<niemeyer> hazmat: Not sure I get what you mean.. there are more cases of topology being used for things that might not be in the topology in that branch
<hazmat> niemeyer, most/all of the other use is for relations
<niemeyer> hazmat: Yeah.. but e.g.
<niemeyer> hazmat: How can you assign permissions to /relations/relation-10 based on the users of this relation, given that the id was just created?
<hazmat> niemeyer, yeah.. the relation top level node also needs some consideration
<niemeyer> hazmat: The path doesn't even include the sequence number!
<hazmat> niemeyer, it could also be addressed the same way, using content
<hazmat> as part of the rule interface
<niemeyer> hazmat: Using content in what sense?
<hazmat> yeah.. as i noted in the mp, i regard that branch as the most incomplete of the bunch.. it's definitely going to need more work.. i just wanted to get more discussion on the approach
<hazmat> niemeyer, node data 
<hazmat> niemeyer, for a relation including the services and their roles.. for the unit node including its service
<hazmat> the rule could then dispatch both on path and content, which as you correctly point out is needed for any sequence node acl to be contextually aware
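[Editor's note: a rule dispatching on both path and content, as hazmat proposes, could be sketched like so. The rule signature, the JSON content layout, and the function name are assumptions for illustration, not the real interface.]

```python
import json
import re

def relation_acl_rule(path, content):
    """Return the principal names to grant on a relation node, or
    None if this rule doesn't apply.  A sequence node such as
    /relations/relation-10 can't be fully identified by path alone,
    so the rule also inspects the node's content."""
    if not re.match(r"/relations/relation-\d+$", path):
        return None
    data = json.loads(content) if content else {}
    # Assumed content layout: participating services are listed
    # under a "services" key.
    return sorted(data.get("services", []))
```

The same callable shape works for path-only rules (ignore `content`) and content-aware ones.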
<niemeyer> hazmat: Do we have that data there today?
<hazmat> niemeyer, not in the relation-id node.. we do have them in the role container node below
<hazmat> most of the domain object nodes are empty nodes at create time
<hazmat> machines, units, relations, etc.
<niemeyer> hazmat: It feels awkward to be sending data to zookeeper to fix a deficiency in our API
<hazmat> niemeyer, it's effectively just storing identity information on the relevant domain nodes.
<niemeyer> hazmat: It's storing data in zookeeper in a place that is not necessary, besides for fixing a deficiency in our API
<niemeyer> hazmat: It's indeed a slippery slope..  I'd like to take a step back and ponder for a while about possible approaches that avoid that kind of cross-dependency entirely
<hazmat> niemeyer, well.. we can choose not to store it ;-)
<niemeyer> hazmat: LOL
<_mup_> ensemble/expose-provider-ec2 r305 committed by jim.baker@canonical.com
<_mup_> Fixed remaining mocks in ec2 provider tests
<hazmat> one of the other nice things i like about the policy rule approach: it's very easy to apply the acls to an entire tree, or diff a tree and see if there are acls missing, etc., or do an upgrade on them en masse. it's just a tree walk reusing the same rule interface
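[Editor's note: the tree-walk audit hazmat describes might be sketched like this, with a plain dict standing in for the live ZooKeeper tree; the function name and dict layout are invented.]

```python
def audit_acls(tree, rules, path=""):
    """Walk the node tree depth-first and return the paths that no
    policy rule covers -- i.e. the nodes with ACLs missing.  `tree`
    maps node name -> dict of children; `rules` is a list of
    callables taking a path and returning truthy on a match."""
    missing = []
    for name, children in sorted(tree.items()):
        node_path = path + "/" + name
        if not any(rule(node_path) for rule in rules):
            missing.append(node_path)
        missing.extend(audit_acls(children, rules, node_path))
    return missing
```

Because the audit reuses the same rule callables used at creation time, diffing or upgrading ACLs en masse stays a single tree walk.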
<hazmat> niemeyer, in some sense the api we have right now is the strange one; we have domain objects with no identity information, because we store identity in a secondary index, but the object in isolation lacks any context.
<hazmat> i'll think about alternate approaches and we can revisit, i should continue on with the identity work
<niemeyer> hazmat: That's not entirely true
<niemeyer> hazmat: The object owns its identity, and there is context about what is there in most contexts
<niemeyer> hazmat: The "secondary index" is not an index, but a relationship description
<niemeyer> hazmat: that's also interesting, because what is being looked for is not information about the domain object either, but about its relations
<niemeyer> Well, for units that's not true
<_mup_> ensemble/expose-provider-ec2 r306 committed by jim.baker@canonical.com
<_mup_> Pass through machine data to DummyLaunchMachine.start_machine to fix common tests
<niemeyer> hazmat: Btw, _very_ nice break up of branches
<niemeyer> hazmat: Thanks a ton for that
<niemeyer> It'd be crazy otherwise
<hazmat> niemeyer, np... hoping to continue that with the rest of the work
<niemeyer> hazmat: I'm getting to the end of the pile.. it feels like that issue we're both brainstorming on is the only critical..
<niemeyer> hazmat: Let's think some more on that and tomorrow we can catch up for a decision
<hazmat> niemeyer, yeah.. i'm going to think about it some more, we should reconnect about it tomorrow.
<niemeyer> hazmat: +1 :)
<hazmat> niemeyer, re identity, it's not true for unit or relation nodes; they only have identity within the topology, and those are both the ones the topology gets consulted for by the rules.
<niemeyer> hazmat: The _identity_ is the node name..
<hazmat> niemeyer, unit-000000? relation-00000? without consulting the topology, the unit doesn't know its service, nor the relation its services.
<niemeyer> hazmat: Exactly.. it doesn't know its relationships..
<hazmat> niemeyer, it's more than just relationships.. although arguably the relationships are lending identity here.. but for example the unit doesn't even know its name
<niemeyer> hazmat: A bit like saying that the identity of /home/hazmat is in /etc/passwd..
<niemeyer> hazmat: There's information in there about it, but it stands on its own
<hazmat> niemeyer, but i know hazmat is the user name just from the path.. in the case of units.. the name itself is unknown
<niemeyer> hazmat: hazmat is your id.. like unit-000000
<niemeyer> hazmat: Your name is K.T.
<niemeyer> hazmat: and is in /etc/passwd
<niemeyer> hazmat: This is another thing I'm wondering about that.. it's not clear why we're using the service name rather than the id on the ACLs
<hazmat> niemeyer, we probably should be using the id everywhere for acls
<hazmat> niemeyer, i started doing that when i was working on machines more recently.. the machine_state.id == 0, 1, 2.. is not particularly useful in this context for identification
<hazmat> er. internal_id everywhere
<niemeyer> hazmat: Uh.. why not?  It's effectively the same thing?
<hazmat> niemeyer, because the acl is a global namespace of principals, we should be able to identify a principal from its name; names like '0', '1' are rather ambiguous compared to 'machine-xyz'
<niemeyer> hazmat: Gotcha
<hazmat> niemeyer, and in the case of service names, we don't prevent service name reuse
<niemeyer> hazmat: Either way, "0" in this context is akin to "wordpress/2" for a unit name.. it's really oriented for users
<hazmat> yup
<niemeyer> hazmat: When we're handling it internally, internal_id is good
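[Editor's note: the naming point can be made concrete with a trivial, hypothetical helper: since the ACL principal namespace is global, a bare sequence id like "0" is ambiguous, while a qualified internal id like "machine-0" is self-describing.]

```python
def principal_name(kind, sequence_id):
    """Qualify a bare sequence id for the global ACL principal
    namespace: '0' is ambiguous, 'machine-0' is not."""
    return "%s-%s" % (kind, sequence_id)
```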
<hazmat> hmm.. hard to store the otpprincipal data on the domain state with a known principal name against a sequence node, the name is ..
<hazmat> perhaps an additional topology index matching domain objects to principal names
<niemeyer> Yeah.. trying to lock the drawer with the key inside
<niemeyer> hazmat: Actually, not really.. what's the actual issue?
<niemeyer> hazmat: The otp just needs to be created beforehand
<niemeyer> hazmat: so that the name may be stored within the domain node, if I get what you mean
<niemeyer> Okay.. I'm heading to a dinner with a friend that is in town.  Will be happy to talk about these details tomorrow morning.
<hazmat> niemeyer, the otp creates a named principal; typically that should correspond to the identity of the domain object, but for sequence nodes, identity is rather ambiguous.
<hazmat> i could just use random names based on type...
<hazmat> and store in the topology, but then the lookup process for the identity token is complicated
<hazmat> easier for sequence nodes to just be updated with acls post creation
<jimbaker> negronjl_, i need to do some additional refactoring and docs on robust-hook-exit branch per the review. then i can merge it into trunk
<negronjl_> jimbaker: thx I appreciate it
<jimbaker> negronjl_, np. occasionally these branches prompt some appraisal that there's too much previous tech debt, so that's why it's taking some additional time
 * SpamapS thinks the only tech debt that is troubling is all the hoops we jump through marked "Twisted"
<jimbaker> SpamapS, i think the attention we are paying to moving to deterministic tests is a good thing, but it is hard to do for sure
<jimbaker> it was so much easier to just write tests that would simply sleep 200 ms and sweep the problems under the rug
#ubuntu-ensemble 2011-07-28
<SpamapS> haha
<jimbaker> i'm pretty much convinced that Twisted is the best choice for this in the python frameworks
<SpamapS> It drives me *batty*
<jimbaker> SpamapS, oh sure, it definitely does do that
<jimbaker> i personally think that we could write equally robust and deterministic tests using threaded code, along with some conventions. the advantage over twisted would be being able to use any arbitrary library. but we would have to come up with some good conventions that at the end of the day do look like go
<SpamapS> Go is a nice idea.. but at what point do you stop forward progress to rewrite in it?
<jimbaker> SpamapS, progress is so slow right now because testing is currently too hard
<jimbaker> so i think that answers it for me
<SpamapS> I found it rather easy to do tests for LXC.. but.. that's because I made LXC block.
<jimbaker> SpamapS, correct. blocking is much easier to work with. that's why i mentioned using threads. (i was thinking of the paper you shared)
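[Editor's note: the blocking, threaded style being contrasted with Twisted here could look like this toy stdlib example, invented for illustration.]

```python
import threading
import queue

def worker(n, results):
    # An ordinary blocking computation -- no callbacks, no reactor.
    results.put(n * n)

def run_workers(inputs):
    """Run one thread per input and collect results over a queue."""
    results = queue.Queue()
    threads = [threading.Thread(target=worker, args=(n, results))
               for n in inputs]
    for t in threads:
        t.start()
    for t in threads:
        t.join()  # plain blocking joins are what keep tests simple
    return sorted(results.get() for _ in inputs)
```

A test just calls `run_workers` and asserts on the return value; no waiting on deferreds or reactor spins.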
<SpamapS> Yeah it pretty much blows up all of the arguments I've heard for event based programming
<jimbaker> the inline callbacks model and how it supports coroutines through a trampoline basically gives us what we want there, except that we constantly need to remember our calling conventions. so stuff randomly blows up if we forget. and we lose the stack.
<jimbaker> in comparison goroutines are very nice
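[Editor's note: the trampoline model jimbaker describes can be reduced to a toy driver: a generator yields "deferred-like" values and is resumed with each result. Real Twisted `inlineCallbacks` resumes only when the Deferred fires; this sketch resolves each yield synchronously, and all names are invented.]

```python
def run(gen):
    """Toy trampoline: drive a generator, feeding each yielded value
    straight back in as if the deferred had fired."""
    result = None
    while True:
        try:
            yielded = gen.send(result)
        except StopIteration as stop:
            return stop.value
        result = yielded  # a real reactor would wait here

def fetch_total():
    # In Twisted these yields would be Deferreds from I/O calls.
    a = yield 40
    b = yield 2
    return a + b
```

Forgetting to `yield` a deferred at any step breaks the convention silently, which is exactly the fragility being complained about.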
<SpamapS> What's the obsession w/ concurrency in a single process? I honestly can't see more than 1000 things happening at any one time in any one of the agents or clients... that's well within the capabilities of multi-processing.
<jimbaker> well, i'm quite certain that multiprocessing would be the wrong way to architect it
<jimbaker> we have i/o concurrency here
<jimbaker> so not compute bound
<kirkland> m_3: around?
<_mup_> ensemble/robust-hook-exit r291 committed by jim.baker@canonical.com
<_mup_> Doc strings and comments about exit vs end
<_mup_> ensemble/robust-hook-exit r292 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<_mup_> ensemble/robust-hook-exit r293 committed by jim.baker@canonical.com
<_mup_> PEP8
<_mup_> ensemble/trunk r286 committed by jim.baker@canonical.com
<_mup_> merge robust-hook-exit [r=niemeyer,hazmat][f=810808]
<_mup_> Distinguishes between a process exiting and the process having closed its
<_mup_> file descriptors, so that hook output is captured properly without
<_mup_> unnecessarily waiting on misbehaving hooks with child processes that
<_mup_> haven't closed their FDs.
<jimbaker> negronjl, fix released into trunk
<negronjl> jimbaker:  Thanks man
<jimbaker> negronjl, cool, hopefully this works well as we expect for cloudfoundry
<_mup_> ensemble/states-with-principals r299 committed by kapil.thangavelu@canonical.com
<_mup_> integrate principal creation/group management into relevant domain states (service, unit, machine)
<hazmat> jimbaker, i think the greenlet approach might be a bit nicer
<jimbaker> hazmat, the problem i see with that is you cannot switch across threads with greenlets. but it's required to use different threads when working with ZK (or at least the current c api)
<jimbaker> obviously it can be made to work, but so much of the greenlet advantage is seen when using switch
<m_3> kirkland: hey man... ssup?
<hazmat> jimbaker, you can definitely switch across threads with greenlets
<hazmat> jimbaker, you can wait on another greenlet to finish and capture its return value and you can yield control of a thread
<sidnei> SpamapS, do you happen to be around?
<jimbaker> hazmat: http://packages.python.org/greenlet/#greenlets-and-python-threads - "Greenlets can be combined with Python threads; in this case, each thread contains an independent "main" greenlet with a tree of sub-greenlets. It is not possible to mix or switch between greenlets belonging to different threads."
<SpamapS> sidnei: here now, sup?
<sidnei> SpamapS, oh, was checking #736216, then i realized i enabled proposed but never actually installed the package from proposed.
<_mup_> Bug #736216: bzr crashed with error in _curl_perform(): (28, 'SSL connection timeout at 298225') connecting to Launchpad <amd64> <apport-crash> <error-reporting> <https> <natty> <running-unity> <verification-needed> <Bazaar:Invalid> <Launchpad itself:Invalid> <bzr (Ubuntu):Invalid> <curl (Ubuntu):Fix Released> <bzr (Ubuntu Maverick):Invalid> <curl (Ubuntu Maverick):In Progress> <bzr (Ubuntu Natty):Invalid> <curl (Ubuntu Natty):Fix Committed>
<sidnei> incidentally seems like niemeyer was hitting it today with bzr
<SpamapS> sidnei: Cool. bzr has a micro-release exception, so not all the bugs have to be verified. The package just has to sit in -proposed for 7 days and pass the test suites.
<sidnei> SpamapS, yes, but the bug is in curl, and bzr is affected due to python-pycurl linking against libcurl3-gnutls
<SpamapS> OH
<sidnei> anyway, i enabled proposed but forgot to install the upgrade from proposed until today. seems like the updated package fixes the problem, but i will leave landscape-client running overnight and check the logs in the morning to be real sure.
 * hazmat wonders if his diy nas will boot
<jcastro> SpamapS: good luck today on your talk, knock em dead!
<kooolhead11> hi all
<kim0> hazmat: I'm interested to know more about your nas
<hazmat> kim0, nutshell: recycling an old diy desktop (core2duo + 4g ram) into a cooler master centurion 590 case, a pair of 5-in-3 drive cages, and an 8-port sata controller. i'm pretty set on a zfs setup at this point (freenas), although i'm going to see if i can test btrfs before i do. sadly my first problem just cropped up: i need an add-on video card, no mb graphics.
<kim0> hazmat: was just checking out btrfs since I converted my /home to that (getting too many bad sectors lately). so btrfs can't raid5/6 yet, no working fsck, and no raid conversion yet. Otherwise, seems functional :)
<niemeyer> Good mornings!
<kim0> Morning 
<fwereade> niemeyer: heyhey
<hazmat> kim0, i have hopes for btrfs, and keep checking in on the dev, but it still seems a bit experimental for trustworthy storage.. i'm planning on using the raidz2 mode of zfs (a 3-drive failure would lose data) across eventually 14 drives (starting with 6 atm), in two vdevs of 5-7 each
<kim0> yeah although fedora is switching to it, I think it's too early to trust it with family photos :)
<SpamapS> jcastro: thanks.. I may still change the pics if I have some time.. but for now.. its lolcats ftw
<jcastro> SpamapS: heh, when in doubt, go with what you know
<kim0> hehe
<niemeyer> hazmat: Physical drives?
<hazmat> niemeyer, 6 atm (5 2tb, 128 gb ssd from old laptop).. total capacity is 14 with the 8-port sata card (not raid) and 6 mb sata slots
<hazmat> as far as picking a sata controller with good foss driver support, i found this blog post very helpful.. http://blog.zorinaq.com/?e=10
<fwereade> hey all, I'm confused about something
<hazmat> niemeyer, re security discussion from yesterday, let me know when you've got time to pick it back up.. 
<hazmat> fwereade, what's up?
<fwereade> you know we get authorized keys and send them to instances... what do we actually use them for?
<fwereade> I thought we were meant to be doing all communication via ZK?
<hazmat> fwereade, ssh into the machine
<niemeyer> hazmat: Ok.. I have time right now, but I think I'd like to spend some more time with it if you don't mind, so that I can be more helpful.
<fwereade> is it only a sort of emergency damn-I've-screwed-up channel
<niemeyer> hazmat: I've thought further about the problem yesterday, and have some ideas, but these ideas have holes still
<hazmat> fwereade, we use that for the debug hooks command and ssh, and in future scp
<hazmat> niemeyer, i've got a nice solution
<niemeyer> hazmat: I need to stand next to a white board for a moment
<fwereade> hazmat: ah, cool
<niemeyer> hazmat: Oh, what's that?
<fwereade> hazmat: thanks :)
<hazmat> fwereade, np
<hazmat> niemeyer, well first.. i think it's useful to think of the topology as an identity db; not particularly relevant to the solution.. but most sequence nodes don't have identity prior to their existence in the topology
<hazmat> for example look at a failure mode for creating a machine, it leaves an unused sequence node
<niemeyer> hazmat: Ok
<hazmat> niemeyer, so in particular, i just backed off doing acls for these nodes at creation time, and instead update the acl using policy rules after they've been added
<hazmat> to the topology
<niemeyer> hazmat: Yeah, that was the direction of my thinking as well
<niemeyer> hazmat: Have you figured what's the problem with this too?
<hazmat> niemeyer, haven't run into anything yet, do you foresee an issue?
<niemeyer> hazmat: Yeah.. what's the reason why we create the entry before putting it in the topology?
<hazmat> niemeyer, ah.. yes there is a gap i did notice that
<hazmat> before we apply acls
<niemeyer> hazmat: Right, we're going back to the original problem
<hazmat> niemeyer, not really
<niemeyer> hazmat: Why not?
<hazmat> niemeyer, the one acl that the standard policy can put forth is an owner policy, which frees us from worrying about an acl security gap, so we can modify it post topology add.
<niemeyer> hazmat: The reason why we create the entry before putting it in the topology is because once the topology is touched agents will notice that and will act accordingly
<hazmat> niemeyer, as to your question of why create before topology: so we can have an atomic update of the topology, which drives identity/rel watches, and so that it's reflective of what's present
<niemeyer> hazmat: So to make it "atomic" we change the topology once the system is ready to take it
<niemeyer> hazmat: If we change in the suggested way, we'll put in the topology something that is not usable
<hazmat> niemeyer, yeah.. the gap
<hazmat> hmm
<hazmat> niemeyer, so that brings us back to nodes should have identity upon creation via contents, but that's complicated as well for the named principal, which is going to reference the sequence value
<niemeyer> hazmat: Yeah, I feel like this is an ugly hack we shouldn't dive into
<hazmat> niemeyer, alternatively we could write support for the new zk multi-node api into the libzk and zkpython
<hazmat> niemeyer, yeah.. i'd prefer to avoid content sniffing as well
<hazmat> bummer
<hazmat> yeah.. a whiteboard sounds good
<hazmat> and some coffee
<niemeyer> hazmat: I think your suggestion is going into the right direction.. let's just ponder for a second on the new problem that arrives from that.  I have a vague idea, just need to see if it's reasonable or not.  Give me a few minutes to brainstorm on this
<hazmat> niemeyer, so the failure modes on these from a topo conflict are pretty much non-existent
<hazmat> its just a retry
<hazmat> the sequence nodes are guaranteed unique
<hazmat> we could have a local topology for the policy to utilize before saving it to the final topology, effectively making the acl update part of the topo change function, which modifies the acl prior to returning the topology contents
<hazmat> hmm.. it also needs to do the otp there, as the rules will likely reference
<niemeyer> hazmat: Why?
<niemeyer> hazmat: The OTP doesn't have to change
<niemeyer> hazmat: on every iteration of the retry, that is
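[Editor's note: the creation sequence under debate -- create the node with a restrictive owner-only ACL, add it to the topology, then let policy rules widen the ACL, with the "gap" between the last two steps -- can be sketched against a fake in-memory store. All class, method, and ACL names here are invented.]

```python
class FakeStore:
    """Stand-in for ZooKeeper: tracks per-node ACLs and a topology."""
    def __init__(self):
        self.acls = {}
        self.topology = set()

    def create(self, path, acl):
        self.acls[path] = list(acl)

    def add_to_topology(self, path):
        self.topology.add(path)

    def set_acl(self, path, acl):
        self.acls[path] = list(acl)

def provision(store, path, policy_acl):
    store.create(path, ["owner"])    # safe default: owner-only ACL
    store.add_to_topology(path)      # agents may react from here on --
    store.set_acl(path, policy_acl)  # -- this window is the "gap"
```

The objection in the conversation is precisely that between `add_to_topology` and `set_acl` the node is visible but not yet usable by the principals the policy intends.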
<jcastro> jamespage: hey do you think it's feasible to replace pad.ubuntu.com with -lite deployed via ensemble? That would be a nice dogfood for next UDS.
<jamespage> jcastro: hmm - maybe; its quite bleeding edge ATM and requires stuff which is not in the archive
 * jcastro nods
<jamespage> jcastro: will probably hit the 'is-it-packaged' challenge 
<jcastro> yeah but if we have an ensemble formula we should stop caring about that right?
<jamespage> jcastro: :-)
<jamespage> jcastro: well the formula pulls in and builds quite a few bits and pieces not from Ubuntu
<jcastro> I thought that was a feature, but yeah, I can see that.
<jamespage> jcastro: just pushed an initial version of the formula BTW
<jcastro> jamespage: I saw, that's what prompted me to ask.
<jcastro> well, we can certainly dogfood it during ensemble team events. :)
<jamespage> don't try it with haproxy yet......
<jamespage> but works OK with a mysql backend
<jcastro> ok
<jcastro> yeah this is one of those that just will demo great and let people try it immediately.
<jamespage> agreed
<hazmat> niemeyer, true the otp identity is stable and doesn't ref the topology.
<_mup_> ensemble/debug-log-relation-settings-changes r275 committed by jim.baker@canonical.com
<_mup_> Merged trunk and resolved conflicts
<robbiew> m_3: hey...did you ever hear back from the MediaWiki folks?
<m_3> robbiew: nope
<m_3> I'll bug Ryan again
<robbiew> m_3: cool...I think SpamapS was planning on stopping by the offices...but not sure when
<m_3> yeah, #wikimedia-tech's quiet at the moment... no sign of ryan
<robbiew> probably OSCon related
<m_3> hopefully
<m_3> it's still early
<niemeyer> Lunch time
<niemeyer> fwereade: ping
<hazmat> bcsaller, jimbaker, fwereade, niemeyer weekly meeting?
<bcsaller>  trying g+ again?
<niemeyer> +1
<niemeyer> jimbaker: ping?
<jimbaker> niemeyer, hi
<jimbaker> niemeyer, so is this g+ meeting happening?
<niemeyer> jimbaker: Yeah, it's rolling
<niemeyer> jimbaker: We'll invite you
<jimbaker> niemeyer, ok
<niemeyer> jimbaker: What's your gmail address
<m_3> SpamapS: break a leg!
<SpamapS> :)
<SpamapS> t minus 3 hours 16 minutes
<SpamapS> err
<SpamapS> 4 hours
<m_3> time to enjoy the conf
<SpamapS> s/enjoy/document/ ;)
<m_3> niemeyer: please add me to the right lp group to edit the ensemble.ubuntu.com wiki when you get a chance
<niemeyer> m_3: Done
<m_3> thanks
<jimbaker> fyi, james.edward.baker@gmail.com
<hazmat> niemeyer, one problem i have with the current review queue stack and security.. the fixes for security-rules depend on stuff later in the stack.
<hazmat> the rules are standalone as is, so i think addressing some of the comments, and then moving forward with the rest of it until a later point in the branch stack, probably makes the most sense
<niemeyer> hazmat: How do you mean?
<niemeyer> hazmat: When there is a dependency chain like this, the natural thing to do is to review them in order, apply comments in order, and merge in order
<niemeyer> hazmat: Jumping back and forth will make things more complex
<hazmat> niemeyer, right.. security rules are an integration piece though. i'm doing integration work and branches right now, but the rest of the branches in the stack that's part of review, including those that follow security-rules, are implementation focused. some of them may contain fixes that will be needed to finish up security-rules. currently it's standalone, effectively unused (though tested), but unfortunately merged into the rest of the implementation stack of branches
<hazmat> i guess it's doable as is, it might just be a while till everything gets merged, which means i'll have a lot of pending branches
<niemeyer> hazmat: Yeah, that's why I say that when the dependency has been introduced and they are in a chain, the natural thing to do is to apply reviews and move forward
<niemeyer> hazmat: My suggestion is to get the bottom of the stack, fix it, and merge it
<niemeyer> hazmat: and so on
<hazmat> niemeyer, yeah.. its an artificial dependency at this point which is going to hold up the merge of the other 4 branches in review
<niemeyer> hazmat: and then move forward with everything in trunk
<niemeyer> hazmat: It is, but it's there..
<niemeyer> hazmat: The problem you have is that you're trying to push things forward while there are reviews pending
<niemeyer> hazmat: If you take care of the branches that are in review and get them merged, the problem disappears
<hazmat> niemeyer, perhaps.. the real problem is that it's an unrelated branch, which was introduced before the implementation of the other features, i.e. an artificial dependency... i probably shouldn't have pushed it to review.. but i wanted to get feedback on the approach... not a big deal, it can be redressed in the current process.
<jcastro> any idea what could cause this? http://pastebin.ubuntu.com/653982/
<jcastro> oh nevermind
<jcastro> it's that minute delay thing
<fwereade> gents, I'm very sorry I missed you
<fwereade> in my defence, it did start almost 2 hours after the theoretical end of my day 
<niemeyer> fwereade: Not a problem.. we were entirely aware of that
<niemeyer> fwereade: We decided to go forward just because we've been missing it again and again..
<niemeyer> fwereade: We should do another one tomorrow, or very soon anyway, with you
<fwereade> niemeyer: anything I should know from it?
<niemeyer> fwereade: Nothing critical to what you've been doing.. hazmat is sending notes to the list
<niemeyer> fwereade: Btw,
<fwereade> niemeyer: cool
<niemeyer> fwereade: Just as a note for the start of your day tomorrow
<niemeyer> fwereade: It looks like some of the branches you've been pushing for review are missing part of the workflow
<niemeyer> fwereade: They're not showing in the Kanban
<niemeyer> fwereade: No rush, but worth checking early tomorrow so they get into the review flow
<fwereade> niemeyer: hmm, lack of bugs perhaps?
<niemeyer> fwereade: Quite possibly
<niemeyer> fwereade: Or not assigned to the right milestone
<niemeyer> fwereade: Or not in progress.. or.. :)
<fwereade> niemeyer: hmm, probably all 3 for most of them
<niemeyer> fwereade: Again, no big deal.. just worth checking
 * fwereade looks shifty
 * hazmat lol
<fwereade> niemeyer: cheers :)
<niemeyer> fwereade: Sorry for missing you on the meeting
<fwereade> niemeyer: no worries :)
<niemeyer> fwereade: I hope we automate that at some point.. I'm conscious this procedure is too heavy
<fwereade> niemeyer: it's sweetness and light compared to what I'm used to :)
<fwereade> niemeyer: however, I think what I'm used to gave me a mild phobia of kanban boards
<niemeyer> LOL
<fwereade> niemeyer: you'd almost think the software was written by anti-agile activists, it seemed to make a point of putting you off
<niemeyer> fwereade: "Wait.. I see you've pushed too branches in a row.. why are you doing things so fast!?"
<fwereade> niemeyer: haha
<_mup_> Bug #817732 was filed: can't communicate with an orchestra server running cobbler <Ensemble:New> < https://launchpad.net/bugs/817732 >
<niemeyer> bcsaller: re. the order of things on LXC, some vague ideas:
<niemeyer> <niemeyer> 1) Make multiple units work on a single machine across the board (no LXC)
<niemeyer> <niemeyer> 2) Make local deployments work with one or multiple units (no LXC)
<niemeyer> <niemeyer> 3) Make LXC work to deploy units 
<niemeyer> bcsaller: Of course, each of these can be split further
<bcsaller> yeah
<niemeyer> bcsaller: But these high-level steps feel like making things more manageable
<bcsaller> sounds like a good place to start
<_mup_> Bug #817735 was filed: cobbler system names can change <Ensemble:In Progress by fwereade> < https://launchpad.net/bugs/817735 >
<hazmat> bcsaller, i'm writing up meeting notes, is your next post-config item spec for containers or co-location?
<bcsaller> yes
<hazmat> "or" }
<hazmat> ?
<hazmat> bcsaller, both?
<bcsaller> oh, well, specs for both, but co-location is first
<hazmat> bcsaller, cool, thanks
<_mup_> Bug #817736 was filed: FileStorage interface is inconsistent <Ensemble:In Progress by fwereade> < https://launchpad.net/bugs/817736 >
<_mup_> Bug #817738 was filed: duplicated code in ec2 LoadState and SaveState operations <Ensemble:In Progress by fwereade> < https://launchpad.net/bugs/817738 >
<_mup_> Bug #817739 was filed: no FileStorage implementation for orchestra <Ensemble:In Progress by fwereade> < https://launchpad.net/bugs/817739 >
<_mup_> Bug #817740 was filed: orchestra provider can't discover running zookeepers  <Ensemble:In Progress by fwereade> < https://launchpad.net/bugs/817740 >
<_mup_> Bug #817743 was filed: orchestra provider can't launch an instance that does anything useful <Ensemble:In Progress by fwereade> < https://launchpad.net/bugs/817743 >
<fwereade> niemeyer: btw, re: orchestra development strategy
<fwereade> niemeyer: you're absolutely right about the shadow-trunk, that's a nicer model for future work
<fwereade> niemeyer: but I think I'm missing something: how does that help us transition the code already in adres' branch?
<fwereade> niemeyer: well, I guess it doesn't have to
<fwereade> niemeyer: it's just that it works, and I'd prefer to transition the working code smoothly if I can
<fwereade> niemeyer: ...and as soon as I say it in public, it seems much dumber than it did in my head
<fwereade> with one more branch (in progress), I think we'll have something that's a viable base for RoAkSoAx to branch from
<niemeyer> fwereade: There's no way to transition his work into something compatible with trunk in a mergeable way besides doing it piece by piece
<fwereade> niemeyer: yep, I think you're right, it just seems like a shame :)
<niemeyer> fwereade: Spikes are spikes.. we have to recognize them by the benefit they provide, but transitioning from a spike into a stable implementation takes known pain
<fwereade> niemeyer: and, to be fair, less total pain than "it works? SHIP IT!"
<niemeyer> fwereade: Exactly
 * fwereade has an uncomfortable feeling he may have quoted directly from someone at his first job
<robbiew> lol
<SpamapS> crazy idea
<SpamapS> we should have a "sneakernet" provider
<SpamapS> all it does is create nodes in zookeeper...
<SpamapS> and feed you back admin credentials to spawn machine agents
<SpamapS> that way you can join existing machines to ensemble
<niemeyer> SpamapS: As a first step, defining a formula for an external service would probably work
<robbiew> SpamapS:  done your talk yet?
<adam_g> robbiew: starts in 2 min
<robbiew> adam_g: ah...how's the attendance?
<robbiew> I noticed it was still in the OS track b/c of the previous upstart topic
<robbiew> not cloud :/
<_mup_> ensemble/debug-log-relation-settings-changes r276 committed by jim.baker@canonical.com
<_mup_> Fixed faulty merge
#ubuntu-ensemble 2011-07-29
<_mup_> ensemble/debug-log-relation-settings-changes r277 committed by jim.baker@canonical.com
<_mup_> Delay __str__ of relation setting changes
<niemeyer> Time to bake a new mgo release
<SpamapS> niemeyer: talk went very well
<SpamapS> Finished a bit early but had tons of questions.
<niemeyer> SpamapS: Oh, tell me about it!
<niemeyer> SpamapS: Was it videoed?
<SpamapS> sadly no
<SpamapS> Would have been good to have it on tape actually
<niemeyer> Awww
<niemeyer> SpamapS: So, how was it received?
<SpamapS> Very well
<SpamapS> There was one.. semi-hostile question asking how we could possibly do hardware provisioning w/ cobbler...
<SpamapS> But I presented that as "not done yet" so it had no real legs.
<niemeyer> SpamapS: Huh
<SpamapS> He was just surprised we'd even try that when Open Stack is doing something similar but basically rolling their own cobbler.
<niemeyer> Heh
<SpamapS> It was the only non-excited question
<niemeyer> SpamapS: Sweet
<SpamapS> People saw very well how it is not the same as puppet or chef, and how the relationship definitions make it easier to deploy complex setups.
<SpamapS> *Most* people are interested in when stacks will be ready... even though I barely mentioned them.. people see the value in the idea of repeating the entire deployment.
<niemeyer> SpamapS: Cool, yeah, that's important indeed
<SpamapS> A couple people said they had done similar things w/ zookeeper and thought it was a good choice for this type of info.
<niemeyer> SpamapS: True.. I do hear about people using zk in a more custom setup, but doing similar stuff for config deployment and discovery
<SpamapS> The Nebula guys were interested in how Ensemble could be used to add services to the openstack dashboard.
<niemeyer> SpamapS: Hah, neat
<niemeyer> SpamapS: Any interesting follow up conversations from there?
<SpamapS> I didn't get a lot of people saying they wanted to write formulas.. which I was hoping for.. but.. the interest level was as much in joining the development of ensemble itself as anything.. so thats a nice surprise.
<SpamapS> The best follow up was w/ the Nebula guys. They said they were trying to think of a way to encompass services in a generic way and so they liked the idea of just using ensemble for that.
<niemeyer> SpamapS: That's half expected.. we'll have a _lot_ more people using it than developing
<niemeyer> (hopefully ;-)
<_mup_> ensemble/debug-log-relation-settings-changes r278 committed by jim.baker@canonical.com
<_mup_> Modified output format of change items
<niemeyer> SpamapS: That's very promising
<SpamapS> niemeyer: well the audience is open source developers.. so its not a surprise that they'd want to get deep into code
<SpamapS> Anyway, I invited a lot of people to the bof which starts in 55 minutes.. will send an email with my report on both events tomorrow.
<niemeyer> SpamapS: OSCON seems quite varied in terms of audience
<SpamapS> niemeyer: indeed.. all over the place
<SpamapS> OpenStack seems to be the belle of the ball tho. :)
<niemeyer> SpamapS: Fantastic
<niemeyer> SpamapS: That's good.. I hope people make it nice :)
<jcastro> SpamapS: glad to hear it went well
<SpamapS> jcastro: as predicted, lolcats were lol-flat
<jcastro> SpamapS: when you get back let's do a call,  I want all the nitty gritty reactions, etc.
<jcastro> I told you dude, last year's joke!
<SpamapS> jcastro: only joke that hit was me acting excited that we were ushering in the singularity .. one step away from Skynet. ;)
<SpamapS> jcastro: I wish I had time to change it. :-P no worries, the content was quite compelling.
<jcastro> heh
<jcastro> glad to hear people didn't confuse it with puppet/chef
<jcastro> I've been getting alot of that
<SpamapS> jcastro: A couple of people did make the point that puppet/chef can do some of the same things. But once they get that the formula is a self contained unit they let go of that.
 * jcastro nods
<jcastro> SpamapS: hey did jono attend your talk?
<SpamapS> no haven't seen him
<SpamapS> big conference center.. and our tracks do not collide. :-P
<SpamapS> adam_g was there.. and can maybe offer his own impressions.
<_mup_> ensemble/debug-log-relation-settings-changes r279 committed by jim.baker@canonical.com
<_mup_> Test new formatting for long strings
<_mup_> ensemble/debug-log-relation-settings-changes r280 committed by jim.baker@canonical.com
<_mup_> Changed use of nonlocal in apply_changes
<_mup_> ensemble/debug-log-relation-settings-changes r281 committed by jim.baker@canonical.com
<_mup_> Fix test_invoker with respect to new format
<_mup_> ensemble/debug-log-relation-settings-changes r282 committed by jim.baker@canonical.com
<_mup_> Capitalization
<_mup_> ensemble/debug-log-relation-settings-changes r283 committed by jim.baker@canonical.com
<_mup_> Log testing of hooks must wait for the ended deferred
<_mup_> ensemble/debug-log-relation-settings-changes r284 committed by jim.baker@canonical.com
<_mup_> PyFlakes
<kim0> Morning everyone
<raphink> hi kim0 
<kim0> raphink: hey
<shang> hi all, when I do the ensemble add-relation mydb:db myblog, what does the ":db" mean?
<shang> are there any other options that I can use?
<shang> it's not just a relationship name, it has to be db, otherwise the system won't take it
<kim0> shang: Hi there
<kim0> shang: seems like you were at cloud days ?
<shang> kim0: hi, thanks for the great session the other day
<kim0> cool
<shang> kim0: I was actually watching your youtube video and started playing with ensemble :-)
<shang> kim0: the zero to ensemble in 5 minutes was awesome!!
<kim0> ah great .. looking into the formulas to answer your question
<kim0> shang: ok so basically, looking at metadata.yaml .. you will find that the mysql formula provides many possible relations
<kim0> like, db, db-admin, shared-db, master ..
<shang> ah!
<kim0> so that :db that you add there, is needed to know exactly which relation to use
<shang> right, initially, I thought the name could be anything
<kim0> it could be anything for the formula writer :)
<shang> just a name for the relation
<shang> right, :D
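A hedged sketch of what kim0 is describing: a formula's metadata.yaml declares its relation endpoints by name, and the name after the colon in `add-relation` selects one of them. The excerpt below is illustrative only (endpoint and interface names are assumptions, not copied from the real mysql formula):

```yaml
# Illustrative excerpt of a mysql formula's metadata.yaml.
# "ensemble add-relation mydb:db myblog" selects the "db" endpoint.
ensemble: formula
name: mysql
provides:
  db:          # plain database access for client services
    interface: mysql
  db-admin:    # administrative access
    interface: mysql-root
  shared-db:   # shared database access
    interface: mysql-shared
  master:      # replication master side
    interface: mysql-replication
```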
<kim0> shang: so how are you enjoying your ensembling
<shang> yeah, very fun....
<kim0> awesome :)
<shang> kim0: still looking into how the ensemble and orchestra work together tho
<kim0> You'll love it even more when you start writing a formula
<shang> kim0: have to read through the logs again
<kim0> a ha .. that's still work in progress
<shang> yeah, I will definitely give that a try
<kim0> playing with ensemble on ec2 might be the path of least resistance for now
<kim0> but sure .. have fun :)
<kim0> shang: let me know once you feel confident enough to write your first formula 
<shang> yeah, I test a couple times on ec2 already
<shang> kim0: haha, I will be sure to let u know! :D
<kim0> cool :)
<shang> kim0: so the idea is to be able to use ensemble on openstack (like ec2 ) and deploy openstack (orchestra)?
<kim0> I think that's correct yes, but I might be missing some details
<kim0> ensemble will definitely use openstack like it does ec2
<kim0> and ensemble will integrate with orchestra but I'm not entirely clear on the details there
<shang> ok
<shang> i will catch up with the logs and see what's going on :)
<kim0> cool
<kim0> shang: and you can wait for the US to wake up and ask again, you'll get more details :)
<shang> kim0: yeah, I will do more reading first :)
 * kim0 nods .. have fun
<shang> kim0: thanks again for all the hard work!
 * kim0 works on a new and cool ensemble blog post
 * shang wrote a few ensemble 101 post, but will write more fun stuff in the future!
<kim0> shang: throw them over and I can tweet them from ubuntucloud :)
<shang> kim0: was very basic, but this is like an ensemble journey to me ;) http://voices.canonical.com/shang.wu/category/ensemble/
<kim0> cool :)
<TeTeT> shang: nice writeup, will follow it eventually and do some more testing. Still need to write my very first own formula
<shang> TeTeT: yeah, I will try the same!
<niemeyer> Greetings!
<niemeyer> Hey, happy sysadmin day!
<_mup_> ensemble/expose-provision-machines r293 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<_mup_> ensemble/expose-provision-machines-reexpose r302 committed by jim.baker@canonical.com
<_mup_> Merged upstream expose-provision-machines
<_mup_> ensemble/expose-provider-ec2 r307 committed by jim.baker@canonical.com
<_mup_> Merged upstreadm expose-provsion-machines-reexpose
<niemeyer> Got to restart.. kernel upgrade pending for a while
<_mup_> ensemble/expose-provider-ec2 r308 committed by jim.baker@canonical.com
<_mup_> Removed EC2 operation classes in favor of functions for security group functions
<_mup_> ensemble/expose-provider-ec2 r309 committed by jim.baker@canonical.com
<_mup_> PEP8 & PyFlakes
<_mup_> ensemble/expose-provider-ec2 r310 committed by jim.baker@canonical.com
<_mup_> Reverted a side PEP8 fix, apparently it's possible to have 'd.has_key(key)' and 'key in d' to be different
<jimbaker> too bad, i've been annoyed about that pep8 violation for a while in test_base.py, but it's definitely not in scope for this branch to figure out why that would be the case
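The commit sounds surprising for plain dicts, where `d.has_key(key)` and `key in d` always agree, but the two spellings really can diverge on a dict-like object that exposes `has_key` without supporting the `in` operator. A sketch of such a case (`HasKeyOnly` is invented purely for illustration):

```python
class HasKeyOnly:
    """Dict-like object with a has_key() method but no __contains__
    or __iter__, so "key in obj" cannot work while obj.has_key(key)
    does."""

    def __init__(self, data):
        self._data = dict(data)

    def has_key(self, key):
        return key in self._data


obj = HasKeyOnly({"a": 1})
print(obj.has_key("a"))  # True

try:
    "a" in obj  # "in" falls back to iteration, which this type lacks
except TypeError as exc:
    print("in failed:", exc)
```

So a mechanical PEP8 "fix" from `has_key` to `in` is only safe when the object is known to be a real dict.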
<kim0> Ensemble crunching UFO data with hadoop :) http://cloud.ubuntu.com/2011/07/ubuntu-takes-ufos-to-the-cloud/
<niemeyer> fwereade: Wow.. Lots of new branches in review.. were all of these just pending bugs?
<m_3> kim0: nice!
<jimbaker> kim0, i think you meant corpus, not corpse ;)
<kim0> lol .. fixing
<jimbaker> but when it's about ufos, who knows?
<niemeyer> ROTFL
<m_3> ah, corpse fits in with the theme though
<fwereade> niemeyer: they were all features tidied and moved from andres' branch, which I added bugs to yesterday to ensure they showed up in the queue ;)
<niemeyer> fwereade: and I thought I was free from reviews.. tsc tsc
<fwereade> niemeyer: if you weren't expecting them, I may have misunderstood you yesterday?
<niemeyer> fwereade: I wasn't expecting them only in the sense I had no idea we had that many branches ready to land already. :-)
<niemeyer> fwereade: It's awesome to have them, though!
<fwereade> niemeyer: be careful what order you do them, some of them have >1 prerequisite so the diffs look larger than they really are
<fwereade> niemeyer: cool :)
<niemeyer> fwereade: Ok.. is it well documented in the description/metadata of the MP?
<fwereade> niemeyer: should be
<niemeyer> fwereade: Cool
<niemeyer> fwereade: I'll find my way then
<niemeyer> Looks like that's my afternoon today
<fwereade> niemeyer: it's just that the MP can only understand a single prereq (AFAICT)
<niemeyer> fwereade: Yeah, but if you documented, I'll find a way to get a reasonable diff
<niemeyer> fwereade: I just won't be happy if I have to guess what are the pre-reqs :-)
<fwereade> niemeyer: cool :)
<jimbaker> fwereade, that's a whole lot of branches, cool
<fwereade> jimbaker: cheers, sorry I was unwittingly hiding them all week :)
<jimbaker> fwereade, do please take a look at my branch expose-provider-ec2. i think it's ready for merging since i've addressed your and niemeyer's questions
<fwereade> jimbaker: just saw that had landed, will do
<jimbaker> i will be on vacation next week, so i'd like to see this land soon, and not wait until austin
<jimbaker> the requested changes were easy, the painful part was dealing with the refactoring that happened on trunk
<niemeyer> Will get some lunch and bbiab
<_mup_> ensemble/states-with-principals r300 committed by kapil.thangavelu@canonical.com
<_mup_> security convience api for integration work.
<_mup_> Bug #818139 was filed: orchestra: can't terminate instances <Ensemble:In Progress by fwereade> < https://launchpad.net/bugs/818139 >
<fwereade> I think I need to stop at the correct time today: so, later all, enjoy your weekends :)
<m_3> fwereade: later man
<fwereade> (it's funny, these timezones make me feel like a slacker ;))
<m_3> ha!
<m_3> it works the other way around too
<hazmat> fwereade, have a good weekend
<m_3> we get in and you've got gobs of stuff there
<fwereade> haha, jolly good, as long as the pain is spread evenly :p
<fwereade> cheers
<_mup_> ensemble/states-with-principals r301 committed by kapil.thangavelu@canonical.com
<_mup_> incorporate the workaround for zk-770 fast auth directly into principal.attach methods.
<m_3> jcastro, kim0: language check please... ensemble.ubuntu.com/Interfaces (and the mysql one too)
<jcastro> on it
<jcastro> m_3: I'm going to just make you an ensemble interface template
<m_3> xclnt
<m_3> jcastro: they'll be scripted if that makes any difference for the template
<kim0> m_3: is writing out a formula's consumers really necessary? the mysql consumers list is gonna grow large quick
<jcastro> m_3: oh ok, so they'll just all output like the mysql one?
<m_3> jcastro: yeah, that's the idea
<m_3> kim0: nope, not necessary
<jcastro> oh ok, we won't need a template then
<SpamapS> Yeah drop the consumers
<m_3> kim0: it'd be nice though... let's say I'm writing a new formula that consumes mysql... I know where to copy from
<SpamapS> If we want to auto-generate that from the repository we will, but don't manually maintain it
<m_3> yeah, the info is in the repo
<jcastro> m_3: you might want to add CategoryInterface at the bottom of each so moin can group them all
<m_3> was gonna try to not _manually_ maintain any of it
<m_3> jcastro: thanks, I'll add
<jcastro> m_3: and <<Include(Header)>> at the top, so it adds the nice menu thing at the top, I've added a submenu for Interfaces
<m_3> do I add the "Interfaces" page to CategoryCategory to get CategoryInterfaces to show up?
<SpamapS> m_3: this is *way* too hard to understand. Fields for what? when do they get set (on join or on changed?) .. also does it wait for incoming stuff?
<jcastro> yeah
<m_3> SpamapS: ok, I'll add more structure to it
 * kim0 starts weekend .. partially afk 
<niemeyer> m_3: Hmm
<niemeyer> m_3: Having a table like that at the top as a summary may be fine, but the mysql interface description is lacking the most important bits
<niemeyer> m_3: Also, note that an interface is two-sided.. there's a client and a server
<jcastro> m_3: I'll do the organizing, as long as you have the CategoryInterface on each page we'll be good, I'll go find out how to add it to CategoryCategory
<m_3> niemeyer: yeah, SpamapS was just saying that
<m_3> jcastro: cool
<niemeyer> m_3: Who sets what, and how should the other side react
<niemeyer> m_3: At which time, etc
<m_3> niemeyer: understood
<niemeyer> m_3: We really need some prose
<m_3> niemeyer: I was actually thinking a picture
<niemeyer> m_3: Also, the interface should be lowercased: "mysql"
<m_3> niemeyer: not sure that's maintainable though
<niemeyer> m_3: Pictures are hard to maintain over time
<niemeyer> m_3: Also hard to debate on a mailing list
<m_3> niemeyer: gotcha
<niemeyer> m_3: The table is a nice touch, though, as a summary
<m_3> first pass... I'll refine
<niemeyer> Okay, reviews
<SpamapS> m_3: to me the format is more hierarchical.. there are 4 events for any relation, and thus, any interface, and 2 participants, so potentially 8 tables.   provider->joined,changed,departed,broken  and the same for requirer
<SpamapS> broken may not be useful at the interface level tho
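SpamapS's 2x4 structure can be made concrete: for a given relation, each side sees the same four events, and the hooks are named per relation. A small illustrative helper, assuming the `<relation>-relation-<event>` naming scheme implied by the discussion (a sketch, not the canonical implementation):

```python
# Sketch: enumerate the hook names fired on one side of a relation,
# assuming the "<relation>-relation-<event>" naming convention.
RELATION_EVENTS = ("joined", "changed", "departed", "broken")


def relation_hooks(relation_name):
    """Return the hook names a unit may run for the given relation."""
    return ["%s-relation-%s" % (relation_name, event)
            for event in RELATION_EVENTS]


# Both participants (provider and requirer) see the same four events,
# giving the "potentially 8 tables" for a two-sided interface.
print(relation_hooks("db"))
```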
<jimbaker> niemeyer, thanks for that approval
<m_3> SpamapS: yes, saw your email on it... that made a lot of sense
<niemeyer> jimbaker: np
<m_3> SpamapS: one thing that wasn't clear was what you meant by an interface repo as opposed to a formula repo
<niemeyer> jimbaker: Looking at expose-provider-ec2 now
<niemeyer> jimbaker: Trying to understand why we need machine_id in the provider
<jimbaker> niemeyer, sure. the problem is we need to determine the security group associated with that machine
<niemeyer> jimbaker: Machine.instance_id should uniquely identify that machine within the provider, and we have that information
<jimbaker> niemeyer, the security group must be created before the machine is launched, so the machine_id seems appropriate for this sort of info
<niemeyer> jimbaker: Why do we need two identifiers within the provider?
<jimbaker> niemeyer, we only get the machine.instance_id upon launch
<niemeyer> jimbaker: That's right.. hm
<jimbaker> niemeyer, so we need some sort of identifier that exists prior to launch, and can be uniquely associated with the machine
<jimbaker> niemeyer, this is the role of machine_id
<niemeyer> jimbaker: It is, but I'm a bit unhappy with having to cross-reference..
<jimbaker> niemeyer, agreed
<niemeyer> jimbaker: The machine in zk knows the instance id, and now the machine itself has a reference to the machine id
<jimbaker> niemeyer, sure. but we still need to have the security group created
<jimbaker> niemeyer, this is why i kept them separated in the original API
<jimbaker> niemeyer, we can remove this ugliness once we stop using security groups
<niemeyer> jimbaker: Yeah, let me ponder for a moment if there's a way to avoid having the machine id there
<jimbaker> niemeyer, sounds good
<niemeyer> jimbaker: Just so you understand, doing this means the provider cannot construct such an object anymore
<niemeyer> jimbaker: Because it can't tell what's the id of the machine
<jimbaker> niemeyer, it cannot construct an immutable object, that's true
<jimbaker> niemeyer, so maybe the better alternative is to back to the original API and pass machine_id separately from machine
<jimbaker> niemeyer, however this will not be necessary once we remove the security group requirements
<niemeyer> jimbaker: Sure, once we remove the code you're introducing, the code being introduced stops being a problem ;-)
<jimbaker> niemeyer, hah
<jimbaker> niemeyer, given that the responsibilities for managing the firewall will also move to the machine agent, it does kill a lot of the code in the predecessor branches too :)
<jimbaker> niemeyer, but at least the provider API remains the same, although that's a small part
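The constraint jimbaker describes is worth pinning down: the EC2 security group must exist before `run_instances` is called, so its name can only be derived from something known pre-launch, which is why `machine_id` leaks into the provider at all. A minimal sketch (the group-naming scheme is an assumption for illustration, not taken from the branch):

```python
def machine_group_name(environment, machine_id):
    """Security group name for one machine.

    machine_id is assigned by Ensemble before launch, so the group can
    be created first and passed to run_instances; instance_id, by
    contrast, only exists after EC2 returns from run_instances, too
    late to name the group.
    """
    return "ensemble-%s-%s" % (environment, machine_id)


# Pre-launch: create this group, then launch the instance into it.
print(machine_group_name("sample", 0))  # ensemble-sample-0
```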
<SpamapS> m_3: I mean if we were going to put interfaces in machine readable format, we shouldn't put them in with the formulas. They' don't belong to any one formula
<m_3> ok
<_mup_> ensemble/debug-log-relation-settings-changes r285 committed by jim.baker@canonical.com
<_mup_> Fix minor formatting nits
<niemeyer> m_3: FWIW, there's no need to define the hook where the setting is made
<niemeyer> m_3: It's fine for this to be implementation specific
<niemeyer> m_3: Due to the way Ensemble works, formulas can wait until the other side provides the setting
<m_3> I'm thinking language like...
<m_3> When 'relation-joined', mysql:
<m_3> provides:
<m_3> accepts:
<niemeyer> m_3: Yeah, but that's exactly the point.. we don't have to enforce that
<m_3> When 'relation-changed', mysql: etc etc
<niemeyer> m_3: It's fine for a formula to provide the setting at its own time
<niemeyer> m_3: The other side will get told that the setting is now available
<niemeyer> m_3: Let me think of an analogy.. hmm
<m_3> niemeyer: so you would consider two different formulas setting the same parameters in different hooks to be using the same interface
<m_3> niemeyer: and SpamapS would consider those to be two different interfaces?
<niemeyer> m_3: If you come to visit me, you'll ring the bell, and I'll open the door
<niemeyer> m_3: We don't have to agree at the exact time when this happens, because when you're ready you'll ring the bell, I'll listen to it, and will open the door
<niemeyer> m_3: Yes, I'll consider the same interface, because formulas shouldn't be built on the expectation of the exact hook where something must happen
<niemeyer> m_3: The relation settings are events.. if formula A has to provide a username to formula B, formula B can sit and wait until the username is available
<m_3> ok, lemme put language around it and we can go over concrete examples
<SpamapS> niemeyer: what you're saying is, define what to do when you receive a hook, not what to do inside a hook
<niemeyer> SpamapS: Assuming you mean "when you receive a setting", right
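The "ring the bell" pattern niemeyer describes, where a relation-changed hook simply does nothing until the settings it needs appear, might look like this in hook form. Everything here is illustrative: `relation_get` is a stand-in for the relation-get hook tool, and the setting names echo the mysql example above.

```python
def relation_get(settings, key):
    """Stand-in for the relation-get hook tool: returns the remote
    unit's setting, or None if the other side hasn't set it yet."""
    return settings.get(key)


def on_relation_changed(settings):
    """Idempotent relation-changed hook body: do nothing until the
    server side has published both credentials.  The hook runs again
    whenever the other side changes its settings, so exiting early
    is safe."""
    username = relation_get(settings, "username")
    password = relation_get(settings, "password")
    if username is None or password is None:
        return "waiting"
    return "configured %s" % username


print(on_relation_changed({}))  # server hasn't provided anything yet
print(on_relation_changed({"username": "u", "password": "s3cret"}))
```

This is why the interface docs don't need to pin down *which* hook sets a value: the consumer just reacts whenever the setting becomes available.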
<_mup_> ensemble/trunk r287 committed by jim.baker@canonical.com
<_mup_> merge debug-log-relation-settings-changes [r=hazmat,niemeyer][f=766317]
<_mup_> Log (at debug level) the relation setting changes that occur upon a
<_mup_> successful hook exit.
<jimbaker> formula developers, rejoice! - you can now see all of your relation setting changes in the debug log
<m_3> jimbaker: *\0/*
<jcastro> http://foss-boss.blogspot.com/2011/07/ubuntu-takes-ufos-to-cloud.html
<jimbaker> m_3, the next step is to parse the resulting debug log to generate some sequence diagrams, http://sdedit.sourceforge.net/example/index.html
<jcastro> woo, so when can we get rid of those first two steps with branching people's formulas?
<jimbaker> m_3, ok, a little bit out as a step... but definitely something that would be cool
<_mup_> ensemble/expose-provision-machines r294 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<m_3> jimbaker: that would actually be very useful... and damned sexy
<_mup_> ensemble/expose-provision-machines-reexpose r303 committed by jim.baker@canonical.com
<_mup_> Merged upstream expose-provision-machines
<m_3> jcastro: dunno man... there's fluffy stuff associated with that... like the name for principia for instance
<jcastro> heh
<jcastro> end of july!
<jimbaker> m_3, yeah, it would be a cool complement to the docs on interfaces being discussed
<jimbaker> m_3, here's how my formula actually interacts with your formulas
<jimbaker> one could see a riak ring in action - what info is being communicated as the ring is being modified with add-unit/remove-unit
<jimbaker> one swim lane could be the ensemble administrator doing commands, the other swim lanes would be the unit agents
<m_3> it'd be interesting to visually debug
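jimbaker's idea could be sketched as a small log-to-diagram filter: scan the new debug-log lines recording relation setting changes and emit sdedit-style `caller:callee.message` lines. Both the log line format matched below and the sdedit syntax are approximations invented for illustration; the real format is whatever the merged branch emits.

```python
import re

# Hypothetical debug-log line format:
#   "unit-name: set 'key' on relation with other-unit"
LOG_RE = re.compile(
    r"^(?P<src>\S+): set '(?P<key>[^']+)' .* with (?P<dst>\S+)$")


def to_sdedit(lines):
    """Turn matching debug-log lines into sdedit-like message lines;
    non-matching lines are ignored."""
    out = []
    for line in lines:
        m = LOG_RE.match(line.strip())
        if m:
            out.append("%s:%s.%s"
                       % (m.group("src"), m.group("dst"), m.group("key")))
    return out


sample = [
    "mysql/0: set 'password' on relation with wordpress/0",
    "unrelated noise line",
]
print(to_sdedit(sample))
```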
<_mup_> ensemble/expose-provider-ec2 r311 committed by jim.baker@canonical.com
<_mup_> Merged upstream expose-provision-machines-reexpose
<m_3> niemeyer, SpamapS: ensemble.ubuntu.com/Interfaces mysqlA vs mysqlB?
<m_3> (general approach)
<niemeyer> m_3: Hmm.. that seems to be going away from what I had in mind
<niemeyer> m_3: What we need is paragraph of text for a human to read
<m_3> niemeyer: oh sorry, yes, I understood that... filling in text is next
<niemeyer> m_3: "The mysql interface enables a formula that requires a mysql database to have access to one.  (...) The server side of such a relation should be able to provide database information in the `username` and `password` settings to the client side (...) When the client side detects the ..."
<m_3> niemeyer: just wanted to get the presentation details straight first
<niemeyer> m_3: Yeah, sorry.. I just mean that all of these tables and hook names just shouldn't be there
<m_3> niemeyer: ok, I'll give y'all a little more polished version then
<niemeyer> m_3: Try to think about this as a consumer
<niemeyer> m_3: If you were just starting to use Ensemble, and went into a page about an interface you had to implement.. what would you like to read there?
<m_3> niemeyer: right... this was just enough content there to support our email conversation about what information should be part of the interface... certainly not done
<SpamapS> As a consumer I'd want tables of variables available to me, and a description of what they mean
<SpamapS> I'd also want to know what I can assume about the service
<m_3> I'd also want to see a list of other similar services so I could copy
<SpamapS> Nah
<SpamapS> if thats there, it needs to be generated
<m_3> oh, yeah
<niemeyer> Agreed
<robbiew> I'd also like a pony
<robbiew> SpamapS: how'd OSCon session go?
<SpamapS> robbiew: *great*
<SpamapS> robbiew: good response from all.. reasonably well attended
<robbiew> SpamapS: nice
<SpamapS> robbiew: 2 guys came to the BoF later and I think enjoyed the demo
<SpamapS> about to go to the wrap up plenary... 
<SpamapS> and then probably go to the airport and work from there till my flight leaves at 6
<robbiew> ok
<niemeyer> jimbaker: Review delivered on expose-provider-ec2
<niemeyer> jimbaker: Please let me know what you think
<niemeyer> jimbaker: I've covered the conversation above there
<niemeyer> jimbaker: I really think we should do something about it
<niemeyer> jimbaker: But the suggestion is not too dramatic, I think
<jimbaker> niemeyer, checking
<niemeyer> jimbaker: Just the same thing, with a less magical API
<niemeyer> Man.. and I didn't even get to William's 500 branches yet
<niemeyer> I'm half-depressed
<jimbaker> niemeyer, thanks, it's definitely going to be what i work on in austin
<niemeyer> jimbaker: Hmm?
<jimbaker> but i'll try to take some stabs at it now
<jimbaker> i'm on vacation next week
<niemeyer> jimbaker: Oh, I didn't know that
<jimbaker> niemeyer, sorry about that
<niemeyer> jimbaker: LOL, you should be happy rather than sorry :-)
<niemeyer> I think I'm taking a couple of months of holiday myself until 11.10
 * niemeyer looks at robbiew 
<jimbaker> niemeyer, well it should be good, my wife has family visiting us, and they're fun people
<niemeyer> jimbaker: Nice
 * robbiew pretends he didn't read that
 * niemeyer => coffee break
<niemeyer> Okaaaay
<niemeyer> Globo (huge TV channel around here) just made my day by saying in the news that people should not click on any link that ends in a .php because it could be a _trap_!
<hallyn> niemeyer: save yourself by taking my floppy and clicking on the .exe that comes up.
<niemeyer> hallyn: :-)
<fwereade> niemeyer: they're still blind to the dangers of .asp then? this is a disaster waiting to happen!
<niemeyer> fwereade: Very true :-)
<niemeyer> bcsaller: Given that you'll be looking at the local dev/LXC stuff, would be nice to have your reviews on William's branch that are up
<bcsaller> niemeyer: I will
<niemeyer> bcsaller: Thanks
<niemeyer> Alright.. that's enough reviewing for me for the day..
<niemeyer> Have a good weekend all.. I'll step back and lay down for a moment
<jimbaker> niemeyer, take care!
<jcastro> m_3: does your postgres formula mostly work?
#ubuntu-ensemble 2011-07-30
<SpamapS> I believe mediawiki will work w/ postgres
<m_3> jcastro: when do you need it?
<jcastro> not anytime soon
<jcastro> ran into other problems before I even get to the db part
<jcastro> but making good progress
<m_3> jcastro: ok, well the basic db should be working with it... just no replication
<m_3> what are you hooking up?
<jcastro> summit, it's our scheduling thing for UDS
<jcastro> but it's a little hairy for me, need to work out the install part first
<jcastro> also, debug-hook is awesome
<m_3> yeah... ain't it?
<m_3> I'll add config and ports stuff and get the pg formula solid
<jcastro> no rush, summit isn't exactly set up to be deployed per se
<jcastro> so there'll be some things we need to fix
<m_3> cool... two words: cowboys + aliens!
<SpamapS> ahh.. bug triage is the perfect thing for in flight wifi :)
<SpamapS> jcastro: still, how exciting!
 * SpamapS signs off for landing
<_mup_> Bug #818491 was filed: test_connect_agent times out sometimes <Ensemble:New> < https://launchpad.net/bugs/818491 >
<daker> hello
<daker> i need some help
<daker> if i got a different md5sum when downloading a tar.gz, what should i do ?
<Aram> redownload.
<daker> ok
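For reference, the usual check before unpacking a downloaded tarball is to compare its checksum against the one the project publishes; a minimal sketch (the file path and expected hash are placeholders, not from this conversation):

```python
import hashlib

def md5_of(path, chunk_size=1 << 16):
    """Return the hex MD5 digest of a file, read in chunks to bound memory."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(path, expected_md5):
    """True if the file's MD5 matches the published checksum.
    A mismatch usually means a truncated or corrupted download, so
    redownloading (as suggested above) is the right first step."""
    return md5_of(path) == expected_md5
```
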
<Aram2> hi, 'Couldn't find any package whose name or description matched "python-txzookeeper"'. How to do Ensemble development on 11.04? :-).
<Aram2> niemeyer: hi!
<niemeyer> Aram2: Hey Aram
<niemeyer> Aram2: How're things going there
<niemeyer> ?
<Aram2> Couldn't find any package whose name or description matched "python-txzookeeper": how to fix this on 11.04?
<Aram2> I'm trying to use the ensemble from bzr.
<Aram2> it's good.
<niemeyer> Aram2: We have a PPA with all the packages
<niemeyer> Aram2: sudo add-apt-repository ppa:ensemble/ppa
<Aram2> need to RTFM on this PPA thing.
<niemeyer> Alright, I'm off for some outsiding
<niemeyer> See y'all later!
<Aram2> o/
<daker> is there any progress on the postgres formula ?
<daker> m_3, what's the status of the postgres formula ?
<_mup_> Bug #818412 was filed: database name for relation too long <Ensemble:Confirmed> <mysql (Ensemble Formulas):Invalid> < https://launchpad.net/bugs/818412 >
<fwereade> so, it seems we're leaving quite a lot of things open after various tests... pipes, sockets, files, etc.
<fwereade> do we consider this a big enough deal to be a bug?
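One way to make these leaks visible is to snapshot the process's open file descriptors around each test and report anything left open; a rough sketch, assuming Linux (it reads /proc/self/fd) and not the project's actual test harness:

```python
import os

def open_fds():
    """Return the set of fd numbers currently open in this process."""
    return set(os.listdir("/proc/self/fd"))

class FDLeakCheck:
    """Context manager recording descriptors opened but not closed inside it.
    Note: listing /proc/self/fd opens a transient fd of its own, so the
    reported fd numbers are approximate; an empty `leaked` set is reliable,
    the exact numbers in a non-empty one are not."""
    def __enter__(self):
        self.before = open_fds()
        return self

    def __exit__(self, *exc):
        self.leaked = open_fds() - self.before
        return False  # never swallow the test's own exceptions
```

A test runner could wrap each test body in `with FDLeakCheck() as check:` and fail the test when `check.leaked` is non-empty.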
#ubuntu-ensemble 2011-07-31
<m_3> daker_: currently, the basic pg db is created/associated/working... however, still has totally open access rules and no replication
<_mup_> Bug #818873 was filed: test suite leaks fds <Ensemble:New> < https://launchpad.net/bugs/818873 >
<fwereade> if anyone's around, please confirm: we're treating merged-into-trunk as "fix released", right?
<jcastro> SpamapS: did you just make that chart to answer that question?
<daker> jcastro, where are you with the summit formula ?
<jcastro> I stopped at the part where summit asked for a django config question, but I know what to do to fix it
<jcastro> then that finishes the install, after that it's all database stuff
<daker> for now you can use the mysql formula
<jcastro> ok
<jcastro> I've not gotten that far, summit's kind of complicated, I should have picked something simpler, but the upstream authors are giving me enough pointers to keep it going
<_mup_> Bug #819009 was filed: confusing schema validation error when mandatory subkey is missing <Ensemble:New> < https://launchpad.net/bugs/819009 >
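The confusion in bug #819009 is the kind of thing a path-aware error message avoids: reporting *which* nested key is missing rather than a generic failure. A hypothetical sketch of such a check (not Ensemble's actual schema code; the `provider.region` key is invented for illustration):

```python
def check_required(data, required, path=""):
    """Recursively verify that every mandatory (sub)key named in `required`
    (a nested dict of key -> required subkeys) is present in `data`,
    reporting the full dotted path of the first missing key."""
    for key, subkeys in required.items():
        here = f"{path}.{key}" if path else key
        if not isinstance(data, dict) or key not in data:
            raise KeyError(f"mandatory key missing: {here}")
        if subkeys:
            check_required(data[key], subkeys, here)

# Example: require provider.region to be present.
required = {"provider": {"region": {}}}
check_required({"provider": {"region": "us-east-1"}}, required)  # passes
```

A missing subkey then produces `KeyError: 'mandatory key missing: provider.region'`, which points straight at the offending part of the config.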
<_mup_> Bug #819019 was filed: ensemble bootstrap reports 'ssh authorized/public key not found'. <Ensemble:New> < https://launchpad.net/bugs/819019 >
