#ubuntu-ensemble 2011-08-01
<SpamapS> jcastro: hahahahah no I made that for the Cloud Days presentation like 3 months ago. ;)
<jcastro> SpamapS: I didn't know about it before
<jcastro> I am going to steal it
<SpamapS> jcastro: the one on the wiki is svg ... make sure to steal *that* :)
<jcastro> We should ask someone on design to prettify it
<SpamapS> yeah that would be cool
<SpamapS> I'm hesitant to make the marketing assets more polished than ensemble itself tho ;)
<jcastro> well, I see people are using it and reporting the annoyances
<jcastro> robert collins just filed a few
<niemeyer> Good morning all
<niemeyer> fwereade: Morning
<_mup_> ensemble/trunk r288 committed by gustavo@niemeyer.net
<_mup_> Merged fix-status-scope branch from Clint [r=bcsaller,fwereade]
<_mup_> Fixes ensemble status to work with multiple scope filters as per the
<_mup_> help documentation. Tests were also adapted to reflect the arguments
<_mup_> one should expect from argparse.
<kim0> Morning all
<niemeyer> kim0: Morning!
<kim0> niemeyer: hey .. oh you're up so early :)
<niemeyer> kim0: Yeah, I'm actually about to step back to try to sleep some more
<kim0> have some nice rest then
<niemeyer> Just couldn't sleep for a while, and decided to get up and do something useful
<niemeyer> Thanks,  see you soon :)
<kim0> :)
<fwereade> niemeyer: hey! totally missed you, didn't expect people on IRC at this time :p
<fwereade> nice w/e?
<fwereade> and hey kim0 ;)
<kim0> fwereade: Hey Morning o/ :)
<raphink> I think one should expect people on IRC at about any time
<raphink> given there's people who live all around the globe (and even have internet access)
<fwereade> I have a recollection that "fix released" currently means "merged into trunk" rather than actually "released" -- can anyone confirm?
<fwereade> if so I can mark a couple of bugs "fix released", and tidy up kanban a little
<kim0> raphink: yeah pretty much :) 
<noodles775> Hi! Does anyone know if m_3's psql branch is near ready for inclusion in principia? https://bugs.launchpad.net/principia/+bug/803841
<_mup_> Bug #803841: Formula needed (postgresql) <new-formula> <Ensemble Formulas:In Progress by mark-mims> < https://launchpad.net/bugs/803841 >
<kim0> SpamapS: oh nice work on the graphic at http://askubuntu.com/questions/55179/what-is-the-purpose-of-the-bootstrapping-instance-ensemble :)
<kim0> wonder if that should be added to the bootstrapping section of the user-tutorial
<hazmat> g'morning
<fwereade> hazmat: heyhey
<hazmat> fwereade, how's your day so far?
 * hazmat tries to figure out if there's a way to see what the service-config settings are for a service
<fwereade> hazmat: ah, not too bad... I think I may have actually just run out of work for the moment, I was about to take a look at some reviews
<fwereade> once I get confirmation from andres that cobbler-launch-machine works for him I'll have another 3 branches to propose
<hazmat> fwereade, nice re cobbler, that's awesome
<fwereade> and you; nice weekend?
<fwereade> hazmat: there are a few more things we need to think about but I've been concentrating on getting parity with the spike branch
<hazmat> fwereade, quiet weekend, don't remember much of it ;-) spent some time cutting a computer case with a rotary blade so it could fit some 5x3 drive enclosures, also picked up a 4g hotspot device till my new internet connection is setup (my old isp sucked and 'accidentally' dropped me when i made an unrelated request).
<hazmat> fwereade, do you have a local cobbler instance for testing or is it based on extraction from the spike w/ mocking?
<fwereade> my local cobbler is, er, partially working
<fwereade> most stuff I can test directly, but actually completing a netboot install of oneiric has had a long series of hilarious and bizarre problems
<fwereade> I might gird my loins and have another go shortly but it's a dispiriting prospect ;)
<fwereade> btw, hazmat, I was going to take a look at lp:~hazmat/ensemble/security-groups, but I'm a bit confused about its status
<fwereade> it claims to depend on lp:~hazmat/ensemble/security-connection... which lp thinks exists, but bzr doesn't
<hazmat> fwereade, it depends on lp:~hazmat/ensemble/security-connection-redux
<hazmat> fwereade, i'm not sure what happened to security-connection branch, but lp doesn't like it.. the revisions are the same though
<fwereade> hazmat: heh, ok, thanks :)
 * hazmat pokes at fwereade's fds testing work
 * fwereade wishes to make clear he has no idea what he's doing, and really appreciates fixes and explanations ;)
<hazmat> fwereade, i'm curious to see if we manually run the gc how the numbers change
<fwereade> hazmat: good idea
<hazmat> fwereade, alternatively to scanning all fds, it might have been easier to just scan the /proc/pid/fd
<hazmat> oh.. that's what it's doing
<hazmat> 'self'
<fwereade> hazmat: that's what I do, is there some code lying around still doing things the dumb way?
<fwereade> hey, I don't think I fixed ... er ... the branch it was originally flagged on, whatever it was
<hazmat> fwereade, yeah my mistake, looks fine, i hadn't seen the 'self' syntax before
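The /proc/self/fd technique discussed above can be sketched roughly as follows. This is a Linux-only illustration, not Ensemble's actual test helper: snapshot the process's own open descriptors, do some work, and diff the snapshots to spot anything left open.

```python
import os

def open_fds():
    # Each entry in /proc/self/fd is one open descriptor of this process.
    return set(int(fd) for fd in os.listdir("/proc/self/fd"))

before = open_fds()
f = open("/dev/null")         # simulate a descriptor a test forgot to close
leaked = open_fds() - before  # non-empty: something new is open
f.close()
after = open_fds()            # back to the original set once it's closed
```

The same diff, run around each test, is what flags descriptor leaks without having to probe every possible fd number.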
<fwereade> (and it turns out I did fix lp:~fwereade/ensemble/storage-file-objects :))
<fwereade> hazmat: cool :)
<hazmat> one other issue, if  a test fails and had temp files/dirs they stick around post test run
<hazmat> not related to the branch, just a general test issue
<fwereade> hazmat: I view that as a feature, myself: when a test fails I really like to be able to look at the problematic temp files ;)
<hazmat> fwereade, so this is actually pointing out a lot of failures in things that are okay
<hazmat> fwereade, the addCleanup calls from lib/mocker.py run after teardown
<hazmat> which take care of self.makeDir, self.makeFile afaicr
 * hazmat double checks
<fwereade> hazmat: gaah! nice catch :D
<hazmat> mocker isn't deferred aware so the ordering is a little odd at times
<fwereade> hazmat: I assumed teardown happened in tearDown, but you know what they say about assumptions :)
<fwereade> (they make an ass of u and mptions... hmm, doesn't quite work like that)
<hazmat> fwereade, i made a one line change to fix it against spurious tests, in the modified _run, if methodName=="tearDown": self.addCleanup(self._diff_fds)
<hazmat> spurious failures that is
<fwereade> hazmat: awesome, how much nicer does it come out?
<hazmat> fwereade, not catching normal operations, reduces the problem almost entirely afaics
<fwereade> hazmat: and I guess we're pretty safe from having further cleanups added at that point
<hazmat> fwereade, doing a full run now
<fwereade> hazmat, brb
<fwereade> b
<m_3> noodles775: currently the basic pg formula works.  authentication is wide open and there's no replication.
<hazmat> hmm, trial has an addCleanup that masks mocker's addCleanup 
<noodles775> k, thanks m_3. 
<m_3> noodles775: I should get the acls working before inclusion into principia
<RoAkSoAx> fwereade: howdy!!
<fwereade> RoAkSoAx: heyhey!
<fwereade> RoAkSoAx: nice weekend?
<noodles775> m_3: Yep, sounds sane :) I've subscribed to the bug so I'll know when I can try it. Thanks for working on it!
<RoAkSoAx> fwereade: yeah!! busy though!! how was yours?
<fwereade> RoAkSoAx: nice, very peaceful, cath and laura were still in the UK for most of it
<m_3> noodles775: awesome, I'll keep it updated
<RoAkSoAx> fwereade: cool ;)
<RoAkSoAx> fwereade: anyways, what's your latest branch?
<fwereade> RoAkSoAx: hmm :)
<fwereade> RoAkSoAx: shadow-trunk is not yet updated with either lp:~fwereade/ensemble/cobbler-kill-machine or lp:~fwereade/ensemble/bootstrap-verify-storage
<fwereade> but maybe best to skip bootstrap-verify-storage for now: I think it's good, but it could interact with launch-machine (which is merged)
<fwereade> and I'd like to be sure of cobbler-launch-machine's status
<RoAkSoAx> fwereade: ok
<RoAkSoAx> fwereade: i'll work on top of shadow-trunk first then
<fwereade> RoAkSoAx: awesome
<fwereade> RoAkSoAx: if it looks healthy, I'll merge in 2 more and make 3 merge proposals
<RoAkSoAx> fwereade: awesome!
<fwereade> RoAkSoAx: it actually works?
<fwereade> hazmat: I'm sorry, but I'm really confused about your security-* pipeline: would you please tell me exactly what order I should be looking at them in? I'm driving myself slightly insane
<RoAkSoAx> fwereade: havent started yet
<RoAkSoAx> will let you know as soon as I do
<RoAkSoAx> fwereade: im updating all the cobbler-side pieces first
<fwereade> hazmat: don't worry, I think I've figured it out
<niemeyer> Good morning!
<niemeyer> Hmm.. funny to say that twice in the same day
<niemeyer> Aram, Aram2: Morning to both of you
<Aram2> hi :-).
<Aram2> I'm having some issues running the tests. http://pastebin.com/GPkiK7uZ
 * niemeyer looks
<Aram2> I assume it doesn't parse ~/.ensemble/environments.yaml ?
<niemeyer> Aram2: Hmmm
<niemeyer> Aram2: That's strange.. have you touched any code?
<Aram2> no.
<Aram2> latest rev from bzr.
<niemeyer> Aram2: Let me try to run this test
<niemeyer> Aram2: In theory, this test should be mocked
<niemeyer> Aram2: Sorry, I actually mean that this part of the logic should be mocked
<Aram2> I see.
<niemeyer> Aram2: Test passes here
<niemeyer> Hmm
 * niemeyer reads the test
<Aram2> how can I run only this tests and not all?
<niemeyer> Aram2: Just take that last string at the end of your paste and put it after ./test
<Aram2> also, I am using bzr branch lp:ensemble, I assume this is the branch I am interested in.
<Aram2> ah, ok.
<niemeyer> Aram2: It is indeed
<Aram2> yeah, test fails.
<Aram2> ensemble works though.
<niemeyer> Aram2: Phew.. that's actually good
<niemeyer> Aram2: If it passed it'd mean the failure would come from the interaction between tests, and would be harder to debug :-)
<niemeyer> Aram2: Hah
<niemeyer> Aram2: This test is actually broken, nevermind
<Aram2> heh.
<niemeyer> Aram2: It's just that pretty much all of us have the environment set up
<niemeyer> Aram2: The real env, I mean
<niemeyer> Aram2: Not ~/.ensemble
<Aram2> aha.
<niemeyer> Aram2: Many EC2 tools depend on the AWS_ACCESS_KEY_ID variable
<niemeyer> Aram2: and its partner (AWS_SECRET_ACCESS_KEY)
<niemeyer> Aram2: Our tests should not depend on it to run, though
<Aram2> yeah...
<niemeyer> Aram2: For the moment, you can just disable this test
<_mup_> Bug #819329 was filed: Tests depend on AWS_ACCESS_KEY_ID being set <Ensemble:Confirmed> < https://launchpad.net/bugs/819329 >
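One common way to make a suite independent of the developer's real credentials, as bug #819329 asks, is to patch the environment for the duration of each test. This sketch uses the stdlib's unittest.mock for illustration; the project itself used mocker, so treat the mechanism (not the exact API) as the point.

```python
import os
import unittest
from unittest import mock

class CredentialFreeTest(unittest.TestCase):
    # clear=True empties os.environ for the test body and restores it after,
    # so the test behaves identically whether or not AWS keys are exported.
    @mock.patch.dict(os.environ, {}, clear=True)
    def test_runs_without_aws_vars(self):
        self.assertNotIn("AWS_ACCESS_KEY_ID", os.environ)
        self.assertNotIn("AWS_SECRET_ACCESS_KEY", os.environ)
```

With the environment patched like this, a test that accidentally reaches for real credentials fails for everyone, not just for contributors without an AWS account.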
<hazmat> niemeyer, good afternoon ;-)
<niemeyer> hazmat: Hey!
 * hazmat hands off review queue crown to fwereade 
 * fwereade peers at it, and tries to put it on his foot
<niemeyer> :-)
<jcastro> robbiew: heya do you have those 5 reasons narrowed down? I need them for a slide
<robbiew> yeah...but they aren't "blessed" yet
<robbiew> need to run them by sabdfl with jono
 * jcastro nods
<robbiew> they are in the Messaging whatever
<jcastro> ok
<robbiew> one of the bazillion ensemble google docs
<niemeyer> :)
<niemeyer> Aram2: How's it going overall, btw?  Have you started doing any coding, or still at the understanding phase?
<Aram2> niemeyer: I have been busy a little bit... I have played with ensemble, read the current code and now I'm trying to make Go and Python work together nicely in some way.
<Aram2> it would be nice if you could call one from the other but unfortunately you can't :-).
<niemeyer> Aram2: Right, agreed
<niemeyer> Aram2: Note that for this experiment, you don't really have to make them play together
<niemeyer> Aram2: The question is more how it'd look if some pieces were ported over
<Aram2> yeah.
<_mup_> ensemble/states-with-principals r302 committed by kapil.thangavelu@canonical.com
<_mup_> use constant for otp identity key in domain state dict, machine state tracks otp data reference.
<fwereade> niemeyer, hazmat: security-otp-principal
<fwereade> intended only to narrow the window of opportunity for a potential attacker, rather than anything stronger?
<hazmat> fwereade, i'm in progress on a reply on the mp, the nutshell is your assessment is correct, there is a window until the otp is consumed during which an attacker can gain access to the persistent principal credentials if they can get ahold of the otp principal/data en route.
<fwereade> hazmat: thanks
<fwereade> is there any way to review "approve so long as you file a bug about this shortcoming when you merge it"? :p
<niemeyer> It would be mitigated by encryption, though
<fwereade> niemeyer: sorry, what's encrypted?
<niemeyer> fwereade: Nothing..
<niemeyer> fwereade: It would :)
<fwereade> niemeyer: ah :)
<niemeyer> For now this is already good progress on top of what we do now 
<hazmat> fwereade, atm i don't see a good alternative way to manage that risk without a dedicated security agent.. we effectively create the structure with acls referencing the intended principal credentials within the cli, which need to get passed through at least one other process (which creates the intended connected client/agent) before they get to their destination
<fwereade> niemeyer: absolutely, I don't want to reject it, just to make sure that we track the fact that it's a potential issue
<niemeyer> fwereade: I know, I'm just explaining as well, rather than attempting to change your feeling about it :)
<hazmat> niemeyer, encryption of the otp data? the decrypt key needs to be passed as well
<fwereade> shall I file a bug then, and we can worry about it later?
<hazmat> niemeyer, fwereade i'm totally open to alternate ideas on this
<fwereade> hazmat, niemeyer: I think that at some stage in the future a security agent may be a good way around this (we could implement something that really did delete all unhashed records of the otp when it was first accessed) but for now I'm happy with it
<niemeyer> hazmat: No, encryption of the channel used to communicate back with zk with the key
<niemeyer> hazmat: I think what you have is good progress already
<hazmat> niemeyer, fwereade it's unclear to me if alternate solutions would work well without such an interception gap, short of implementing a new zk auth mechanism
<niemeyer> hazmat: We can easily hack ZK to implement real OTPs, or other mechanisms in the future
<hazmat> niemeyer, sounds good
<fwereade> cool, I'll approve and file
<niemeyer> fwereade: Agreed
<_mup_> Bug #819379 was filed: otp mechanism is vulnerable to interception <Ensemble:New> < https://launchpad.net/bugs/819379 >
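The mitigation fwereade floats above (delete all unhashed records of the OTP on first access) can be sketched as a store that keeps only a digest and consumes it on use. The class and method names here are purely illustrative, not Ensemble's API; a real fix would also need the security-agent plumbing the discussion mentions.

```python
import hashlib
import secrets

class OTPStore:
    """Keep only a hash of each one-time password; delete it on first use."""

    def __init__(self):
        self._digests = {}  # principal name -> sha256 digest of its OTP

    def issue(self, principal):
        token = secrets.token_hex(16)
        # Only the digest is retained; the raw token is never stored.
        self._digests[principal] = hashlib.sha256(token.encode()).hexdigest()
        return token  # handed to the agent out of band

    def consume(self, principal, token):
        digest = hashlib.sha256(token.encode()).hexdigest()
        if self._digests.get(principal) == digest:
            del self._digests[principal]  # one-time: record gone after use
            return True
        return False
```

This narrows the window to the in-flight token itself: anything an attacker reads from the store after issuance is a digest, and replaying a consumed token fails.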
<hazmat> fwereade, thanks for the reviews!
<fwereade> hazmat: a pleasure :)
<SpamapS> kim0: thats an older graphic, though updated with new understanding. :)
<robbiew> m_3: call time?
<m_3> robbiew: yup
 * hazmat enjoys googling for ensemble bugs
<niemeyer> Lunch time.. biab
<fwereade> need to pop out for a little while, bbs
 * niemeyer pops back
<RoAkSoAx> fwereade: so now the orchestra config should provide a storage-url?
<fwereade> RoAkSoAx: yes
<fwereade> it seemed foolish to assume it was *necessarily* on the same server as cobbler itself
<RoAkSoAx> fwereade: what's the formatting? http://W.X.Y.Z/abc?
<fwereade> RoAkSoAx: yep
<RoAkSoAx> fwereade: well kind of but given that the cobbler server was the "orchestra" server it made sense to keep it there
<fwereade> RoAkSoAx: fair point :)
<RoAkSoAx> fwereade: yes cause the idea is to make orchestra automatically configure the webdav server
<RoAkSoAx> fwereade: and have it up and running post installation and ready to serve ensemble
<RoAkSoAx> fwereade: let's do this, if no storage-url is provided, then assume a default, which would be the <orchestra-server/webdav>
<niemeyer> RoAkSoAx: Agreed, at least defaulting to it sounds like a good plan
<fwereade> RoAkSoAx: then we can trim that back :)
<fwereade> sure, happy to add that
<RoAkSoAx> niemeyer: indeed
<RoAkSoAx> fwereade: yeah! that will allow us more flexibility in case someone would like to change the storage to somewhere else
<fwereade> RoAkSoAx: ...but still not hassle people by asking them to configure it until they need it
<fwereade> perfect
<fwereade> cheers :)
<RoAkSoAx> fwereade: indeed
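The defaulting rule agreed above amounts to a few lines of config handling. The key names and the "/webdav" suffix here are illustrative guesses, not Ensemble's actual configuration schema:

```python
def storage_url(config):
    """Return the configured storage-url, or default to webdav on the
    orchestra (cobbler) server itself when none is given."""
    url = config.get("storage-url")
    if url:
        return url
    return "http://%s/webdav" % config["orchestra-server"]
```

So `storage_url({"orchestra-server": "orchestra.local"})` yields `"http://orchestra.local/webdav"`, while an explicit storage-url wins when present.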
<daker> m_3, what's the status of the postgres formula ?
<fwereade> I think it's the end of my day now; RoAkSoAx, I'll probably be back on later to merge that change through the plumbing and into shadow-trunk (and everywhere else it needs to be)
<RoAkSoAx> fwereade: btw.. AFAIK, a "name" on a cobbler system cannot be changed
<RoAkSoAx> fwereade: cool I'll continue my testing
<RoAkSoAx> fwereade: and the changes on setting the provider-state using the UID don't really seem relevant because, as far as I can see, to be able to edit a system, you need to obtain the name from the UID
<RoAkSoAx> so at the end it seems to be the same
<m_3> daker: in progress... eta few hours
<daker> ok
<m_3> daker: have machine-based acls working (pg_hba)
 * negronjl is away: out to lunch
 * niemeyer finds his way through the maze of branches created by fwereade ;-)
<fwereade> RoAkSoAx: belatedly: I just changed a system name, to prove I could
<fwereade> RoAkSoAx: there's a rename button in the system list
<fwereade> RoAkSoAx: I think you *can't* change a name while something else is holding a reference to it though
<RoAkSoAx> fwereade: yeah maybe
<RoAkSoAx> fwereade: btw...
<RoAkSoAx> fwereade: the way you're obtaining the keys for cloud-init user-data is broken
<RoAkSoAx> as it makes cloud-init fail
<fwereade> RoAkSoAx: bah :(
<fwereade> RoAkSoAx: I thought I was doing it just the same as in EC2, didn't expect *that* of all things to be a problem
<RoAkSoAx> fwereade: http://paste.ubuntu.com/656595/
<fwereade> goodness me
<SpamapS> negronjl: does redis support master<->master replication?
<fwereade> I guess it's travelling a very different path to the one it takes on EC2
<_mup_> ensemble/webdav-storage-prereq r318 committed by gustavo@niemeyer.net
<_mup_> Merged generic-state-ops.
<niemeyer> Please ignore this.. just a temp branch for review
<niemeyer> fwereade: Hmm
<fwereade> niemeyer: Hmm?
<fwereade> (sorry, irresistible symmetry)
<niemeyer> fwereade: Sorry, just double checking
<_mup_> ensemble/webdav-storage-prereq r318 committed by gustavo@niemeyer.net
<_mup_> Merged generic-state-ops.
<fwereade> niemeyer: I'm fretting that I've done something unforgivably dumb now :/
<niemeyer> fwereade: No, I'm just a bit stuck on webdav-storage
<niemeyer> fwereade: I can't find the proper review base
<fwereade> niemeyer: is that one of the ones with 2 parents?
<niemeyer> fwereade: I get a 600 lines diff on it, including things I already reviewed
<fwereade> I seem to recall it requires cobbler-instance-ids as well
<niemeyer> fwereade: Yes, but I'm taking that into account
<niemeyer> fwereade: Yeah.. I'm merging both of these to form a base
<niemeyer> There's still more, apparently
<fwereade> niemeyer: hmm, not that I recall...
<fwereade> niemeyer: I'm being called, bbs, I'll see if I can see what I've done
<niemeyer> fwereade: Hmmm.. part of the diff is within one of the pre-req branches, actually
<niemeyer> fwereade: Maybe it's just the diff that is bogus..
<niemeyer> fwereade: Let me try something else
<niemeyer> fwereade: Alright, I think I got it
<niemeyer> fwereade: Don't move too fast.. I'll review it quickly before the diff disappears.. ;)
<fwereade> niemeyer: back, shout if I can help at all
<niemeyer> fwereade: Super, thanks
<niemeyer> robbiew: ping
<negronjl> SpamapS: no.  afaik redis only supports master-slave
<SpamapS> negronjl: I wonder if we can take the redis-master and redis-slave formulas and just make a 'redis' formula..
<negronjl> SpamapS:  i'll give that a shot
<SpamapS> negronjl: also can redis slaves be promoted to masters?
<SpamapS> negronjl: if thats the case (and not too complicated) then it might make sense to have them work in a ring.
<negronjl> SpamapS:  afaik.  yes.  still reading a bit about redis
<robbiew> jcastro: ping
<jcastro> robbiew: pong
<fwereade> ok, I think that really is it for me for now :)
<fwereade> nn all
<hazmat> niemeyer, the extra security work integrated into domain object creation (service, unit, machine) seems to have some impact on total test run time, about a 25-30% increase for tests that heavily use the api.. the setup/teardown deltas are under 2%, it's just the extra cost of manipulating the additional nodes afaics
<niemeyer> hazmat: Hmm
<niemeyer> hazmat: If you're comfortable the API is the right one, we can move forward with this no matter what
<niemeyer> hazmat: and worry about optimization down the road
<niemeyer> hazmat: We can do tricks like disabling the auth steps under controlled scenarios
<niemeyer> hazmat: But I'd be happier to run them all the time until we're sure they are good
<hazmat> niemeyer, yeah.. i tried seeing what i can do about optimization now, but i'm not seeing any quick gains, i've got most of the api encapsulated into a class that i could toggle by test
<hazmat> which might get things a bit closer
<hazmat> niemeyer, sounds good re goodness
 * niemeyer steps out for watching a movie..
<_mup_> Bug #819562 was filed: ensemble.formula.tests.test_bundle fails on buildds <Ensemble:New> <ensemble (Ubuntu):New> < https://launchpad.net/bugs/819562 >
<_mup_> Bug #819563 was filed: ensemble.formula.tests.test_bundle.BundleTest.test_executable_extraction fails on buildds <Ensemble:New> <ensemble (Ubuntu):New> < https://launchpad.net/bugs/819563 >
#ubuntu-ensemble 2011-08-02
<pullies> hi, i'm trying out ensemble but when running `ensemble status` i get an Invalid SSH key message
<pullies> i have specified authorized-keys-path in my environment (that's what let me run bootstrap in the first place)
<pullies> this is when using the ppa, btw.
<pullies> any advice?
<niemeyer> pullies: Hey there
<niemeyer> pullies: Hmmm
<niemeyer> pullies: What's the content of the file you're pointing at with the authorized-keys-path?
<niemeyer> pullies: It usually gets that automatically from your ~/.ssh/id_dsa.pub or ~/.ssh/id_rsa.pub files
<niemeyer> pullies: Did you have these when you were seeing the error earlier?
<RoAkSoAx_> niemeyer: who creates id_{dsa|rsa}.pub
<RoAkSoAx_> in the zookeeper?
<niemeyer> RoAkSoAx_: deploy does
<niemeyer> RoAkSoAx_: It serializes with the environment
<RoAkSoAx_> niemeyer: ok cool, cause I was encountering situations in which the zookeeper was looking for keys, but there were no keys
<niemeyer> RoAkSoAx_: Ok.. pullies was just reporting a similar issue above
<niemeyer> RoAkSoAx_: Maybe that's the problem
<niemeyer> pullies: Have you deployed something?  That could be the issue
<niemeyer> We should handle that bootstrapping phase better in that sense
<RoAkSoAx_> niemeyer: but in my case, after bootstrapping the zookeeper doesn't have any .pub keys created
<hazmat> g'morning
<fwereade> heya hazmat :)
<pullies> sorry, i disappeared for the night, as you could probably tell.  ;-)  i have deployed nothing.
<pullies> -----BEGIN RSA PRIVATE KEY-----
<pullies> (a base64 encoded string which i won't paste here)
<pullies> -----END RSA PRIVATE KEY-----
 * hazmat has to give an ensemble presentation tonight, but his laptop does a kernel panic every time it sleeps
<hazmat> sadness
<fwereade> did something about how we handle $PYTHONPATH change?
<hazmat> fwereade, no.. not in a very long time
<fwereade> hazmat: ok, I'm doing something dumb then :)
<hazmat> fwereade, there is a new bug open about pythonpath being set for hooks which it probably shouldn't be
<hazmat> fwereade, what's the symptom?
<fwereade> I set PYTHONPATH, run bin/ensemble, and it still picks up the system version
<hazmat> fwereade, a system version (ppa install) of ensemble?
<fwereade> hazmat: yep
<hazmat> fwereade, is it ppa or a manual installation via sudo setup.py install?
<fwereade> ppa
<hazmat> fwereade, i'd suggest first removing the package
<hazmat> PYTHONPATH=$PWD python -c "import ensemble; print ensemble"
<hazmat> is a quick verification of the import path for ensemble
<hazmat> er.. PYTHONPATH=$PWD:$PYTHONPATH  is probably better
<hazmat> for real usage
<fwereade> hazmat: that appears to work
<fwereade> hazmat: anyway, don't worry, all I really needed was verification that it was me being stupid ;)
<hazmat> fwereade, path problems hit all of us one time or another..
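hazmat's quick import-path check can be reproduced without ensemble itself. This sketch builds a throwaway package ("mypkg" is made up for the demo) and shows that the directory placed on PYTHONPATH is where a fresh interpreter resolves the import:

```python
import os
import subprocess
import sys
import tempfile

# Create an empty package in a temp directory.
tmp = tempfile.mkdtemp()
os.makedirs(os.path.join(tmp, "mypkg"))
open(os.path.join(tmp, "mypkg", "__init__.py"), "w").close()

# A child interpreter with PYTHONPATH=tmp resolves mypkg from that dir,
# exactly the check hazmat runs against the ensemble source tree.
out = subprocess.check_output(
    [sys.executable, "-c", "import mypkg; print(mypkg.__file__)"],
    env=dict(os.environ, PYTHONPATH=tmp),
).decode()
```

If a system-wide (e.g. ppa-installed) copy of the package shadows the checkout anyway, the printed path makes that immediately visible, which is why removing the package is the simplest fix.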
 * fwereade is reassured ;)
<hazmat> fwereade, btw, when you're the second reviewer on a branch, you should adjust the merge proposal status (at the top): if both reviews are approve, then approved; else work in progress.
 * hazmat dog walks, bbiab
<fwereade> hazmat: cool, thanks
<fwereade> hazmat: I think I may have thought it happened by magic :p
<pullies> sorry, a little context would probably help.  last night i reported an error that `ensemble status` gives an Invalid SSH key message when i specify my authorized-keys-path in environment
<hazmat> pullies, so you have the path to your ssh key in 'authorized-keys-path' in the provider section of the environment? 
<hazmat> pullies, and yes the context helps ;-)
<pullies> hazmat, i have a path to my ssh key in environments.sample.authorized-keys-path
<hazmat> pullies, looking at the code, it looks like the path is a misnomer :-( it wants the name of a file in the .ssh directory
<pullies> AHA
<hazmat> pullies, if you're up for it please file a bug that this is misleading/confusing regarding the name and usage of this setting
<pullies> hazmat, where's the bug tracker?
<pullies> launchpad?
<hazmat> http://launchpad.net/ensemble
<hazmat> pullies, you're going to need to shutdown and bootstrap again, we need the ssh key active for doing any connection from the cli to the ensemble setup
<hazmat> pullies, so you're saying you were able to bootstrap with an invalid key?
<hazmat> that's also a problem/bug
<hazmat> if so
<pullies> hazmat, running the instance was successful.  i believe connecting to it was not
<pullies> this launchpad dashboard is confusingly worded about openid
<pullies> :-)
<hazmat> pullies, suggestions welcome on #launchpad ;-)
<hazmat> pullies, thanks for filing a bug, i'm heading out for a few minutes, let us know how it goes
<pullies> still the same error message
<pullies> i'm assuming "the ssh directory" is ~/.ssh
<pullies> ?
<_mup_> Bug #819803 was filed: authorized-keys-path is actually a filename, not a path. <Ensemble:New> < https://launchpad.net/bugs/819803 >
<pullies> hazmat, it's possible that ssh access hasn't been enabled for the security group, poking around at ec2 docs.  can you confirm that this is a necessary precondition that ensemble doesn't take care of?
<hazmat> pullies, ensemble does indeed take care of ec2 security groups
<hazmat> pullies, it takes a minute or two for the instance to be up and responding
<hazmat> pullies, actually it does look like it will try either a full path or a name
<pullies> i've skipped the ensemble part.
<hazmat> pullies, it looks like you would get a LookupError
<pullies> i'm trying to use the keypair itself
<hazmat> not an invalid ssh key error, if it couldn't find the key
<pullies> to make sure i can login
<pullies> and i can't.
<pullies> ssh -i ~/.ssh/mykey.pem ubuntu@ec2-ip.compute-1.amazonaws.com
<pullies> that should succeed, yes?
<pullies> ssh -v is telling me that it reads the rsa private key
<pullies> "authentications that can continue: public key" is issued twice
<pullies> i'm a bit confused why i can't ssh directly in
<hazmat> pullies, the ssh key specified for ensemble is the public key, not the private key
<hazmat> pullies, do you have an id_[dsa|rsa].pub in ~/.ssh ?
<pullies> amazon only gave me a .pem file to download
<hazmat> pullies, ensemble  doesn't use that
<hazmat> pullies, try ssh-keygen -t rsa
<pullies> will generate a local key and try that
<pullies> :-)  duh.
<pullies> thanks
<hazmat> pullies, you can remove the authorized-keys-path as well, ensemble picks up default keys automatically if none are specified
<fwereade> difference between orchestra and ec2: we can't easily tell whether an orchestra machine is running
<fwereade> so get_zookeeper_machines is problematic on orchestra
<fwereade> because it can't verify the sanity of the state it gets from FileStorage
<fwereade> in orchestra, should we be (say) trashing state in shutdown, or should we figure out some way to query the machines and thereby match ec2 better?
<hazmat> fwereade, orchestra doesn't know if machines it setup are running? 
<hazmat> i thought it was doing a tftp/dhcp setup, maybe that's not exposed via the api?
<fwereade> hazmat: nope, you can theoretically query power status
<fwereade> but that's acted weird for me
<fwereade> and that still doesn't tell us whether they're actually running, or if something went wrong part way through install (for example)
<hazmat> fwereade, it looks like the examples specify remote power management but not status
<fwereade> the api includes a "status" command, which AFAICT acts like an "off" command
<hazmat> fwereade, that seems like an upstream bug if that's the case
<hazmat> fwereade, not having any orchestra/cobbler experience, i'm not sure what the options are. but if the zk pointer file is invalid, the whole thing basically breaks down.
<niemeyer> Hey guys!
<fwereade> hazmat: I'm assuming for now that it's something I'm doing wrong, I tend to defer to RoAkSoAx_ for the final word on these things
<hazmat> niemeyer, top of the morning
<fwereade> heya niemeyer
<fwereade> hazmat: it's fine if provider-state is borked, on ec2, because we can check machine status
<fwereade> hazmat: we bootstrap if there's no state, or if the state is nonsensical => probably already shut down
<hazmat> fwereade, well on ec2 we always intersect the provider state against machine status, because we need the ip resolution
<niemeyer> hazmat: balance to you my friend
<fwereade> hazmat: maybe that's the intent, I'm just telling you what I could infer from the code :)
<hazmat> fwereade, yup, indeed that's the case.. orchestra is a different beast a bit
<pullies> hazmat, now this is progress
<pullies> 2011-08-02 09:48:45,857 ERROR SSH forwarding error: Agent admitted failure to sign using the key. Permission denied (publickey).  
<niemeyer> hazmat, fwereade Got the conversation mid-way through, but it sounds sensible to trash state on shutdown
<niemeyer> I'd rather rename shutdown to destroy-environment, but that's another topic
<hazmat> pullies, if you modify/change the key, you'll need to ensemble shutdown && ensemble bootstrap
<pullies> this is after doing that
<pullies> and i've removed the path from environments.yaml
<niemeyer> pullies: Have you run deploy already?
<hazmat> pullies, so if you do ec2-describe-instances do you see the instance running (the security group should match the environment name prefixed with 'ensemble-')
<fwereade> niemeyer: cool, that feels like it would make life easier on my side and do no harm to ec2
<niemeyer> Well, I guess it doesn't really matter actually
<niemeyer> fwereade: Agreed
<hazmat> niemeyer, yeah.. failure to connect precludes deploy
<niemeyer> hazmat: I was thinking that the key is serialized with the env, which happens at deploy
<niemeyer> hazmat: But that's something else.. we need a key there to connect to zk in the first place
<hazmat> pullies, you should be able to ssh into the machine directly using ssh ubuntu@ec2-host-name
<hazmat> niemeyer, the public key is sent at launch time via cloud-init
<hazmat> niemeyer, its not stored in zk
<pullies> hazmat, the dashboard shows the machine.
<niemeyer> hazmat: It is stored in zk during deploy
<hazmat> niemeyer, the environment is yes
<niemeyer> hazmat: Otherwise how would it be in cloud-init for the other machines
<niemeyer> hazmat: The keys
<hazmat> niemeyer, totally 
<hazmat> but not for the bootstrap
<niemeyer> hazmat: Yes, I guess that's what I said above?
<hazmat> yup
<hazmat> :-)
<hazmat> pullies, so what i'd like to verify is that from the cli you can log into that machine via ssh; if you didn't rename the ssh key, it should just pick up the private side of the same default
<hazmat> ssh will try a few from what it finds in ~/.ssh
<pullies> i generated the key twice.  it's possible something is cached in either my client or theirs.  will have to log out and try again
<pullies> will attempt it tonight
<pullies> thanks for the help.  will definitely focus on the ssh portion, i don't think it's ensemble at this point
<niemeyer> statik: Morning
<statik> morning niemeyer
<RoAkSoAx_> fwereade: howdy!!
<fwereade> RoAkSoAx_: heyhey!
<fwereade> how's it going?
<RoAkSoAx_> fwereade: pretty good, you?
<fwereade> RoAkSoAx_: pretty good thanks :)
<fwereade> RoAkSoAx_: and I got netboot 9% working on my cobbler, too
<fwereade> shadow-trunk is up to date, and might even work for you now ;)
<RoAkSoAx_> fwereade: cool, where are you stuck?
<fwereade> er, that should have been a 99% up there :)
<fwereade> the ubuntu-orchestra-client install
<RoAkSoAx_> fwereade: cool, I'm actually pulling your branch to test now
<RoAkSoAx_> fwereade: you mean the variable?
<RoAkSoAx_> on the preseed?
<fwereade> (1) it asks for an rsyslog server, and then fails with "cannot stat /var/something/puppet"
<RoAkSoAx_> fwereade: show me the line in the preseed
<fwereade> RoAkSoAx_: yeah, I remember you telling me to "just comment it out for now" a while ago, so that's what I did
<fwereade> can't copy from VM, but it's the pkgsel one as copied from your mail
<RoAkSoAx_> fwereade: is the creation of the cloud-init data fixed?
<fwereade> RoAkSoAx_: I *think* so
<fwereade> RoAkSoAx_: I now generate something that actually looks like a working EC2 one
<RoAkSoAx_> fwereade: ok, gonna test now then ;)
<fwereade> RoAkSoAx_: the precise details of how I screwed it up the first time are far too embarrassing to relate :p
<fwereade> RoAkSoAx_: sweet, tyvm
<RoAkSoAx_> fwereade: hehe its all good
<fwereade> RoAkSoAx_: hm, I seem to be getting "204 No Content"s from webdav, which I wasn't before, but it all works (apart from the error, heh)
<RoAkSoAx_> fwereade: on the orchestra server, what's in /var/lib/webdav
<fwereade> RoAkSoAx_: the right stuff
<RoAkSoAx_> fwereade: formulas dir and provider-state?
<fwereade> RoAkSoAx_: no but yes (I haven't got a formulas dir at the moment, but the right content was written to provider-state)
<RoAkSoAx_> fwereade: mkdir -p /var/lib/webdav/formulas && chown -R www-data:www-data /var/lib/webdav/formulas
<RoAkSoAx_> fwereade: ok, so bootstrapping works, ensemble status doesn't
<fwereade> awesome! I haven't even thought about what status does, so that's the progress I wanted :)
<RoAkSoAx_> fwereade: ok, in orchestra that means it was related to having @property def _machines:
<fwereade> RoAkSoAx_: indeed, and my understanding was that that was something you wanted to defer until the sprint
<RoAkSoAx_> fwereade: but anyways, what's the last thing merged there and what was left to "separate" from the old bootstrap-orchestra branch?
<RoAkSoAx_> fwereade: i wanna have it working though
<fwereade> RoAkSoAx_: sounds good to me
<fwereade> RoAkSoAx_: were we going with "stick it into ks_meta" for now?
<fwereade> last thing merged into shadow-trunk is cobbler-launch-machine
<RoAkSoAx_> fwereade: so what's missing, the shutdown stuff?
<fwereade> cobbler-kill-machine is WIP
<fwereade> bootstrap-verify-storage is an unrelated bug I picked up lest I spin my wheels on monday, and that should be good soon
<RoAkSoAx_> fwereade: alright, so I'll re-read your branch and try to identify what's missing from the things I wanted to do
<fwereade> RoAkSoAx_: I plan to make one more change -- to treat 204 as success (as I think is correct: processed successfully, doesn't feel it needs to return any content)
<RoAkSoAx_> fwereade: I haven't seen that, where did you see that?
<RoAkSoAx_> or in what situation
<fwereade> I seem to be getting that every time I PUT provider-state to webdav
<RoAkSoAx_> fwereade: i haven't seen anything
<RoAkSoAx_> fwereade: make sure the formulas dir is there and restart apache2 and see if it keeps throwing that error
<RoAkSoAx_> fwereade: is the default storage-url also in?
<fwereade> well, it's not an error, it seems like a perfectly legitimate response: "yep, cool, I've handled your request and I have nothing more to tell you, but here's a fresh etag maybe"
<fwereade> but twisted getPage seems to consider "not 200" == "something happened, raise an exception"
<fwereade> I'll bounce apache anyway, but I think what'll fix it is deleting provider-state, I'll let you know in 5
<RoAkSoAx_> fwereade: nah nothing will delete provider-state
<RoAkSoAx_> fwereade: you'd have to do it manually in order to be able to bootstrap again
<fwereade> RoAkSoAx_: what I'm doing is setting it to {} on shutdown
<fwereade> RoAkSoAx_: and, yes, if I overwrite I get 204, if I trash it manually I get 200
<fwereade> RoAkSoAx_: overwrite is perfectly reasonable behaviour, I'll just make sure ensemble understands that
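The fix fwereade settles on (treating the webdav 204 on an overwriting PUT as success) amounts to accepting the whole 2xx class rather than only 200. A minimal sketch of that check, with a made-up helper name:

```python
def put_succeeded(status):
    """Treat any 2xx response to the webdav PUT as success.
    twisted's getPage raised for anything that wasn't 200, so an
    overwriting PUT answered with "204 No Content" looked like an
    error even though the server had processed it fine."""
    # twisted hands the status back as a string; accept either.
    return 200 <= int(status) < 300

assert put_succeeded("200")   # fresh PUT of provider-state
assert put_succeeded("204")   # overwriting PUT: no body, still fine
assert not put_succeeded("404")
```

The design point is simply that "No Content" is a success status per HTTP, so the storage layer should swallow it rather than let the client library's 200-only default turn it into a failure.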
<RoAkSoAx_> fwereade: i don't think we would need to delete provider-state on shutdown
<RoAkSoAx_> fwereade: remember that we are dealing with physical hw
<RoAkSoAx_> and it is expensive
<RoAkSoAx_> to be installing every time we want a zookeeper
<RoAkSoAx_> when we already have one
<fwereade> hm, I thought that ensemble shutdown was intended to wipe out the whole environment -- inverse of bootstrap
<RoAkSoAx> fwereade: that's one of the things I'm also planning to address.
<fwereade> that's what it seems to do on EC2 anyway :)
<RoAkSoAx> fwereade: yes, on ec2 it's inexpensive because you can fire up instances or destroy them on demand
<fwereade> RoAkSoAx: heh, ok
<RoAkSoAx> fwereade: but with real hardware it's not the same approach
<fwereade> RoAkSoAx: I've been working under the assumption that I should mirror ec2 behaviour as closely as possible
<fwereade> RoAkSoAx: at least for now ;)
<RoAkSoAx> fwereade: yes, but I think things like that
<RoAkSoAx> can be avoided for now
<RoAkSoAx> fwereade: I mean, wiping out provider-state is a super minor change
<RoAkSoAx> and I don't think it is necessary
<fwereade> RoAkSoAx: well, keeping a zookeeper around is quite a major difference, it seems to me :)
<fwereade> RoAkSoAx: well... it's very convenient for me :)
<RoAkSoAx> fwereade: indeed, but again, we are dealing with real hardware in this case
<RoAkSoAx> fwereade: sysadmins *won't* install zookeepers every week to deploy environments but rather, they would keep one zookeeper up and running at all times
<RoAkSoAx> fwereade: it is expensive in many ways: 1. downtime, 2. wasted network bandwidth, 3. the hardware is unusable meanwhile, 4. reinstalling every time is expensive
<RoAkSoAx> fwereade: why does this work on ec2? simply because i can fire up/destroy instances on demand and it costs 2 cents
<RoAkSoAx> fwereade: where there's a prebuilt image
<fwereade> RoAkSoAx: heh, got you, it's the system install cost not the zookeeper install cost (right?)
<RoAkSoAx> fwereade: yes
<fwereade> RoAkSoAx: ...but we still pay the system install cost for every other machine, right?
<RoAkSoAx> fwereade: right, *but* the idea is now to figure out a way of *re-using* the machines instead
<fwereade> RoAkSoAx: and if we have a local mirror it's not going to be *that* big a difference is it?
<fwereade> RoAkSoAx: ha -- I see :)
<fwereade> RoAkSoAx: that goal has escaped me
<fwereade> RoAkSoAx: sorry :)
<RoAkSoAx> fwereade: hehe but yeah even with a mirror it's still a big difference when deploying services
<RoAkSoAx> cause it still uses bandwidth
<RoAkSoAx> and multiplied by lots of servers
<RoAkSoAx> it is huge
<RoAkSoAx> fwereade: but yes, that's another thing that I was gonna bring up during the sprint
<fwereade> RoAkSoAx: you make a lot of sense
 * RoAkSoAx better starts writing down all this stuff otherwise he'll forget :)
<fwereade> RoAkSoAx: it just doesn't precisely fit with what I'd understood our goals to be -- I thought we were aiming for parity with ec2 for now, and figuring out the tricky stuff at the sprint
 * fwereade would appreciate that :)
<RoAkSoAx> fwereade: yeah we can do that if you want too
<RoAkSoAx> fwereade: dealing with VMs is as inexpensive as ec2
<niemeyer> I seem to remember the wiki sent us to the right page after authenticating
<niemeyer> It doesn't look like that's the case anymore
<fwereade> RoAkSoAx: well, that's my justification for what I've been doing
<RoAkSoAx> fwereade: but right now, what I was personally looking for is having it bootstrapping, deploying, etc, working (not really exactly the same as ec2, but close), so that during the sprint we could address these issues and differences with ec2
<fwereade> RoAkSoAx: I feel it's currently useful, towards that goal, even if things change as the plans firm up
<RoAkSoAx> fwereade: you don't need to justify as we didn't set any boundaries about stuff like this when we started
<fwereade> RoAkSoAx: that's my idea too, with the added condition of "on my local VM network"
<RoAkSoAx> fwereade: but my concern is that you might end up writing code that might be later dismissed :)
<fwereade> RoAkSoAx: deleting code is one of the great joys in life ;)
<RoAkSoAx> fwereade: hehehe alright
<RoAkSoAx> fwereade: again I don't mind you doing that, seriously, just giving you a broad view of what I have in my mind at the moment :)
<fwereade> RoAkSoAx: cool, I was worried I was going off into the weeds :)
<fwereade> RoAkSoAx: good to resync ;)
<RoAkSoAx> fwereade: nah.. either way, these things are gonna have to be discussed next week so my thoughts may change given input from others
<fwereade> RoAkSoAx: cool -- anyway, I'll handle 204s on .put() and propose launch-machine and bootstrap-verify-storage
<RoAkSoAx> fwereade: cool
<fwereade> RoAkSoAx: and that'll probably be my day, but I might be able to check in later when cath's gone to bed
<fwereade> RoAkSoAx: I'll make sure shadow-trunk is up to date with whatever I've proposed
<RoAkSoAx> i'll work on reviewing what would be missing from shadow-trunk in comparison to bootstrap's branch
<fwereade> fantastic
<niemeyer> <RoAkSoAx_> fwereade: i don't think we would need to delete provider-state on shutdown
<niemeyer> <RoAkSoAx_> fwereade: remember that we are dealing with physical hw
<niemeyer> <RoAkSoAx_> and it is expensive
<niemeyer> RoAkSoAx: destroy-environment should really destroy it..
<niemeyer> RoAkSoAx: I agree with you that physical hardware may make the admin act differently
<niemeyer> RoAkSoAx: E.g. not destroying the environment
<niemeyer> RoAkSoAx: It should be possible to terminate services and take them off the machines so that we can reuse not only the bootstrap machine but all of them
<fwereade> everyone: I need to be away sharpish, I'm afraid
<niemeyer> RoAkSoAx: But that's about _using_ the env
<niemeyer> RoAkSoAx: Without destroying it
<niemeyer> RoAkSoAx: Having ensemble destroy-environment not destroying it for reuse would be awkward
<fwereade> but I have a couple of new mps, and I would appreciate reviews from one and all, either on those or on their various prerequisites :)
<fwereade> enjoy your afternoons :)
<niemeyer> I'm stepping out as well, but for lunch.. biab
<RoAkSoAx> niemeyer: right, but from my point of view, destroying an environment should destroy everything, but leave the information from the zookeeper, so next time someone wants to bootstrap, it can detect "hey there's already a zookeeper, if it is sleeping, let's wake it up, if it is awake, let's use it"
<RoAkSoAx> niemeyer: and that way we save ourselves from reinstalling a machine again
<niemeyer> RoAkSoAx: zk is part of the environment
<niemeyer> RoAkSoAx: It's actually a key part of it
<niemeyer> RoAkSoAx: If one wants to save the time to redeploy zk, just don't destroy the environment
<niemeyer> RoAkSoAx: It's a "doctor, it hurts!" case :)
<_mup_> ensemble/states-with-principals r303 committed by kapil.thangavelu@canonical.com
<_mup_> statebase retry topology change respects change functions which yield control.
<niemeyer> bcsaller: How's it going with the local dev stuff?
<bcsaller> niemeyer: I'm working on trying to add flexibility to how machine assignment is done in deploy/add_unit. Those both use state.service.assign_to_unassigned_machine which clearly isn't always what we want.
<bcsaller> niemeyer: but specifying machines in deploy/add-unit is a little at odds with the co-location spec. It's a different axis to plot unit placement on
<niemeyer> bcsaller: Don't worry about co-location for the moment..
<niemeyer> bcsaller: This is really a different angle of the problem
<bcsaller> just keeping it in mind
<niemeyer> bcsaller: Cool, that's nice
<niemeyer> bcsaller: Hmm.. but we do have specific assignment, right?
<niemeyer> bcsaller: assign_to_unassigned is just one method we have
<bcsaller> its the only one used
<bcsaller> in the cli
<bcsaller> so really it becomes about providing access to other means for placement (as a starting point)
<bcsaller> I know there is a desire down the road to say things like `ensemble add-unit -n <num> service`
<bcsaller> but if deploy and add-unit grow syntax to support machine assignment I want it to be future friendly 
<niemeyer> bcsaller: Have you seen assign_to_machine?
<bcsaller> yes
<bcsaller> niemeyer: I think the issue comes in at the cli level to be clear
<niemeyer> bcsaller: That's why I don't get the problem you're describing.. sure, we have assign_to_unassigned_machine, which is the hard one..
<niemeyer> bcsaller: We also have an explicit one
<niemeyer> bcsaller: Which is easy to use
<bcsaller> niemeyer: it's literally an issue of cli syntax I'm talking about, not a coding hurdle
<niemeyer> bcsaller: Ahh, ok
<bcsaller> I don't want to blindly add new syntax that isn't friendly to the other efforts we have in mind 
<niemeyer> bcsaller: 100% with you
<niemeyer> bcsaller: Hmmm
<niemeyer> bcsaller: Here is an idea..
<SpamapS> How is this at all relevant to local dev?
<SpamapS> There's only one machine in local dev.
<niemeyer> bcsaller: Let's introduce a command named "set-devel-flag"
<niemeyer> SpamapS: Let's cover this in a moment..
<SpamapS> Which would be "available" because it can add containers.
<niemeyer> bcsaller: Or even better, "set-devel"
<bcsaller> SpamapS: that's an important part of the change, but right now the cli tools only look for unassigned machines so it's a little more pervasive
<niemeyer> bcsaller: Takes a json blob
<niemeyer> bcsaller: and stores it in zookeeper, within the topology in a "devel" key
<SpamapS> So to me, the current way, "find me an available machine" should just find you machine 0 .. your local machine. For EC2, since they can't do containers, they are unavailable as soon as they have 1 thing on them.
<niemeyer> bcsaller: So we can experiment with different settings
<niemeyer> SpamapS: Don't worry about it.. we're just splitting development in logical steps
<niemeyer> SpamapS: We'll eventually give you the feature you want.
<niemeyer> bcsaller: Or maybe it should really be "set-flag"
<niemeyer> bcsaller: So that we can use that later
<niemeyer> bcsaller: (rather than being specific to "development")
<niemeyer> bcsaller: This way you can create an alternative path within the logic by consulting specific flags
<niemeyer> bcsaller: Without altering the standard operation
<niemeyer> bcsaller: Thoughts?
<bcsaller> niemeyer: we could easily add arguments to deploy/add-unit that were conceptually --placement <strategy_or_plan> where it could be a machine id or the name of an available planner which could choose local, reuse, etc
<bcsaller> as a counter proposal 
<niemeyer> bcsaller: Yes, we could, .. we'd also have to worry about getting it right.. you already spent a day thinking about it and didn't get to a good plan, so my suggestion is to get unblocked and
<bcsaller> std ops through the code paths would all have to check those flags, which is fine, we want something like that anyway
<niemeyer> bcsaller: have the actual goal in mind for the moment.. we can worry about neat placement strategies down the road
<niemeyer> bcsaller: The problem we have at hand right now doesn't depend on this
<bcsaller> I don't need to build those now, that wasn't the point
<niemeyer> bcsaller: That's my point! ;-)
<bcsaller> +1
<bcsaller> I find that syntax better than talking about setting development flags in a json bucket, but under the hood it will play out much the same from the internals of those tools
<niemeyer> bcsaller: So every time you do deploy wordpress/mysql/etc, you'll have --placement ?
<cole> roadmap question:  I get that ~/.ensemble/environments.yaml can be very easily modified to scale an app.  is there a framework for allowing this to be done based on some performance threshold? like memory consumed or cpu utilization / overall cluster throughput etc… ?
<niemeyer> bcsaller: In local development there can't be anything besides --placement=local
<niemeyer> bcsaller: So where do you store the fact placement _has_ to be local?
<bcsaller> niemeyer: it would just default to doing what it does, "unassigned", which points to the existing method
<bcsaller> `local` is a method that says return machine<0>
<niemeyer> bcsaller: Ok.. that sounds good as well.. can you please describe the semantics end-to-end?
<niemeyer> cole: We'll be with you in a sec
<bcsaller> `ensemble deploy --placement local mysql`
<bcsaller> `ensemble deploy --placement local wordpress`
<bcsaller> would place two units and assign them to the machine returned by the policy, in this case machine 0 which is the local box
<bcsaller> internally this would replace the code in deploy and add-unit that maps/finds machines and does unit assignment with a callout to a policy by name. If that option is an int, the machine id is resolved and used with a different policy function doing specific assignment
<bcsaller> add-unit -n <num> --placement xxx could still be strange, with a policy it could work, with a machine id... ?
<bcsaller> but that doesn't seem to be a blocker to me 
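bcsaller's counter-proposal (a --placement name resolving to a policy function, defaulting to today's unassigned behaviour) could look roughly like this; the function names and data shapes here are invented for illustration:

```python
def place_unassigned(machines):
    # Existing behaviour: pick the first machine with no units on it.
    for machine in machines:
        if not machine["units"]:
            return machine
    raise LookupError("no unassigned machine available")

def place_local(machines):
    # "local" policy: always machine 0, i.e. the local box.
    return machines[0]

PLACEMENT_POLICIES = {
    "unassigned": place_unassigned,  # current default
    "local": place_local,            # for local development
}

def assign_unit(machines, policy_name="unassigned"):
    """Resolve a --placement name to a policy and pick a machine."""
    return PLACEMENT_POLICIES[policy_name](machines)

machines = [{"id": 0, "units": ["mysql/0"]}, {"id": 1, "units": []}]
assert assign_unit(machines, "local")["id"] == 0
assert assign_unit(machines)["id"] == 1
```

This also matches the later point in the conversation: a provider can choose its own default policy name (the local provider defaulting to "local") so that users never have to pass --placement in the common case.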
<hazmat> niemeyer, bcsaller unrelated to the current discussion, i was looking over the co-location stuff on the ML, and was wondering if this isn't easier with the relation qualification co-located, or a new relation type "container"; the distinguishing characteristic is the physical placement. it's odd indeed for a local co-located service to talk to an opposite-end remote service. it's more of a local thing: either p2p relations between those units deployed in the same container, or a bus/ring container relation containing only the local units.
<hazmat> bcsaller, that sounds good if default placement policy can derive from provider
<niemeyer> bcsaller: We don't have to address specific assignment for the moment
<hazmat> thus obviating the need for specifying it in the common case
<niemeyer> bcsaller: I want to avoid the "I want this in machine X" feature for now
<niemeyer> bcsaller: Because it blocks other characteristics we're interested in
<bcsaller> niemeyer: I prefer that as well 
<niemeyer> hazmat: Sorry, I'll be with you soon.. let me unroll the stack of questions
<niemeyer> bcsaller: Ok
<bcsaller> niemeyer: a couple of named policies that map cli stuff to the service assignment code then is pretty simple and seems future aware
<niemeyer> bcsaller: So --placement local sounds fine to bootstrap.. the local provider can somehow determine the default policy down the road
<bcsaller> hazmat: it makes total sense that providers can carry code for specific policies 
<hazmat> cole, it's definitely something we're thinking about, but it's probably a ways out; we're currently working out how to get things like default service monitoring onto systems. in future, with monitoring and a remote api for ensemble, a user could provide scaling logic; it's probably a while till ensemble provides it as a generic feature.
<hazmat> bcsaller, not that they should per se have code, ideally it could be generic, just that they specify a default named policy
<niemeyer> bcsaller: The point was more that we need to tweak default policy according to backend
<bcsaller> ok
<niemeyer> bcsaller: We don't want --placement switches on every single call on a local dev
<cole> hazmat: thanks!  I figured as much.  I think we might be able to help in that area.  project looks like it's coming along nicely!
<bcsaller> right
<bcsaller> got it
<niemeyer> bcsaller: But I see your overall plan, it's a good idea, +1
<bcsaller> cool, I can work on a branch for that today, sounds pretty simple
<hazmat> cole, fwiw, as is though the ensemble cli already enables the ability to scale a service and automatically reconfigure clusters for the additional capacity, just not the automated scaling bit in response to service conditions.
<niemeyer> SpamapS: So..
<cole> hazmat: yep, got it!
<niemeyer> SpamapS: The way the work is being structured is this:
<niemeyer> <niemeyer> 1) Make multiple units work on a single machine across the board (no LXC)
<niemeyer> <niemeyer> 2) Make local deployments work with one or multiple units (no LXC)
<niemeyer> <niemeyer> 3) Make LXC work to deploy units locally (doesn't matter if EC2 can't do it yet)
<niemeyer> SpamapS: bcsaller is working on step (1) still (he started yesterday :-)
<SpamapS> Cool, I had a branch that did 1 with --machine $machine_id .. tho it was failing tests last I checked.
<niemeyer> SpamapS: That's exactly the context of the conversation.. I don't want to nail the problem of specific assignment for the moment.. there are other approaches we can take for that (resource interest, service proximity, etc), and it's really unrelated to the core problem we're solving for local development
<niemeyer> SpamapS: So I had one suggestion, and bcsaller has a better suggestion which we'll go down with.. --placement local..
<niemeyer> SpamapS: This is a trivial bootstrap process that keeps the complex problems for later
<bcsaller> SpamapS: where using a local provider would change the default placement policy for you
<SpamapS> ACK
<hazmat> niemeyer, although placement considering the service to be deployed (resource interest, service proximity)  will need to receive it as part of the placement api
<niemeyer> hazmat: Ok, re. co-location.. I agree the flag on the relation is probably all we need
<niemeyer> hazmat: I don't see it as being special, though
<niemeyer> hazmat: These relations still need well defined interfaces
<hazmat> niemeyer, yeah.. well its not clear that a local co-located service needs to have any access to the remote units
<niemeyer> hazmat: They don't _have_ to
<hazmat> er. its opposite end
<niemeyer> hazmat: But they should be _able_ to
<niemeyer> hazmat: re. the placement point above, yes, I'm not trying to define how that's going to work right now
<niemeyer> hazmat: Was rather just mentioning there are additional things we'll want to talk about and understand when sorting this actual issue
<niemeyer> hazmat: The problem we have at hand right now is much simpler, though
<hazmat> indeed
<hazmat> okay.. i did some reviews and security work today, switching tracks i'm going to do a presentation tonight at a local python user group, going to prep for that
<niemeyer> hazmat: Cool.. I'll switch to reviews.. is there something blocking you on that front?
<niemeyer> I'd like to sort all of William's branches today, hopefully
<hazmat> niemeyer, nope.. i've just been going through william's branches.. on the security front, the integration work is coming along, i've reworked the interfaces a few times, most recently to enable us to turn off security by default for tests (default for now is enabled), still a little bit of refactoring to do on the policy.. i'm trying to finish the end to end so i can fix up policy-rules branch based on better knowledge of its application.
<niemeyer> hazmat: Cool
<niemeyer> Huge wind storm here today
<jcastro> how do you move between VTs in the tmux thing when you're in debug mode?
<hazmat> jcastro, ctrl-a
<hazmat> is the escape sequence, the tmux config in debug-mode is set up to emulate screen
<jcastro> ah, been spoiled by byobu I guess, heh
 * jcastro finishes up his ensemble screencast
<niemeyer> jcastro: We hope to use byobu again at some point
<jcastro> easy to forget how spoiled I was
<niemeyer> jcastro: kirkland is working on a set of configs for tmux, and hopefully we can also bring screen back in the future
<hazmat> niemeyer, any progress on the repo work?
<hazmat> just using the principia-tools to setup a demo.. and thinking ick
<niemeyer> hazmat: None..
<niemeyer> hazmat: Stuck on reviews, interviews, conversations, etc
<niemeyer> hazmat: Hoping to get to it this week still
<SpamapS> Ok I just uploaded txzookeeper 0.8.0 to oneiric.. and will upload trunk shortly as well.
<SpamapS> hazmat: If there's anything minor I can do to make principia less "ick" .. let me know. I've tried to make it a little better of late. :-P
<SpamapS> hazmat: don't want to spend much time on it though.. :)
<hazmat> SpamapS, i appreciate the work on it, just wishing for a repository to obviate the need for additional tools to deploy
<SpamapS> hazmat: exactly
<SpamapS> hazmat: I'd like a better repo too.. principia is.. well a nice experiment. :)
<SpamapS> hazmat: note that there's a 'principia update' command now.. which pulls a new list of formulas
<SpamapS> hazmat: and some of the commands have --help
<jcastro> SpamapS: what!
<jcastro> where?
<hazmat> SpamapS, if i had to capture in one line the three things to make the dev story better.. it would be "local dev, no formula revs, pre-allocate machines"
<SpamapS> jcastro: in the ppa
<jcastro> oh man
<SpamapS> jcastro: sudo apt-get install principia-tools
<jcastro> I totally missed that
<hazmat> SpamapS, getall by itself seems to do the trick of updating (mr seems to do it)
<jcastro> also, check it out: http://www.youtube.com/watch?v=4Rl7wTlUqkY
<SpamapS> hazmat: getall calls update :)
<hazmat> well of grabbing new formulas
<hazmat> nice :-)
<hazmat> SpamapS, was that a good summation of things? or are there others that get top billing?
<SpamapS> hazmat: yeah definitely.. though I have to say, the formula dev story is already pretty damn good.. our standards just keep going up. :)
<hazmat> SpamapS, indeed, but precious seconds get lost, and turned into minutes.. we keep getting busier ;-)
<hazmat> i think i figured out a quick way to pre-allocate machines, but the allocation doesn't take place till the first formula is deployed
<hazmat> which is kind of a bummer, it's more like a delayed pre-allocation
<SpamapS> yeah I think it actually makes sense to enable it as its own command
<hazmat> SpamapS, like add-machines 5 ?
<SpamapS> ensemble bootstrap && ensemble allocate-machines --ec2.instance-type=m1.small 10
<SpamapS> It would be cool to have every aspect of the environment available as --env.x=foo
<SpamapS> Would solve a lot of the "need a way to specify X at runtime"
<SpamapS> jcastro: cool video
<hazmat> hmmm.. that sounds good re allocate-machines.. the env.x syntax is likely problematic.. it's kind of redundant in that the cli is already targeting an env for any op, so the qualification is odd
<SpamapS> hazmat: it's to prevent namespace collision
<SpamapS> hazmat: doesn't have to be ec2. .. could be --envset instance-type=m1.small
<SpamapS> hazmat: or just bury it in the positional args
<SpamapS> hazmat: just seems like a good idea to be able to override settings at runtime, that's all
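SpamapS's idea of overriding environment settings at runtime is easy to prototype with argparse; none of these flag or command names are real Ensemble CLI, they are purely illustrative:

```python
import argparse

# Hypothetical "allocate-machines" subcommand with a generic
# --envset KEY=VALUE override, as floated in the conversation.
parser = argparse.ArgumentParser(prog="ensemble-allocate-machines")
parser.add_argument("--envset", action="append", default=[],
                    metavar="KEY=VALUE",
                    help="override an environment setting for this run")
parser.add_argument("count", type=int, help="number of machines to allocate")

args = parser.parse_args(["--envset", "instance-type=m1.small", "10"])
# Turn the repeated KEY=VALUE flags into a dict of overrides that
# would be merged over the settings loaded from environments.yaml.
overrides = dict(item.split("=", 1) for item in args.envset)
assert overrides == {"instance-type": "m1.small"}
assert args.count == 10
```

Using a repeatable generic flag sidesteps hazmat's redundancy concern: there is no per-provider prefix like ec2., just plain keys merged over the environment the cli is already targeting.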
<hazmat> SpamapS, ic.. i was thinking just allocate-machines --provider-size=m1.small  4
<niemeyer> Ugh.. almost 8
<niemeyer> Time flew by today
<niemeyer> ALRIGHT!
<niemeyer> We have an almost empty review queue!
<niemeyer> It's been a while..
<niemeyer> But!
<niemeyer> We still need a hand on this one:
<niemeyer> https://code.launchpad.net/~fwereade/ensemble/webdav-storage
<niemeyer> It lacks a second review
<niemeyer> Any takers?
<_mup_> Bug #820107 was filed: Ensemble should enable flexible unit placement <Ensemble:In Progress by bcsaller> < https://launchpad.net/bugs/820107 >
#ubuntu-ensemble 2011-08-03
<niemeyer> Alright.. that was a long day
<niemeyer> Have a good night all
<niemeyer> Or a good day, I guess
 * niemeyer => bed
<niemeyer> Good morning lads
<fwereade> niemeyer: heyhey, didn't see you there :)
<niemeyer> fwereade: Hey man!
<fwereade> niemeyer: thanks for the reviews, much appreciated
<niemeyer> fwereade: No worries, I hope it all makes sense
<fwereade> niemeyer: yeah, some excellent points
<fwereade> niemeyer: I'm trying to figure out the right shape for the zookeeper-finding stuff, and nothing's really making me happy so far, but I'll find something
<niemeyer> fwereade: Cool, please let me know if you want to brainstorm on something there
<hazmat> g'morning
<hazmat> ensemble talk at local python user group went really well about 25 folks on hand
<niemeyer> hazmat: Morning!
<niemeyer> hazmat: Oh.. good interest?
<hazmat> niemeyer, indeed, several folks doing cloud based deployments, almost 10 had used puppet/chef in the past... i was surprised how many python old-timers were there. had some good talks with jim fulton (cto zope corp) and some others afterwards
<fwereade> hazmat: cool :)
<niemeyer> hazmat: Wow, sweet
<niemeyer> hazmat: Was that a Linux user group, or a Python one?
<niemeyer> hazmat: Or a * user group? :)
<hazmat> still trying to track one of the attendees down, i want to talk some more, he had a lot of sysadmin chops on 1k scale deployments.
<hazmat> niemeyer, python user group, by old-timer i meant 10+ years of python.
<niemeyer> hazmat: Sweet
<niemeyer> hazmat: Yeah, would be good to have that kind of contact
<hazmat> lots of macbook pros in the room.. but i said.. ubuntu is only a vm away ;-)
<hazmat> although i guess we have a macports install for ensemble, albeit frozen
<fwereade> niemeyer: btw, I think I have a functiony approach that works quite well, but naming is a bit of an issue; I've gone with find_zookeepers_core (that takes a provider and a callback that should turn an instance id into a machine, or raise the appropriate exception)
<fwereade> but "_core" doesn't feel quite right
<fwereade> "_base" is too overloaded IMO
<fwereade> and just "find_zookeepers" forces me into "as" imports
<fwereade> which suck
<fwereade> heh, as soon as I speak in public I think of something better
<niemeyer> fwereade: Aha, I see
<niemeyer> fwereade: Change the other ones, then
<niemeyer> fwereade: find_orchestra_zookeepers
<niemeyer> fwereade: Which will take fewer arguments, I suppose
<fwereade> niemeyer: I guess, but then we have the namespace repetition
<fwereade> niemeyer: ...and I now realise that there's an uncomfortable overlap between what I have and some other ec2 code
<fwereade> niemeyer: I think it was always there, but it's just become obvious
<fwereade> niemeyer: more thinking required 
<fwereade> incidentally, any objections to introducing MachineNotFound and MachineNotReady errors?
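The two proposed error types are a small addition; a sketch (the base class and docstrings are guesses, not Ensemble's real hierarchy):

```python
class ProviderError(Exception):
    """Hypothetical common base for provider machine errors."""

class MachineNotFound(ProviderError):
    """The instance id has no corresponding machine (e.g. terminated)."""

class MachineNotReady(ProviderError):
    """The machine exists but isn't usable yet (e.g. no address
    assigned), so callers may want to retry rather than give up."""
```

Distinguishing "gone" from "not yet" lets a caller like the zookeeper-finding code skip the former and wait on the latter.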
<hazmat> niemeyer, is there an equivalent of virtualenv for goinstall ?
<niemeyer> hazmat: There's no need to..
<niemeyer> hazmat: It won't install anything on your system
<niemeyer> hazmat: Just define something like "export GOPATH=~/gopath"
<niemeyer> hazmat: Everything will be there
<niemeyer> fwereade: Agreed regarding repetition.. but there's no way to avoid it, one way or another
<niemeyer> fwereade: E.g.
<niemeyer> fwereade: We can also call the other side "find_zookeepers_common"
<niemeyer> fwereade: Instead of "_core"..
<niemeyer> fwereade: Which is repeating the "common" namespace
<fwereade> niemeyer: well, my thought was that actually the only provider-specific bit is a function that takes an instance id and returns a machine or raises an error
<niemeyer> fwereade: So either: 1) We rename on import, or 2) We add the namespace so we can use both in the same context
<niemeyer> fwereade: 2) sounds better
<niemeyer> fwereade: I have no preference in terms of which side to rename
<fwereade> niemeyer: but I've realised there's actually no need to have more than one find_zookeepers function anyway
<niemeyer> fwereade: find_zookeepers_common or find_orchestra_zookeepers sounds ok
<niemeyer> fwereade: How so?
<fwereade> niemeyer: the only difference between ec2 and orchestra is the callback which determines whether an instance id is good enough to turn into a machine and return
<fwereade> niemeyer: so all I need to do is come up with a good name for *that*, and somehow deal with the fact that the code is very similar to existing code that turns instance_ids into machines
<niemeyer> fwereade: Hah, neat
<niemeyer> fwereade: Woohay composition
<niemeyer> :-)
<fwereade> niemeyer: yeah, it's rather pleasing
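The composition fwereade is describing can be sketched roughly as follows. This is illustrative only: the provider method and machine names are invented, and the real Ensemble code is Twisted/deferred-based rather than synchronous.

```python
# Hypothetical sketch: one shared find_zookeepers, parameterized by a
# provider-specific callback that turns an instance id into a machine or
# raises MachineNotFound.  Only find_zookeepers/MachineNotFound come from
# the chat; everything else is invented for illustration.

class MachineNotFound(Exception):
    """Raised when an instance id cannot be resolved to a machine."""

def find_zookeepers(provider, machine_from_instance_id):
    """Return machines running zookeeper, skipping unresolvable ids."""
    machines = []
    for instance_id in provider.get_zookeeper_instance_ids():
        try:
            machines.append(machine_from_instance_id(instance_id))
        except MachineNotFound:
            continue
    return machines

# A toy provider standing in for the ec2/orchestra implementations:
class FakeProvider:
    def get_zookeeper_instance_ids(self):
        return ["i-1", "i-2", "i-3"]

def fake_resolver(instance_id):
    # the provider-specific bit: resolve the id or raise
    if instance_id == "i-2":
        raise MachineNotFound(instance_id)
    return "machine-for-" + instance_id

print(find_zookeepers(FakeProvider(), fake_resolver))
```

With this shape, testing the common path needs only a fake provider and a fake resolver.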
<niemeyer> fwereade: Testing also gets a lot easier
<fwereade> niemeyer: ...potentially :)
<fwereade> niemeyer: I've noticed at least one place where we do just mock out our own code, and trust that the result of composing verified pieces is itself effectively verified, but I understood you had some pretty serious reservations with that approach
<fwereade> niemeyer: that statement makes me think I may not have fully grasped your position
<fwereade> niemeyer: certainly, I can test the common one very nicely, but I'm not sure what level of testing you'd prefer for the actual provider implementations
<niemeyer> fwereade: It's more likely that the code went in unnoticed.. I'm pretty sure you understand my position well
<fwereade> niemeyer: hah, ok then :)
<fwereade> niemeyer: you'll be pleased to know it is verified separately as well, as I recall
<fwereade> niemeyer: but I'll make a note to kick it about a bit next time I notice it
<niemeyer> fwereade: I've seen such "that's how it should be" cross-mocking test cases pass gracefully with an implementation that exploded in practice
<fwereade> niemeyer: likewise ;)
<fwereade> niemeyer: I have fuzzy ideas about a fruitful middle ground, but it's a total derail now, maybe I'll get around to writing them up properly at some point though
<niemeyer> fwereade: mocker has means to avoid some of the pain, but it's still very far from perfect
<niemeyer> fwereade: (it actually checks interfaces) 
<fwereade> niemeyer: I saw that, and liked it
<fwereade> niemeyer: but indeed, no silver bullet :)
<niemeyer> fwereade: Yeah, "takes the same number of arguments" is far from a valid guarantee
<fwereade> please confirm: all I need to do to get cobbler-find-zookeepers back in the queue is to mark it "needs review" again
<niemeyer> fwereade: That's right
<fwereade> niemeyer: cheers
<fwereade> niemeyer: cobbler-launch-machine review [7]
<fwereade> Why are we "fixing" a dictionary in-flight rather than producing
<fwereade> the right one?
<fwereade> ...please expand
<fwereade> is it just that you'd rather build the dictionary up from scratch?
<niemeyer> fwereade: No.. passing untyped blobs of data is already not ideal by nature
<fwereade> niemeyer: certainly
<niemeyer> fwereade: The logic there goes one step further by moving keys around within the dicionary
<niemeyer> dictionary
<niemeyer> fwereade: Why is it doing so?
<fwereade> niemeyer: because the expected format of machine_data doesn't match the expected format of the args to format_cloud_init
<fwereade> niemeyer: I seem to recall the ec2 provider munges it a bit as well
<niemeyer> fwereade: Why?
<niemeyer> fwereade: It's clear that we do it.. what's not clear is why we do it
<fwereade> niemeyer: ha :)
<fwereade> niemeyer: I have no idea
<niemeyer> fwereade: So let's not do it
<fwereade> niemeyer: sounds good :)
<fwereade> niemeyer: otherwise, re gzipping the data
<fwereade> niemeyer: you just don't feel there's any call to go gzipping things until we're using a lot more space?
<fwereade> niemeyer: or is there some other consideration?
<niemeyer> fwereade: No, that's all really.. it feels like added complexity without benefit
<niemeyer> fwereade: I suspect the ungzipped content should fit within a single ip packet.. given that and how rare the event is, it sounds unnecessary
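niemeyer's point is easy to check: on a payload this small, gzip's fixed header and trailer outweigh any savings. The payload below is invented for illustration.

```python
# gzip adds a fixed header/trailer; on a tiny payload that overhead can
# exceed any compression gain.  The payload here is made up.
import gzip

payload = b'{"zookeeper-instances": ["10.0.0.1:2181"]}'
compressed = gzip.compress(payload)
print(len(payload), len(compressed))  # the "compressed" form is larger

# and of course the data round-trips:
assert gzip.decompress(compressed) == payload
```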
<fwereade> niemeyer: yep, and we can always reinstate it if it becomes a problem :).
<fwereade> niemeyer: I think I know what I'm doing now then :)
<niemeyer> fwereade: It won't become a problem.. who would need more than 640k of memory
<fwereade> niemeyer: :D
<niemeyer> ;-)
<fwereade> niemeyer: incidentally: there's no way to fix the incorrect prereq without resubmitting the mp, is there?
<niemeyer> fwereade: I thought it was just a matter of editing it.. but I've never tried to be honest
<fwereade> niemeyer: I see no editability
<fwereade> niemeyer: not to worry, resubmissions have previous-version links IIRC
<RoAkSoAx> fwereade: howdy!!
<fwereade> RoAkSoAx: heyhey
<fwereade> RoAkSoAx: how's life?
<RoAkSoAx> fwereade: good good, how about yours?
<fwereade> RoAkSoAx: yeah, I just had a nice "I'm being paid to sit at home and write code!" moment :)
<fwereade> RoAkSoAx: and stuff's making its way into trunk
<RoAkSoAx> fwereade: hehehe cool for both
<RoAkSoAx> fwereade: did you receive my email?
<fwereade> RoAkSoAx: I saw; I probably won't get onto integrating them today I'm afraid
<fwereade> RoAkSoAx: didn't see what branch they were in though...
<RoAkSoAx> fwereade: that's fine, I just have a few things to fix first
<RoAkSoAx> fwereade: haven't put them up yet, i'm preparing it now
<fwereade> RoAkSoAx: ah cool
<RoAkSoAx> fwereade: but this is the result: http://pastebin.ubuntu.com/657941/
<fwereade> RoAkSoAx: sweeet :D
<RoAkSoAx> fwereade: ok so take a look at this first, and then we can discuss it as there's a few things to discuss: http://paste.ubuntu.com/657943/
<fwereade> RoAkSoAx: ok, just need a quick break before conversation, back in a mo
<RoAkSoAx> fwereade: sure I'm gonna prepare some tea ;)
<fwereade> RoAkSoAx: ready when you are
<RoAkSoAx> fwereade: do you have any thoughts about it first?
<fwereade> RoAkSoAx: looks broadly sensible, I think, I've got my eye on the existing structure in ec2 for possible rearrangement, but it's really helpful that it matches closely
<RoAkSoAx> fwereade: it's pretty much exactly the same as the ec2 classes but with minor changes
<fwereade> RoAkSoAx: yep, exactly
<RoAkSoAx> fwereade: 1. in cobbler.py I've added describe_systems (instead of using describe_system). I've done this to be more closely related to describe_instances from ec2. If we pass the instance_id it will just return the system for that instance id, in a *list*
<RoAkSoAx> fwereade: or return a list of systems if instance_id not specified
<fwereade> RoAkSoAx: I saw that, I'll probably change it to take (self, *instance_ids) to match more closely
<fwereade> RoAkSoAx: I'm a little unsure about whether it's directly analogous though
<RoAkSoAx> fwereade: note that I did this because *describe_instance* is used in other places and it doesn't assume that it returns a list, but rather just the info of a system
<fwereade> RoAkSoAx: yep, that's cool
<fwereade> RoAkSoAx: I'm just a little uncertain about the differences between what that returns and ec2 returns
<RoAkSoAx> fwereade: but the reason why I did it is because we need to either return the info of a specified system, or the info of all the systems
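A sketch of the interface being discussed, using fwereade's proposed `(self, *instance_ids)` signature; the client internals and field names are invented, not the real cobbler API.

```python
# Illustrative only: describe_systems always returns a list, narrowing to
# the given ids when any are passed (mirroring ec2's describe_instances).
class FakeCobblerClient:
    def __init__(self, systems):
        self._systems = systems  # list of dicts, like cobbler returns

    def describe_systems(self, *instance_ids):
        if not instance_ids:
            return list(self._systems)
        wanted = set(instance_ids)
        return [s for s in self._systems if s["uid"] in wanted]

client = FakeCobblerClient([{"uid": "a", "name": "node-a"},
                            {"uid": "b", "name": "node-b"}])
print(client.describe_systems("a"))    # a one-element *list*, not a bare dict
print(len(client.describe_systems()))  # all systems
```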
<RoAkSoAx> fwereade: yes, that's also what I want to discuss 
<fwereade> RoAkSoAx: it may be one of those fundamental differences
<RoAkSoAx> fwereade: indeed. so looking at iterate.py if you notice there's a fixme in line 120 of the diff I sent
<RoAkSoAx> fwereade: in the commented part we are handling stuff such as "instance.dns_name" (line 128), while the way the systems are returned from cobbler, we cannot use it like that but rather something like "instance["name"]"
<fwereade> RoAkSoAx: but it might be worth filtering on available mgmt_classes
<fwereade> RoAkSoAx: I think those classes are expecting to work with Machine instances rather than system names
<fwereade> RoAkSoAx: oops misunderstood
<fwereade> RoAkSoAx: my internal definition of "describe_system" was "get everything you know", rather than "give me an identifier"
<RoAkSoAx> fwereade: which I believe is the right thing to do for describe_systems. My only concern is why on ec2 it uses stuff such as instance.dns_name or instance.instance_state in that particular function
<fwereade> RoAkSoAx: just because that's what we get back from txaws
<RoAkSoAx> fwereade: cause in our case, we would have to use instance["what-ever-field-on-dictionary"]
<RoAkSoAx> fwereade: ah ok, so that answers my questions then
<fwereade> RoAkSoAx: I feel that comes under the category of "acceptable difference"
<fwereade> RoAkSoAx: cool
<RoAkSoAx> fwereade: indeed.
<RoAkSoAx> fwereade: one more thing. in cobbler.py lines such as: info = yield self._call("get_system", (name,))
<RoAkSoAx> fwereade: that call get_systems calls what?
<fwereade> RoAkSoAx: sorry, can't parse
<fwereade> RoAkSoAx: best guess, it calls the "get_system" cobbler api function, which returns the big dict
<fwereade> RoAkSoAx: which "get_systems" helpfully gives us just a list of names ;/
<RoAkSoAx> fwereade: it gives us the list of systems with all its information
<RoAkSoAx> fwereade: without filtering management classes
<fwereade> RoAkSoAx: the singular version gives us loads of information
<fwereade> RoAkSoAx: I'm pretty sure the plural version just gives us a list of names
<RoAkSoAx> fwereade: no, both return the system(s) with all the information
<fwereade> RoAkSoAx: ...I'll have to check that, I'm sure I was getting different results
<RoAkSoAx> fwereade: ok
<RoAkSoAx> fwereade: ok so, let's imagine it returns a list of all systems without filtering management-classes
<fwereade> RoAkSoAx: ok
<fwereade> RoAkSoAx: just checked, you're perfectly right
 * fwereade peers suspiciously at reality
<RoAkSoAx> fwereade: heheh :) anyways, what do you think we should do
<RoAkSoAx> fwereade: 1. describe only the system filtering by management classes or 2. obtain absolutely all the systems and only filter them in _filter_provider_machines on iterate.py
<fwereade> RoAkSoAx: to match ec2, it makes sense to filter so we only get acquired systems
<fwereade> RoAkSoAx: and do that via find_system perhaps?
<RoAkSoAx> fwereade: there's a way to return only the set of systems we want
<RoAkSoAx> let me look into that first
<RoAkSoAx> fwereade: but yes that's what I was thinking too
<fwereade> RoAkSoAx: I changed some get_systems-and-iterate code to use find_system a while ago
<RoAkSoAx> fwereade: anyways, I guess we are on the same page now, I'll finish this stuff and fw it to you for review/upload/tests or whatever you need to do to get it merged ;)
<fwereade> RoAkSoAx: the trouble is I can imagine cases where either has pathologically bad performance
<fwereade> RoAkSoAx: sounds good :)
<fwereade> RoAkSoAx: btw, I just pushed the latest set of updates to shadow-trunk
<RoAkSoAx> fwereade: cool, will pull now ;)
<RoAkSoAx> fwereade: but it looks like by friday we would have shadow-trunk up and running perfectly in preparation for the sprint ;)
<fwereade> RoAkSoAx: fingers crossed :)
<hazmat> hmm.. i think i've hit a zk binding bug
<niemeyer> Uh oh
<robbiew> http://www.sadtrombone.com
<hazmat> the version doesn't increment publicly on acls
<hazmat> working hypothesis set_acl doesn't seem to modify version
<niemeyer> hazmat: I was kind of expecting this
<niemeyer> hazmat: I wouldn't expect watches to fire with an acl change
<hazmat> niemeyer, makes sense, just not what i had done with the acl node modifications with a version retry loop, i guess there isn't anything to do with concurrent mods here. interestingly it does the right things for errors here, as using the same current version for multiple set_acls gives a badversionerror
<niemeyer> hazmat: That sounds suspect
<niemeyer> hazmat: It'd mean a node can only have its acl changed once
<hazmat> niemeyer, per version
<niemeyer> hazmat: Yes..
<hazmat> hmm
<hazmat> nevermind, i'm still missing the bug somewhere
<hazmat> i'm getting badversionerrors out of the acl modification code, even though i'm definitely passing in a current version
<hazmat> niemeyer, aha.. that is the bug.. but only if version == 0
<RoAkSoAx> fwereade: ping
<fwereade> RoAkSoAx: pong
<niemeyer> hazmat: Well, it probably expects the current version to be provided
<niemeyer> hazmat: But it won't update the version
<niemeyer> hazmat: So you can set multiple times
<niemeyer> hazmat: Is that right?
<hazmat> niemeyer, the current version is being passed in all cases, if node version > 1, works fine, you can setacl multiple times in a version, unless the node version == 0, ie the initial node content.
<hazmat> then it raises badversionexc
<RoAkSoAx> fwereade: I was looking at the connect.py from ec2, and I was wondering whether the SSHClient stuff might explain why, when obtaining the data for cloud-init for any deployed systems, it checked for the keys
<RoAkSoAx> fwereade: were you able to look into that further?
<niemeyer> hazmat: Interesting..
<hazmat> RoAkSoAx, i'm not familiar with the orchestra integration, but if cloud-init is being processed correctly, ensemble should already  be passing in the keys for sshclient to work correctly
<niemeyer> hazmat: Sounds like a bug indeed
<fwereade> RoAkSoAx: I'm afraid I never did
<RoAkSoAx> hazmat: right, the problem we had was that when I deployed, the zookeeper machine checked to see if it had public ssh keys in the zookeeper, and we weren't able to identify why; but now that you are saying that the client machine should pass the keys to the zookeeper, that would make sense and should work
<niemeyer> By the way, I'm working on the Formula Repository spec!
<niemeyer> It took just a year to start this.. I guess I'll drink something tonight to celebrate
<fwereade> well begun is best begun, and best begun is nearly finished :P
<robbiew> m_3: what's your shirt size..3X or 2X :)
<hazmat> niemeyer, actually it looks like you can't do multiple sets in a single version, you need to increment the version beyond what any of the node apis say the current version is for a subsequent set to work against, http://pastebin.ubuntu.com/658008/
<hazmat> regardless of the value of version
<niemeyer> hazmat: This test doesn't give away much in terms of what's actual behavior (vs. what expected)
<hazmat> RoAkSoAx, it happens differently depending on whether it's the bootstrap node or seeding zk for the provisioning agent
<hazmat> RoAkSoAx, for the bootstrap node the client key comes from cloud-init, for the provisioning agent, on the initial deploy the ssh keys are transported along with the materialized environment config to zk
<hazmat> er. ssh pub key that is
<hazmat> niemeyer, the failing cases are the ones setting the acl multiple times; i tried to verify with multiple version numbers, but it's simpler without the additional/redundant check
<niemeyer>         # Now i can do multiple setacls against the same version
<niemeyer>         # Fail if set multiple in same version
<niemeyer> hazmat: The two sentences disagree..
<hazmat> niemeyer, yeah.. its rough.. the principals should be different as well
<niemeyer> hazmat: So it fails every time?
<niemeyer> hazmat: When setting multiple?
<hazmat> niemeyer, yes, using setacl multiple times against the same version fails, and the publicly available node version doesn't change with setacl
<niemeyer> hazmat: Would be worth understanding the background about this
<niemeyer> hazmat: It feels like a very awkward behavior
<niemeyer> hazmat: It must be keeping track of the version in which ACLs are being set, and preventing following changes
<niemeyer> hazmat: Which feels like additional work that would have to be explicitly done, rather than an oversight
<hazmat> niemeyer, i see now.. there is another version number tracking all node changes
<hazmat> {'pzxid': 3L, 'ctime': 1312387130343L, 'aversion': 2, 'mzxid': 6L, 'numChildren': 0, 'ephemeralOwner': 0L, 'version': 1, 'dataLength': 7, 'mtime': 1312387130408L, 'cversion': 0, 'czxid': 3L}
<hazmat> aversion has the right version mod to use
<hazmat> tracked separately from content so as not to trigger change watches
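A toy model of what hazmat found; this is an illustration of the stat semantics, not real ZooKeeper code. The znode stat keeps "version" for data changes and "aversion" for ACL changes, and set_acl checks and bumps only the latter.

```python
class BadVersionError(Exception):
    pass

class FakeZNode:
    """Toy stand-in for a znode's stat bookkeeping (not real ZK code)."""
    def __init__(self):
        self.stat = {"version": 0, "aversion": 0}

    def set_data(self, data, version):
        if version != self.stat["version"]:
            raise BadVersionError()
        self.stat["version"] += 1

    def set_acl(self, acl, aversion):
        # acl changes are checked against (and bump) aversion, not version,
        # so data watches never see them
        if aversion != self.stat["aversion"]:
            raise BadVersionError()
        self.stat["aversion"] += 1

node = FakeZNode()
node.set_acl("acl-1", 0)
node.set_acl("acl-2", 1)  # a retry loop must track aversion, not version
print(node.stat)          # version untouched, aversion advanced
```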
<niemeyer> hazmat: Aha, phew
<m_3> robbiew: better go 3x if you're doing t-shirts
<robbiew> m_3: ack
<m_3> robbiew: thanks!
<fwereade> niemeyer: would you object to my making the machine_data/format_cloud_init-args change in a separate branch? I think it's going to add a bit too much noise to cobbler-launch-machine, but having it in a new branch would be a good opportunity to replace the machine_data bag that we all agree is suboptimal
<fwereade> need to be off, I'm afraid, see you all tomorrow
<fwereade> reviews of my active mps always welcome ;)
<_mup_> ensemble/states-with-principals r304 committed by kapil.thangavelu@canonical.com
<_mup_> refactor security test integration, additional encapsulation of security convenience apis into a domain node adapter.
<_mup_> ensemble/states-with-principals r305 committed by kapil.thangavelu@canonical.com
<_mup_> machine state creation security integration using the domain node adapter
<fwereade> hey... was there a meeting at which I should have been just now?
<RoAkSoAx> fwereade: late-command generation is broken i think
<RoAkSoAx> fwereade: it fails to execute during installation
<fwereade> RoAkSoAx: curses: since when? I appear to have totally broken something here, but I think I've somehow screwed up a version of a dependency
<RoAkSoAx> fwereade: since right now... yesterday I was working with an old zookeeper I had running cause i couldn't install from mini iso for some reason
<RoAkSoAx> fwereade: but today installation works, but late-command execution fails
 * fwereade looks crestfallen
<fwereade> RoAkSoAx: I don't suppose you have a working ks_meta lying around on an old machine you can diff against?
<fwereade> an old VM, from yesterday or something, I mean
<RoAkSoAx> fwereade: i don't
<RoAkSoAx> unfortunately
<RoAkSoAx> fwereade: ok found the error :P
<niemeyer> fwereade: re. your previous question
<niemeyer> fwereade: I feel half hearted about it..
<niemeyer> fwereade: :)
<fwereade> RoAkSoAx: cool :) what did I do?
<RoAkSoAx> fwereade: seems to be something wrong with _KSMETA_LATE_COMMAND_TEMPLATE formatting
<RoAkSoAx> fwereade: I did this: _KSMETA_LATE_COMMAND_TEMPLATE = ("in-target sh -c 'f=$1; shift; echo $0 | base64 --decode | gunzip > $f && chmod u+x $f && $f $*' %s /root/late-command") and it works
<RoAkSoAx> as it should
<niemeyer> fwereade: The branch had quite a few points that made it look suboptimal for merging
<fwereade> the launch-machine one?
<niemeyer> fwereade: If you feel it's in a good state, I'll be happy to review
<fwereade> niemeyer: sorry, which branch are you talking about?
<niemeyer> fwereade: It's about your question:
<niemeyer> <fwereade> niemeyer: would you object to my making the machine_data/format_cloud_init-args change in a separate branch
<RoAkSoAx> fwereade: but now cloud-init is failing!! yay!!
<fwereade> niemeyer: cobbler-launch-machine is still definitely WIP
<fwereade> RoAkSoAx: yay! progress!
<niemeyer> fwereade: Ok.. so keep pushing
<fwereade> niemeyer: but I feel it's a little heavy already, and I thought it might be sensible to keep the machine_data/cloud_init changes to their own branch
<niemeyer> fwereade: My impression of the branch on the review was that it needed fixing.. when you feel it's fixed, please push for review again, and mention what's still open or not, and we can take a decision
<fwereade> niemeyer: ok, sounds good. thank you :)
<niemeyer> fwereade: No worries
<RoAkSoAx> fwereade: yes seems to be the formatting of _KSMETA_LATE_COMMAND_TEMPLATE
<fwereade> RoAkSoAx: excellent (well, you know what I mean ;))
<RoAkSoAx> fwereade: apply this patch please: http://paste.ubuntu.com/658080/
<fwereade> RoAkSoAx: thank you, that looks eminently sensible
<niemeyer> RoAkSoAx: Huh
<RoAkSoAx> niemeyer: ?
<niemeyer> RoAkSoAx: Trying to understand why we're doing all of that
<fwereade> RoAkSoAx: safely saved to be applied tomorrow, I'm afraid; I have to dash again for now (*might* make it back later)
<RoAkSoAx> fwereade: if you could apply it to shadow-trunk in the meantime that would be great
<niemeyer> RoAkSoAx: This feels like a very complex way to say echo %s | base64 --decode | sh -
<RoAkSoAx> niemeyer: that's what we were doing before, but fwereade missed those bits :)
<niemeyer> I won't complain too much, but I'd probably shrink it down a bit
<RoAkSoAx> niemeyer: yes, but as discussed in MIA, i preferred to keep it cause of the creation of late-command for debugging purposes
<RoAkSoAx> niemeyer: as was also recommended by smoser 
<smoser> niemeyer, i dont think that it being complex is necessarily a good reason to not like it.
<smoser> your suggested replacement needs a few changes to actually work.
<smoser> base64 is not necessarily available outside target, you need 'in-target' somewhere.
<smoser> the real benefit of doing it the way i have it is that it is not tied to the output of base64 being executed by 'sh'
<smoser> also, it handles passing parameters to the command.
<niemeyer> smoser: Being unnecessarily complex is the reason I don't like it..
<smoser> not unnecessarily complex to address those 2 features.
<smoser> oh, and gzipped (yours would need gzip in the pipeline)
<niemeyer> smoser: What are the features you'd like to see there?
<smoser> gzip compression, ability to see the script on disk after you've used it (debugging), ability to pass arguments to the script [correctly quoted], ability to execute non 'sh' content without changing that ks macro
<niemeyer> smoser: in-target /bin/sh -c "echo %s | base64 --decode | gunzip > $1 && chmod +x $1 && $1" /root/late-command
<niemeyer> smoser: I'd drop gzip.. but that's another topic
<smoser> i don't know. i'd have to play with it.
<smoser> you can't use double quotes reliably
<smoser> due to the parsing of ks_args
<smoser> but i admit that that is fuzzy in my recollection
<smoser> the other thing that isn't present in yours is the ability to pass arguments to the command you're executing
<smoser> (which could easily be other ks_args)
<smoser> which is what the shift; and $* accomplish
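The payload side of that template can be sketched in Python; the script content below is invented, but the encoding matches the decode pipeline quoted above (`base64 --decode | gunzip`).

```python
import base64
import gzip

# encode: gzip first, then base64, so the receiving shell can do
#   echo $0 | base64 --decode | gunzip > $f
script = b"#!/bin/sh\necho late-command ran\n"  # invented example script
payload = base64.b64encode(gzip.compress(script)).decode("ascii")

# what the decode side of the pipeline recovers:
recovered = gzip.decompress(base64.b64decode(payload))
assert recovered == script
print(len(payload), "base64 chars")
```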
<smoser> but i think this conversation is all very bikeshed. you have something that works, tested as works, and you're just trying to change it so you can spend more time testing.
<smoser> comparing what you are suggesting and what i have, the only difference i see is:
<smoser>  * use of double quotes (which i think is causing the problem)
<smoser>  * use of a variable name
<smoser> anyway
<_mup_> ensemble/states-with-principals r306 committed by kapil.thangavelu@canonical.com
<_mup_> enable group members to be added via the domain security object to allow for disabling security to increase test speed.
<smoser> oh, niemeyer and yours above has a bug.
<smoser> you need to use '$0' in that case, as '/root/late-command' is the 0th argument, not the first.
<niemeyer> smoser: That's because you're using shift within a one-liner
<niemeyer> smoser: Which is strange
<smoser> i dont think so.
<smoser> but...
<smoser> but your $1 is wrong
<niemeyer> smoser: Yes.. that's what shift does
<niemeyer> $0 is the executable name by default
<niemeyer> % /bin/sh -c "echo \$0"
<niemeyer> /bin/sh
<smoser> $ sh -c 'echo 0=$0; echo 1=$1' "hi mom" "arg1"
<smoser> 0=hi mom
<smoser> 1=arg1
<niemeyer> smoser: Crazy.. you're right.. /bin/sh changes depending on args
<smoser> of course i'm right
<smoser> :)
<smoser> it is confusing. i agree.
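smoser's demonstration is easy to reproduce, here via Python's subprocess (assuming a POSIX sh on PATH):

```python
import subprocess

# with `sh -c CMD arg...`, the first trailing arg becomes $0 and the
# next becomes $1 -- exactly what smoser showed above
out = subprocess.run(
    ["sh", "-c", "echo 0=$0; echo 1=$1", "hi mom", "arg1"],
    capture_output=True, text=True, check=True,
).stdout
print(out, end="")  # 0=hi mom / 1=arg1
```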
<niemeyer> smoser++
<niemeyer> ;)
<niemeyer> smoser: Still, you see the point?  It feels like there's more being done than necessary
<niemeyer> smoser: That said, I won't complain too much.. this is minor, and if you feel strong about a particular approach, let's take it
<smoser> so.. the thing i like about the way i did it is that you dont muck with the internals of the 'sh' command.
<smoser> you're just modifying what arguments you pass to it
<smoser> (ie, the '%s' is outside the single ticks)
<smoser> and you can pass additional arguments also and they'll get passed onto the content of the script.
<smoser> but either way can be made to work.
<niemeyer> smoser: Yeah, it's cool
<niemeyer> smoser: We can keep as it is if you're more comfortable that way
<smoser> well if it works i'd say keep it.
<smoser> if it breaks, fix it
<smoser> :)
<niemeyer> smoser: From the messages I read above, it wasn't working either way 
<niemeyer> smoser: Which is why I suggested a simpler approach
<smoser> RoAkSoAx said it works for him
<RoAkSoAx> smoser: niemeyer it does work for me
<RoAkSoAx> niemeyer: the problem is fixed, the command was incorrectly copied from the bootstrap-cobbler branch to the shadow-trunk branch we are working on top of
<niemeyer> smoser: ^
<niemeyer> All good..
<hazmat> sigh.. getting security integrated into the tests is proving painful
 * hazmat wonders if all of the core dev team have ssds
<hazmat> yeah.. several of the status tests time out on a rotating hd, definitely don't on ssd
<hazmat> hmm
<hazmat> or maybe this is more integration fun
 * robbiew prepares PO for ssds
<robbiew> :P
<hazmat> ftw ;-)
<robbiew> hazmat: you have an x220, right?
<hazmat> robbiew, i do.. haven't picked up a 7mm ssd yet or the msata ssd.. been putting my piggybank towards a nas ;-).. the x220 support's been somewhat problematic atm, kernel panic on sleep.. haven't really dug into it  much
<robbiew> yeah....I'm running 11.10 and the power usage is shit
 * robbiew goes to poke manjo...he's our "inside guy" with Lenovo ;)
<hazmat> outside of that.. its really nice hardware, the screen is great.. my battery life is decent (sadly with a nine cell).. i'm hoping to pickup the battery slice if i have to do any long trips
<robbiew> yeah...I got the touchscreen version...not bad...but the utouch stuff doesn't work
<robbiew> I'm going to pester the appropriate folks at next UDS though ;)
<niemeyer> I'm starting to feel happier about the namespaces concept in the repo spec
<niemeyer> I'll try to push this for an early review sometime tomorrow
<niemeyer> I'm stepping out for the night, I think :)
<niemeyer> Have a good time all
<robbiew> look out!
<robbiew> lol
<_mup_> ensemble/states-with-principals r307 committed by kapil.thangavelu@canonical.com
<_mup_> encapsulate removal of group members into security adapter, utilize decorator for tests that need security (when the default is off)
<hazmat> i wish profiling were more useful with twisted
#ubuntu-ensemble 2011-08-04
<_mup_> ensemble/states-with-principals r308 committed by kapil.thangavelu@canonical.com
<_mup_> agent tear down relies on base class for zk tree cleanup, also increase status test timeout for running with security
 * hazmat yawns
<_mup_> Bug #820892 was filed: machine_data is a formless blob <Ensemble:New> < https://launchpad.net/bugs/820892 >
<doitdistributed> Hi there
<doitdistributed> does anybody know if ensemble only works with AWS so far? Or is it possible to use it with other providers as well
<fwereade> doitdistributed: we're working on orchestra support, to use it on bare metal with cobbler
<fwereade> doitdistributed: it's not ready for primetime yet, but we have a sprint next week and we hope to make some good progress there
<doitdistributed> cool
<fwereade> doitdistributed: right now though, if you want to play with it, AWS is the way to go
<doitdistributed> yes I'm already done ;-)
<doitdistributed> I like it and as well the architectural concept
<fwereade> doitdistributed: cool, I hope it's working nicely for you :)
<fwereade> doitdistributed: awesome :D
<doitdistributed> I try to figure out if it could be part of my Ph.D. thesis as well, but for that the multicloud approach must be supported ;-)
<hazmat> doitdistributed, the ec2 api also works against openstack and eucalyptus for private cloud providers
<doitdistributed> I think I will give an introduction about ensemble in the next AWS User Group meetup I'm hosting
<hazmat> doitdistributed, at the moment we don't support cross cloud communications 
<doitdistributed> @hamat cool I will try it with EUCA
<hazmat> for a single environment
<doitdistributed> and openstack
<hazmat> doitdistributed, cool, let us know if you have any issues
<fwereade> hazmat: ha, yes, the API makes me think of them as basically AWS ;)
<doitdistributed> I will give you feedback, as well as what the AWS Users think.
<fwereade> doitdistributed: thanks very much!
<doitdistributed> no problem ;-) 
<doitdistributed> Hey, are you interested in an article about Ubuntu Cloud or ensemble in our Cloud Computing magazine? 
<doitdistributed> here are the links http://issuu.com/symposiajournal
<doitdistributed> http://symposiajournal.de/
<doitdistributed> I think it would fit well
<fwereade> doitdistributed: I bet we would be :)
<fwereade> doitdistributed: unhappily, my german was not very good 10 years ago, and is now almost gone, so I'm making only very slight sense of those pages ;)
<doitdistributed> ;-) No problem 
<doitdistributed> maybe we could refresh it sometimes with a beer or so
<fwereade> doitdistributed: I'd be delighted -- but I don't suppose you're in Malta? ;)
<doitdistributed> not daily ;-)
<jcastro> ok everyone, here's the ensemble report for the week, I think I've covered most everything this week: http://pad.ubuntu.com/ensemble-report
<jcastro> any feedback would be appreciated
<fwereade> doitdistributed: yeah, it's not exactly on the beaten track ;)
<doitdistributed> But mayby there  http://cloud-devcon.com 
<fwereade> doitdistributed: hey, that looks cool
<doitdistributed> Yeah I'm one of the organizers
<doitdistributed> it will be a great event, with cool people, a big party and all inclusive
<fwereade> doitdistributed: awesome, I will try to figure out if I can make it, all I can remember is that my travel plans are confused for the next few months ;)
<fwereade> joining Canonical has been a bit of a shock to the system on that front ;)
<doitdistributed> Would be great!!! And we could have a beer or two ;-)
<fwereade> doitdistributed: absolutely :)
<doitdistributed> Do you think the price is ok? I mean we are new to the "business" and we don't aim to make a profit ;-) We only want cool events from the community to the community
<niemeyer> Hello Ensemblers!
<kirkland> niemeyer: morning
<niemeyer> kirkland: Yo
<RoAkSoAx> fwereade: howdy!
<fwereade> RoAkSoAx: heyhey!
<RoAkSoAx> fwereade: how's it going today?
<fwereade> niemeyer: and hey :)
<niemeyer> fwereade, RoAkSoAx: Hey lads!
<fwereade> RoAkSoAx: good, I think, the structure for the shared shutdown stuff is on the tip of my tongue (fingers?)
<fwereade> RoAkSoAx: how about you?
<RoAkSoAx> fwereade: go for it if your finger tips are itching :P
<fwereade> RoAkSoAx: I'm back in thinking mode, but I've got some stuff that looks very similar, it's just a matter of drawing the boundaries in the right place, and hoping it doesn't end up too baroque
<RoAkSoAx> fwereade: well the shutdown stuff should be pretty simple
<RoAkSoAx> fwereade: btw.. this would be the connect.py http://paste.ubuntu.com/658669/
<fwereade> RoAkSoAx: it is, the tricky bit is making that and the ec2 stuff work in the same way so we don't end up missing features on one side or the other
<RoAkSoAx> fwereade: i'll take a look at it later today if you want and maybe I can contribute some more ideas on how to do it
<fwereade> RoAkSoAx: thanks :)
<RoAkSoAx> fwereade: anyways, I have one question about the connect stuff: _wait_for_initialization does client.exists_and_watch("/initialized"); where's that at? is that done by the zookeeper?
<fwereade> RoAkSoAx: I've got the impression that your additions to cobbler.py are pretty close to the ec2 interface, which is really handy
<fwereade> RoAkSoAx: my understanding is that when we initialise the admin, the last thing it does is create /initialized
<fwereade> RoAkSoAx: and so when that's there we know the system is ready
<RoAkSoAx> fwereade: ok, so that does not need to have any changes then
<RoAkSoAx> fwereade: as it should work exactly the same way as with ec2
<fwereade> RoAkSoAx: yep, exactly
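The /initialized handshake described above can be sketched in miniature. This is a hedged illustration, not Ensemble's actual code: FakeZooKeeper is a stand-in for a real ZooKeeper session, and every name other than exists_and_watch and the /initialized path is an assumption.

```python
# Toy stand-in for a ZooKeeper session; only the watch mechanics matter here.
class FakeZooKeeper:
    def __init__(self):
        self._nodes = set()
        self._watches = {}

    def create(self, path):
        """Create a node and fire any watches registered against it."""
        self._nodes.add(path)
        for callback in self._watches.pop(path, []):
            callback(path)

    def exists_and_watch(self, path, callback):
        """Return True if path exists now; otherwise register a one-shot watch."""
        if path in self._nodes:
            return True
        self._watches.setdefault(path, []).append(callback)
        return False


def wait_for_initialization(client, ready, path="/initialized"):
    """Record readiness once the admin has created /initialized."""
    if client.exists_and_watch(path, lambda _: ready.append(path)):
        ready.append(path)
```

Creating "/initialized" after the wait has started fires the watch, which matches what fwereade describes: the node's existence is itself the "system is ready" signal, so the orchestra provider needs no changes here.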
<RoAkSoAx> fwereade: now, the only thing is that we cannot use _check_machine_age as it seems that machine.launch_time comes from ec2, correct?
<fwereade> RoAkSoAx: yeah, that's right
<RoAkSoAx> fwereade: the only way we could provide that information would be to pass it the time at which the command was executed and was *supposed* to launch a machine with orchestra
<RoAkSoAx> fwereade: which might be suboptimal
<fwereade> RoAkSoAx: yeah, it doesn't feel right
<RoAkSoAx> fwereade: alright, I guess i'll take note of this one too 
<RoAkSoAx> fwereade: I think that in the future we'll have to make orchestra server (or cobbler) a bit smarter xD
<fwereade> RoAkSoAx: yeah definitely ;)
<RoAkSoAx> fwereade: ok, so iterate.py, accessor.py and connect.py should be done
<RoAkSoAx> fwereade: however, all of that needs to be tested by deploying a machine
<RoAkSoAx> fwereade: which is something i'm gonna work on next
<RoAkSoAx> fwereade: after that I'll provide you with one branch for you to review, and start making smaller branches
<RoAkSoAx> does that sound good?
<fwereade> RoAkSoAx: cool, shadow-trunk should be up to date (including the stuff you wanted yesterday)
<fwereade> RoAkSoAx: sounds like a plan
<RoAkSoAx> fwereade: yeah just pulled from there
<fwereade> niemeyer: how set in stone is the MachineProvider interface? I'd quite like to change shutdown and shutdown_machine
<niemeyer> fwereade: Nothing is set on stone.. what would you like to achieve there?
<fwereade> niemeyer: I'm not totally sure yet... but it feels like shutdown_machines, taking a list of machines, is going to make everything else fit better
<fwereade> niemeyer: plain old shutdown would actually be unchanged, it's just get all machines and pass them to shutdown_machines
<niemeyer> fwereade: Sounds quite reasonable.. how do we have to change MachineProvider for that?
<fwereade> niemeyer: those are part of MachineProvider's interface
<fwereade> niemeyer: I'll experiment, hopefully I'll have something to show soon
<niemeyer> fwereade: Wait.. duh, ok
<niemeyer> fwereade: Yeah, happy with that
<niemeyer> fwereade: When you said MachineProvider, in my mind I thought about the Machine interface itself
<niemeyer> My bad
<fwereade> niemeyer: no worries :)
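The refactoring fwereade proposes above can be sketched as follows. This is a hedged toy, not the real MachineProvider: only the shutdown/shutdown_machines split comes from the conversation, and the other method names and storage are assumptions.

```python
class MachineProvider:
    """Toy provider illustrating shutdown delegating to shutdown_machines."""

    def __init__(self, machines):
        self._machines = set(machines)

    def get_machines(self):
        """Return all known machines (sorted, for deterministic output)."""
        return sorted(self._machines)

    def shutdown_machines(self, machines):
        """Terminate a specific list of machines; return those actually stopped."""
        stopped = [m for m in machines if m in self._machines]
        self._machines.difference_update(stopped)
        return stopped

    def shutdown(self):
        """Plain old shutdown: fetch all machines and pass them along."""
        return self.shutdown_machines(self.get_machines())
```

The point of the shape is the one made in the chat: once shutdown_machines takes a list, full shutdown is just "get all machines and pass them to shutdown_machines", and both EC2 and orchestra providers can share the structure.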
<_mup_> ensemble/states-with-principals r309 committed by kapil.thangavelu@canonical.com
<_mup_> add another security check around removing units, fix an old service test that wasn't running (missing inlineCallbacks)
<_mup_> ensemble/states-with-principals r310 committed by kapil.thangavelu@canonical.com
<_mup_> subsume some of the module level helper functions into the security adapter class.
<hazmat> bcsaller, niemeyer, fwereade team meeting in a few minutes
<bcsaller> yep :)
<niemeyer> Ouch.. forgot to have lunch
<fwereade> is that google+ then?
<niemeyer> Okay, is it time?
 * niemeyer starts a hang out
<niemeyer> fwereade: Just invited you
<niemeyer> bcsaller: and you
<niemeyer> hazmat: and you
<fwereade> later all
<_mup_> ensemble/states-with-principals r311 committed by kapil.thangavelu@canonical.com
<_mup_> move otp consumption onto the security adapter.
<niemeyer> Lunch time.. biab
<_mup_> ensemble/states-with-principals r312 committed by kapil.thangavelu@canonical.com
<_mup_> remove an unused group api method, move some otp helper api implementation from its class to the security adapter.
<_mup_> ensemble/states-with-principals r313 committed by kapil.thangavelu@canonical.com
<_mup_> additional test for ACL grants multiple times against the same node version.
<_mup_> ensemble/states-with-principals r314 committed by kapil.thangavelu@canonical.com
<_mup_> add an environment variable to allow for testing the entire system with security enabled, security integration with relation state creation.
<_mup_> ensemble/trunk-merge r274 committed by kapil.thangavelu@canonical.com
<_mup_> merge trunk
<niemeyer> Man.. open source is truly awesome..
<niemeyer> I can't emphasize that enough
<niemeyer> feedbooks.com is using mgo to serve all of their book covers, in production, today.
<niemeyer> I feel more confident our repository will work fine with it now.. ;-)
<_mup_> Bug #821074 was filed: security integration, sequence nodes or topology dependencies require explicit acl application, otp principal integration with state creation <Ensemble:In Progress by hazmat> < https://launchpad.net/bugs/821074 >
<niemeyer> hazmat: Ugh.. 2.5k lines of diff
<hazmat> hmm
<hazmat> niemeyer, ugh.. i forgot the prerequisite branch spec
<niemeyer> Phew
<hazmat> niemeyer, fixed, diffstat shows just under 700 lines
<niemeyer> hazmat: Phew, that's awesome, thanks
<hazmat> niemeyer, that should be the last/only large branch for the security work
<RoAkSoAx> niemeyer: are there any more orchestra-related branches yet to be merged to trunk?
<niemeyer> RoAkSoAx: There are
<_mup_> ensemble/security-policy-rules r285 committed by kapil.thangavelu@canonical.com
<_mup_> evaporate branch, to be resurrected later in the pipeline.
<niemeyer> RoAkSoAx: This should always provide a snapshot of what is going on: http://ensemble.ubuntu.com/kanban/dublin.html 
<RoAkSoAx> niemeyer: ok, cool I was hoping to see ensemble deploying from trunk by tomorrow :)
<niemeyer> RoAkSoAx: If you see branches from fwereade in the center column or in In Progress, there's likely something to be merged
<niemeyer> RoAkSoAx: Sweet!
<niemeyer> RoAkSoAx: I'm doing some spec writing ATM, but I hope to clean up that review queue by today still, so that William has it ready by the time he wakes up
<RoAkSoAx> niemeyer: cool! I will submit my work to him for tomorrow when he wakes up, to make sure it is according to standards, and then I guess those branches will be proposed for merging
<niemeyer> RoAkSoAx: Woot
<RoAkSoAx> niemeyer: cause I have it deploying already from shadow-trunk
<RoAkSoAx> which is the branch I'm working on top of
<niemeyer> RoAkSoAx: That's awesome news indeed!
<niemeyer> RoAkSoAx: How's the Cobbler side of things?
<RoAkSoAx> niemeyer: it's good for now. There are some things that I'd like to discuss at the sprint to get the interaction improved
<niemeyer> RoAkSoAx: Cool
<niemeyer> RoAkSoAx: Do we have existing hacks still to be merged, or is that trunk interaction already working with stock Cobbler from Ubuntu?
<RoAkSoAx> niemeyer: it is working with stock cobbler, but there are some changes in preseeds and stuff like that yet to be merged into the orchestra meta-package (probably orchestra-ensemble)
<RoAkSoAx> niemeyer: that will land next week
<RoAkSoAx> niemeyer: and autoconfiguration of a webdav server as well
<niemeyer> RoAkSoAx: Aha, gotcha
<niemeyer> RoAkSoAx: You rock dude
<RoAkSoAx> all of that will be done in orchestra
<RoAkSoAx> niemeyer: heh... thanks, but SpamapS opened the door and fwereade did an awesome job with the refactoring
<niemeyer> RoAkSoAx: Yeah, other folks rock too :-)  We wouldn't be where we are without all of these fitting together.
<niemeyer> Awesome team work
<niemeyer> smoser should be in that list too
<niemeyer> SpamapS, bcsaller, hazmat, RoAkSoAx, m_3, jcastro, all: Quick bikeshed
<niemeyer> We need a prefix for formulas in the repo..
<jcastro> like what, a designation that it's .... ?
<niemeyer> I don't want to use lp: to avoid confusion between the formula namespaces and the branches
<RoAkSoAx> niemeyer: indeed
<niemeyer> We need something like foo:oneiric/formula
<niemeyer> What's "foo"?
<SpamapS> ensemble:oneiric/formula ?
<niemeyer> ensemble deploy ensemble:... doesn't look nice
<jcastro> I was just thinking form, frm, or just formula
<niemeyer> frm.. hmmm
<SpamapS> being palindromic.. I like recursion
<jcastro> because that's what you're thinking in your head
<niemeyer> frm is nice
<jcastro> ensemble deploy $formula from $here
<SpamapS> this is the prefix that says "use the default repositories" ?
<niemeyer> SpamapS: No, this is the prefix that says "this is about the repository"
<SpamapS> ah
<niemeyer> repo: might also apply
<niemeyer> "frm".. "repo"...
<niemeyer> What else?
<jcastro> source?
<RoAkSoAx> efr
<SpamapS> so if its not there.. what do you get?
<niemeyer> efr..
<RoAkSoAx> efr =ensemble formula repository
<hazmat> niemeyer, ensemble deploy ubuntu:xyz ?
<RoAkSoAx> :)
<niemeyer> SpamapS: We're sorting the naming convention only.. the spec with details is coming
<niemeyer> hazmat: ubuntu..
<niemeyer> "ubuntu", "frm", "repo", "efr"..
<RoAkSoAx> niemeyer: i'd agree with hazmat too
<niemeyer> Anything else?
<hazmat> although perhaps that doesn't capture the distro.. but it could be a nice symbolic name for the default distro's latest, qualified with ubuntu:oneiric, ubuntu:natty etc
<SpamapS> niemeyer: mmk. Given that limited context, repo seems generic and at the same time memorable enough.
<jcastro> "forma" is latin for form.
<SpamapS> ubuntu: has special meaning in bzr now too.. its an alias for lp:ubuntu/
<niemeyer> "store"
<niemeyer> store:oneiric/wordpress
<niemeyer> "ubuntu", "frm", "repo", "efr", "store"..
<niemeyer> jcastro: Feels too close to formula.. reads like a typo almost
<niemeyer> "fr", along the lines of RoAkSoAx suggestion (Formula Repository)
<jcastro> repo or store seems easiest to remember, we use "ubuntu" everywhere, so that might be confusing, frm and efr are just acronyms and would be harder to remember. 
<SpamapS> have to step out for a bit, but repo: is, I think, my vote
<niemeyer> "ubuntu", "frm", "repo", "efr", "fr", "store"..
<niemeyer> SpamapS: Cool, thank you
<niemeyer> I'll likely send this to the list
 * jcastro agrees with SpamapS for "repo", straight forward, easy to remember
<bcsaller> with a fallback to the environment?
<bcsaller> re: the distro
<niemeyer> bcsaller: Hm?
<bcsaller> I was thinking about what hazmat was saying about ubuntu:natty or whatever, I don't know that you'd want that in the repo naming so much as in the environment you're deploying to 
<bcsaller> but using it in the name of the package like in the email is fine
<bcsaller> when is it not this magic token?
<bcsaller> wouldn't that be the only time we'd need it?
<bcsaller> _:formula where _ is only needed when its not the default
<bcsaller> surprised no one listed principia: as well
<niemeyer> bcsaller: This is just about the prefix really.. the detailed semantics will be posted for review
<jcastro> I thought that name was going away
<bcsaller> jcastro: I think you're right but I'm not sure
<jcastro> nor me
<niemeyer> Right, principia is being debated.. principia may be finita
<niemeyer> None of us is sure, actually
<bcsaller> niemeyer: I only mean we don't need a prefix for the default which is the place we are trying to name, its only when its not that default that you need a prefix, no?
<niemeyer> It's being debated by Elfos..
<niemeyer> bcsaller: Maybe..
<niemeyer> bcsaller: That's an interesting point
<bcsaller> unless I misunderstand what the prefix is, I thought it was the mapping between a namespace of packages and the place to get them from, where you don't name the default 
<niemeyer> bcsaller: ensemble deploy --repository=/tmp/oneiric foo
<niemeyer> bcsaller: Where does foo come from?
<bcsaller> what's now principia (or the default repo for your environment)
<bcsaller> but even in the case in ()'s its only named if you step away from the default
<niemeyer> bcsaller: Ok.. another issue..
<niemeyer> bcsaller: ensemble deploy ~bcsaller/foo
<niemeyer> bcsaller: What does that do?
<bcsaller> throw an error, there is no package named that in the namespace
<bcsaller> --repository is needed to specify a local repo path
<niemeyer> bcsaller: Yeah, but I was actually talking about a remote formula
<niemeyer> bcsaller: What does this do:  ensemble deploy repo:~bcsaller/foo
<niemeyer> ?
<niemeyer> bcsaller: Quite obvious, right?
<niemeyer> bcsaller: So you make an interesting point.. maybe we can get away without the prefix.. but there are a few problems which the prefix solves that I don't have a good answer for myself
<bcsaller> no? I think it would be namespace:package, where that's a reference to a name mapping somewhere else, almost like apt sources
<bcsaller> bcsaller lp:~bcsaller/ensemble in a config file
<niemeyer> bcsaller: No?  You don't get the idea that repo:~bcsaller/wordpress is bcsaller's wordpress in the repo?
<bcsaller> then deploy bcsaller:foo
<bcsaller> to me that assumes too much lp, but maybe I'm wrong there
<niemeyer> Maybe.. hmm
<bcsaller> because other SCM systems will still be mapped into lp on publish it could work, but I don't know if it's the best path fwd
<niemeyer> I actually like the idea of oneiric/wordpress etc
<_mup_> ensemble/states-with-principals r316 committed by kapil.thangavelu@canonical.com
<_mup_> merge conflict from removal of security-policy-rules
<niemeyer> I don't like joe:oneiric/wordpress too much, though
<niemeyer> It's conflicting..
<bcsaller> having indirection there like I suggested in the namespace would create its own set of issues though, mostly that following instructions would be hard if people had different prefix names
<bcsaller> yes
<bcsaller> I think what I just said kills the idea, its less repeatable 
<niemeyer> Yeah.. another issue is that everyone here knows what "bcsaller" is
<niemeyer> What about
<niemeyer> obsoleted:oneiric/wordpress
<niemeyer> That's not nice
<niemeyer> store:~obsoleted/oneiric/wordpress
<niemeyer> That's more obvious
<bcsaller> yeah, I think the issue that repo: implies 1 to me was the issue
<bcsaller> whats the company local repo called that overlays the public one?
<niemeyer> bcsaller: In which sense (implies 1)?
<niemeyer> Hmm
<bcsaller> repo always refers to lp, no?
<niemeyer> bcsaller: No.. it refers to our repository system
<niemeyer> bcsaller: It's not necessarily a 1-1 translation
<niemeyer> bcsaller: Hmm
<bcsaller> directly exposing lp names, but ok, I'll give you that 
<niemeyer> bcsaller: Maybe we can break the proposal in half
<niemeyer> E.g.
<_mup_> ensemble/security-policy-with-topology r317 committed by kapil.thangavelu@canonical.com
<niemeyer> oneiric/wordpress, but personal:bcsaller/oneiric/wordpress
<_mup_> policies with topologies
<niemeyer> bcsaller: No, it's not directly exposing _lp_ names
<niemeyer> bcsaller: It's not a 1-1 translation
<niemeyer> bcsaller: Multiple repo names can refer to the same branch, for instance
<niemeyer> bcsaller: The precise semantics will be in the spec, and will be up for debate
<bcsaller> niemeyer: ok, I can see how that works, but what about the company local repo? It seems like there will be 'main', some company's collection, and then maybe a developer's private repo on top of that
<_mup_> Bug #821109 was filed: security policies need to have an accessor for topology and utilized with a modified topology <Ensemble:New> < https://launchpad.net/bugs/821109 >
<bcsaller> it could be that those are merged under the repo: prefix by configuring access to that service though. just thinking out loud
<niemeyer> bcsaller: Yeah, the companies collection can be put on a custom namespace
<niemeyer> bcsaller: Maybe even referred to by URL
<bcsaller> niemeyer: I know you have a plan for this, I don't mean to make you explain it all here 
<niemeyer> bcsaller: Same way Bazaar enables lp: but full blown URLs too
<niemeyer> bcsaller: Well, maybe I don't have a good plan yet..
<bcsaller> heh
<niemeyer> bcsaller: What's your specific concern regarding company repos?
<niemeyer> bcsaller: Seriously, I'm not being facetious
<bcsaller> I just want it to be clear and convenient to specify that you want a more custom version from another repo than the one in main 
<niemeyer> bcsaller: Just saying I'd like to debate semantics in the context of the spec, since there are more explanations written down, but still interested on your idea of why company prefixes are problematic
<niemeyer> bcsaller: Cool, sounds good
<niemeyer> bcsaller: Ok, thanks a lot
<niemeyer> bcsaller: I think there's a very good middle path there that we can follow based on your idea of avoiding a prefix when feasible
<niemeyer> bcsaller: Will try to put it down in the spec
<bcsaller> niemeyer: happy to talk about it next week if you want
<niemeyer> bcsaller: Absolutely
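The name syntax being bikeshedded above ("repo:oneiric/wordpress", with bcsaller's point that the default needs no prefix) can be sketched as a tiny parser. This is purely illustrative: the grammar, the default series, and the function name are assumptions, not the spec niemeyer is writing, and user namespaces like ~bcsaller are not modelled.

```python
def parse_formula_name(name, default_series="oneiric"):
    """Split a formula reference into (prefix, series, formula).

    A missing prefix means "the default repository", per bcsaller's
    suggestion; a missing series falls back to default_series.
    """
    prefix = None
    if ":" in name:
        prefix, name = name.split(":", 1)
    if "/" in name:
        series, formula = name.split("/", 1)
    else:
        series, formula = default_series, name
    return prefix, series, formula
```

So "repo:oneiric/wordpress" and a bare "wordpress" can both resolve, with the prefix only required when stepping away from the default.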
<_mup_> ensemble/security-policy-rules-redux r317 committed by kapil.thangavelu@canonical.com
<_mup_> resurrect security-policy-rules
<_mup_> ensemble/security-policy-rules-redux r318 committed by kapil.thangavelu@canonical.com
<_mup_> bring back some additional parts of security-policy-rules
<_mup_> ensemble/security-policy-rules-redux r319 committed by kapil.thangavelu@canonical.com
<_mup_> yank accidental add of merge file
<niemeyer> bcsaller: Posted an answer in the list about the intermediate plan.. please let me know if it's along the lines of what you had in mind
<niemeyer> (and solves the issues you've foreseen)
<bcsaller> checking
<bcsaller> niemeyer: that looks good
<niemeyer> I'm tempted to start s/repository/store/ too across the board.. such an easier word to speak about :)
<_mup_> ensemble/security-agents-with-identity r301 committed by kapil.thangavelu@canonical.com
<_mup_> provisioning agents launching machines with machine agent identities/principals
<niemeyer> I feel like I'm writing a book..
<niemeyer> Maybe I should start sending individual sections for review
<robbiew> jcastro: ping
<jcastro> robbiew: pong
<_mup_> ensemble/security-policy-rules-redux r320 committed by kapil.thangavelu@canonical.com
<_mup_> resurrect policy w/ topology, got lost in the merge.
<_mup_> ensemble/security-policy-rules-redux r321 committed by kapil.thangavelu@canonical.com
<_mup_> update security rules to latest api.
<niemeyer> Ok, a couple of pieces pushed
<_mup_> ensemble/security-policy-rules-redux r322 committed by kapil.thangavelu@canonical.com
<_mup_> merge and resolve conflict.
<niemeyer> We need a review..
<niemeyer> Anyone up for it: https://code.launchpad.net/~fwereade/ensemble/webdav-storage
<niemeyer> https://code.launchpad.net/~fwereade/ensemble/webdav-storage/+merge/69453
<niemeyer> That's the actual URL, actually
<_mup_> ensemble/security-connection r289 committed by kapil.thangavelu@canonical.com
<_mup_> address review comments [1][2]
#ubuntu-ensemble 2011-08-05
<fwereade> niemeyer: heyhey
<niemeyer> fwereade!
<niemeyer> fwereade: What's up?  Ready to rock?
<fwereade> niemeyer: yep, looking forward to it
<fwereade> niemeyer: bit miffed my fancy airport pass hasn't arrived yet, I'll be travelling nearly 24h :/
<fwereade> niemeyer: but still :)
<niemeyer> fwereade: Welcome to my life
<fwereade> niemeyer: heh :)
<niemeyer> fwereade: 3h in a bus, just to get started..
<fwereade> niemeyer: ouch!
<niemeyer> fwereade: But I'm happy to do it.. just to enjoy the awesomeness of everyone's presence
<fwereade> niemeyer: didn't picture that I must admit
<niemeyer> (honestly)
<fwereade> niemeyer: oh, likewise :)
<fwereade> niemeyer: btw, can you bear another rambling chat about appropriate levels of testing? :p
<niemeyer> fwereade: Absolutely
<fwereade> niemeyer: the starting point is: https://code.launchpad.net/~fwereade/ensemble/cobbler-find-zookeepers/+merge/69907
<fwereade> niemeyer: basically, I'm not sure that directly testing CobblerClient is the right thing to do
<niemeyer> fwereade: Do you agree code should not stay around untested?
<fwereade> niemeyer: definitely
<fwereade> niemeyer: but all that is tested in the context of the tests for the public interactions with the provider itself
<niemeyer> fwereade: So it should be covered.. testing a public interface as a side-effect of other tests is better avoided if we want to maintain coverage over time
<fwereade> niemeyer: ah, so you consider CobblerClient to be a public interface?
<niemeyer> fwereade: There's logic inside that one method, which should be explored in every angle directly, because it's a public interface whose clients can change independently
<niemeyer> fwereade: Well, is it a private method?
<niemeyer> fwereade: It surely looks like a well defined encapsulation
<niemeyer> fwereade: CobblerClient.describe_system
<fwereade> niemeyer: no, but my impression was that most of the accessible stuff in the provider packages was... accidentally public, if you like
<niemeyer> fwereade: Feels pretty unity :)
<niemeyer> fwereade: It is internally public precisely because we want to organize it properly and test it adequately
<niemeyer> fwereade: It is not package public
<niemeyer> fwereade: In the sense that other packages should not interact with the internals
<niemeyer> fwereade: There's an analogy I provide every once in a while when this issue comes up
<fwereade> niemeyer: that does make sense
<niemeyer> fwereade: We have a gradient
<fwereade> niemeyer: cool, go on
<niemeyer> fwereade: and gradients are often well understood by exaggeration
<niemeyer> fwereade: On one end of the gradient, we test every line with an individual test.. (ugh)
<niemeyer> fwereade: On the other end of the spectrum, though, we could claim that only the public interface of the _project_ needs testing
<niemeyer> fwereade: So we test every possible interaction through the command line (!)
<niemeyer> fwereade: Why is that latter option crazy?
<fwereade> niemeyer: combinatorial explosion
<niemeyer> fwereade: Exactly
<fwereade> niemeyer: which is what I'm concerned about here...
<niemeyer> fwereade: Which may also be seen as the difficulty of covering every path, and the difficulty of setting up and tearing down the project for so many combinations, and so on
<niemeyer> fwereade: The same thing can be said about testing good encapsulations through their client users
<fwereade> niemeyer: I feel like there are a lot of lines where one change will break many tests
<niemeyer> fwereade: To a lesser degree, of course..
<niemeyer> fwereade: Yes, entirely agreed
<niemeyer> fwereade: and to me preventing that is an artificial goal introduced by the purists of unit testing
<niemeyer> fwereade: In practice,
<niemeyer> fwereade: what happens is that changes are done piece-meal..
<niemeyer> fwereade: and tested very carefully
<fwereade> niemeyer: heh, I've once or twice found myself in a situation where I wished I'd followed that advice a bit better ;)
<niemeyer> fwereade: You'll always know when you change logic which breaks 10 tests instead of 1
<niemeyer> fwereade: This is good..
<niemeyer> fwereade: It means the semantics we've defined as correct were being asserted
<niemeyer> fwereade: Now, imagine that tomorrow.. you change the client code to not use all of describe_system's logic
<niemeyer> fwereade: Which is totally fine
<niemeyer> fwereade: Suddenly we have entirely uncovered logic
<niemeyer> fwereade: Which is truly bad, even more in a dynamic language
<fwereade> niemeyer: well, we have dead code which has the potential to become uncovered logic if we drop the ball on testing in the future, which isn't *quite* as bad
<fwereade> niemeyer: ...but I take your point
<fwereade> niemeyer: I'm just finding it a bit painful to be wilfully duplicating code, even if it is test code ;)
<niemeyer> fwereade: You don't have to duplicate tests..
<niemeyer> fwereade: The tests closer to the implementation are generally more detailed than the ones that take it as a client
<niemeyer> fwereade: You _have_ to cover it all in the tests which are next to the implementation
<fwereade> niemeyer: so, for example, tests for describe_system handle all the crazy stuff I can imagine
<niemeyer> fwereade: It's fine to not cover the sub-method details in the tests that just use other bits of the implementation
<niemeyer> fwereade: Right, exactly.. you've actually done that before
<niemeyer> fwereade: E.g.
<niemeyer> fwereade: In the MachineProvider interface
<niemeyer> fwereade: It's just a simple/boring test
<fwereade> niemeyer: but the tests for things which happen to use describe_system can just assume the happy path, and trust the other tests are solid
<niemeyer> fwereade: Well, mostly.. e.g. if describe_system has an error path that raises a problem, and the client code from that interface reacts to such error, you have to cover the reaction within the client code's test
<niemeyer> fwereade: But in that case you're not testing describe_system
<niemeyer> fwereade: You're testing the client reaction to an error
<fwereade> niemeyer: ...but then we *do* end up having to duplicate the coverage of the details in the distant test
<niemeyer> fwereade: No, not really
<niemeyer> fwereade: The test for describe_system will cover its own behavior which cause the error to be raised
<niemeyer> fwereade: The client test will test the reaction to the fact describe_system raised an error..
<niemeyer> fwereade: Simple example:
<fwereade> niemeyer: ok, but we still need to mock all the external interactions which would cause that error, right?
<niemeyer> fwereade: describe_system could have 10 reasons why it would raise OhMyGod
<niemeyer> fwereade: You have to cover it all in its tests
<fwereade> niemeyer: ...ok, yes, I only need to hit one of the sad paths in a distant test
<niemeyer> fwereade: If the client code has to take care of OhMyGod, you just test the fact it can cope with it
<fwereade> niemeyer: it just feels like, in the average case, foo_method will have one or two reasons for raising OhMyGod, and one reason to raise Gadzooks, and one to raise JiminyJillikers
<niemeyer> fwereade: Preferably, not mocking
<niemeyer> fwereade: But we've covered that
<niemeyer> fwereade: Are you saying you have an abstraction that isn't useful and should be removed?   I can buy that :-)
<fwereade> niemeyer: (indeed, but with external systems we're kinda forced down that path)
<fwereade> niemeyer: [JiminyJillikers] ...and so the only stuff we can avoid is one of the OhMyGod paths ...and that applies for every single thing that uses foo_method
<fwereade> niemeyer: I don't quite follow the abstraction comment
<fwereade> niemeyer: anyway, I think you have clarified your position, and I'm happy to follow your lead; it's just taking me a little while to get used to the project style, it's rather different to what I'm used to
<fwereade> niemeyer: thanks :)
<niemeyer> fwereade: No problem, and I know where you're coming from
<niemeyer> fwereade: In part, I'd be way less worried about that if we were not using a dynamic language
<niemeyer> fwereade: Uncovered code in a dynamic language is a bomb
<niemeyer> fwereade: I'd rather have duplication than increase the risks of having untouched logic 
<fwereade> niemeyer: yeah, that's a perfectly reasonable position :)
<fwereade> niemeyer: well, then, I've got some tests to write :)
<fwereade> niemeyer: (and I think I'll be beefing up the coverage in some of the pre-approved branches stacked on top of this one before I merge them, too)
<niemeyer> fwereade: That sounds awesome indeed, thanks a lot, and sorry for the extra pain
<fwereade> niemeyer: don't worry, solid testing is worth a bit of extra typing... you make solid arguments for the strategy, and all my objections are essentially what-ifs :)
<fwereade> niemeyer: I think I will need to do quite a lot of rearrangement though, it'll take a little while :(
<niemeyer> fwereade: No worries.. you've been cranking in a good pace
<niemeyer> fwereade: If you have to take a moment or two to rearrange in a way you feel is more suitable for the problem we have, that sounds positive
<fwereade> niemeyer: I'm suddenly not even sure it's a good idea: I *could* test some of the more distant things a bit less, but I suspect the best time to fix that is in the future
<fwereade> niemeyer: I'll see how it goes
<niemeyer> fwereade: That sounds reasonable too
<niemeyer> fwereade: I'm not personally worried that you might be testing too much :-)
<fwereade> niemeyer: haha :)
<niemeyer> "mgo enables us to blazingly serve more than 1.000.000 book covers a day while reducing our servers load" - Feedbooks.com
<niemeyer> Awesomeness
<fwereade> niemeyer: cool :D
<hazmat> g'morning
<hazmat> argh.. just lost my reply to namespaces..
<hazmat> aha.. there's still a copy in a temp file
<noodles775> Is there a relation-set (equivalent of relation-get?). I'm just trying to find in the docs how I provide my ip to the postgresql db-relation-changed hook from my own hook?
<RoAkSoAx> fwereade: howdy!!
<fwereade> RoAkSoAx: heyhey :)
<niemeyer> noodles775: Yeah, relation-set is exactly the name
<noodles775> Sweet, trying now.
<RoAkSoAx> fwereade: how's it going
<niemeyer> noodles775: This document may be helpful: https://ensemble.ubuntu.com/docs/formula.html
<niemeyer> noodles775: Look for relation-set there
<fwereade> RoAkSoAx: not too shabby... got a lot of tests to write this afternoon to firm up some of my languishing branches
<fwereade> RoAkSoAx: and yourself?
<RoAkSoAx> fwereade: pretty good
<RoAkSoAx> fwereade: so we have ensemble deploying
<RoAkSoAx> fwereade: the only thing it seems to be complaining about is not yet having the shutdown methods
<fwereade> RoAkSoAx: in theory, you shouldn't have to worry about those, because they exist
<noodles775> niemeyer: Ah, I was searching https://ensemble.ubuntu.com/docs/write-formula.html (where it's not mentioned). Thanks.
<RoAkSoAx> fwereade: well, when deploying a machine, you can see in the provision log that there's an error and the code fails to execute :)
<fwereade> RoAkSoAx: sadly they're (1) still in the process of being turned into something I can propose (trying to avoid all duplication with ec2) and (2) stacked up behind branches that aren't accepted yet
<fwereade> RoAkSoAx: why do we need to shut stuff down when we're deploying?
<RoAkSoAx> fwereade: yeah no worries, it is not a deal breaker for me at the moment as I can still deploy
<fwereade> RoAkSoAx: cool :D
<m_3> noodles775: here's an example of that very task... http://pastebin.com/utdiAHSs
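Alongside m_3's paste, here is a hedged sketch of a hook publishing its unit's address with relation-set, the counterpart of relation-get that noodles775 asked about. Hooks may be any executable, so a Python hook is plausible; the key name "ip", the address lookup, and the helper names are illustrative assumptions, not part of the postgresql formula's interface.

```python
import socket
import subprocess


def relation_set_command(**settings):
    """Build the relation-set invocation for a dict of key=value settings."""
    args = ["%s=%s" % (key, value) for key, value in sorted(settings.items())]
    return ["relation-set"] + args


def publish_address():
    """Body of a db-relation-changed hook: hand our IP to the remote unit.

    The remote side can then read it back with `relation-get ip`.
    """
    ip = socket.gethostbyname(socket.gethostname())
    subprocess.check_call(relation_set_command(ip=ip))
```

relation-set writes key=value pairs into the relation's settings node, where the other unit's relation-changed hook picks them up via relation-get.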
<RoAkSoAx> fwereade: I don't, but when I deploy it executes one of those functions for some reason
<RoAkSoAx> fwereade: and I have to restart ensemble to be able to actually deploy
<fwereade> RoAkSoAx: that's bizarre...
<RoAkSoAx> on the zookeeper
<RoAkSoAx> fwereade: indeed
<fwereade> everyone else: any guesses?
<RoAkSoAx> fwereade: other than that, I found an error complaining that "something needs to be str instead of unicode" when doing a "PUT"
<noodles775> m_3: Ah, thanks!
<fwereade> RoAkSoAx: oof, good catch
<RoAkSoAx> fwereade: and that something is the url
<fwereade> RoAkSoAx: would you file a bug for that one please, that stuff's in trunk now
<fwereade> RoAkSoAx: I presume it doesn't happen all the time?
<RoAkSoAx> fwereade: no, only when trying to store the formula on the webdav
<RoAkSoAx> fwereade: what I did is simply url = str(url)
<fwereade> RoAkSoAx: the formula url is always unicode then?
<RoAkSoAx> fwereade: appears to be
<fwereade> RoAkSoAx: good to know, thanks
<fwereade> RoAkSoAx: hmm, yeah, there are unicode tests for the s3 FileStorage :/
 * fwereade hangs his head
<hazmat> fwereade, the unicode gets introduced by yaml.load
<fwereade> hazmat: ah, makes sense
<hazmat> against the formula metadata, used for constructing the key to storage
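The bug RoAkSoAx hit can be sketched in a few lines. This is a hedged illustration: under Python 2, yaml.load returns unicode strings, and the webdav PUT layer insisted on str, so a key built from formula metadata tripped it up. The function name and key format below are assumptions; the coercion mirrors his `url = str(url)` workaround (the metadata dict stands in for yaml.load's output).

```python
def storage_key(metadata):
    """Build a webdav storage key from parsed formula metadata.

    The u"" literal mimics what yaml.load hands back on Python 2; str()
    is the workaround from the chat (a no-op on Python 3).
    """
    key = u"formulas/%s" % metadata["name"]
    return str(key)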
<m_3> noodles775: sure thing... lemme know if I can help
<noodles775> m_3: I wonder if it's even worth including that example in the README.markdown?
<m_3> noodles775: we're working on the best way to document formulas and relation interfaces
<noodles775> m_3: Great - let me know if I can help.
<m_3> noodles775: that particular interface is simple enough to put it in the readme... good idea, I'll do that for now
<_mup_> Bug #821493 was filed: tilda-expansion for deploy repository value <Ensemble:New> < https://launchpad.net/bugs/821493 >
<RoAkSoAx> fwereade: ok, so I'll prepare a branch for you
<RoAkSoAx> fwereade: so that we can have a branch ready for next week
<RoAkSoAx> fwereade: which I presume will also include the shutdown stuff
<fwereade> RoAkSoAx: cool; I hope I'll get to it before Monday, but it won't be today :(
<RoAkSoAx> fwereade: no worries
<RoAkSoAx> fwereade: we just need a working branch by monday to work on that all week
<niemeyer> I've got 5 calls this morning, 2 door bells, 1 family visit, ... slightly harder to focus today.
<noodles775> m_3: Hrm, I'm not sure what's ambiguous? http://paste.ubuntu.com/659326/
 * noodles775 looks more closely at the error message.
<m_3> noodles775: there're two relations in postgres
<m_3> noodles775: db and db-admin
<m_3> noodles775: so you need something like 'ensemble add-relation postgres:db ogt'
<noodles775> m_3: I see, thanks.
<m_3> noodles775: did the relation complete?
<m_3> kim0: sorry, didn't see that there were other sheets in that workbook
<kim0> it's ok .. too much is better than too little :)
<m_3> kim0: move stuff wherever you need to
<RoAkSoAx> fwereade: ping
<fwereade> RoAkSoAx: pong
<RoAkSoAx> fwereade: what does this do and is it really necessary?
<RoAkSoAx> http://pastebin.ubuntu.com/659354/
<RoAkSoAx> fwereade: cause if I return connect.run() it still works
<fwereade> RoAkSoAx: I'm not totally convinced by that _run_operation thing, I feel we should either wrap it around everything or skip it entirely
<RoAkSoAx> fwereade: ok, so the orchestra stuff will only return connect.run() then (which is what I'm using right now)
<robbiew> hazmat: ping
<fwereade> RoAkSoAx: you certainly don't need to worry about it in your tree, I'll figure out what to do with it when I have to ;)
<RoAkSoAx> fwereade: ok cool. Do you want me to include the url = str(url) on the branch?
<fwereade> RoAkSoAx: yeah, I think it's my responsibility to make sure I do the right thing
<hazmat> robbiew, pong
<robbiew> hazmat: so I'm an idiot and forgot about our call...again
<robbiew> hazmat: given we are meeting all next week and just spoke...I don't have anything critical, but if you want to talk...I'm game
<hazmat> robbiew, no worries, i've got an open time today, if you want to do it adhoc today, else next week
<hazmat> robbiew, next week sounds good
<robbiew> cool deal....and bring shorts!!!
<robbiew> high today: 107
<hazmat> ugh..
 * hazmat plans on running at high speeds between a/c environments
<robbiew> you could die in that run
<robbiew> lol
<m_3> wow... that's impressive
 * m_3 trying to imagine laptops and power strips at the pool bar
<hazmat> otoh, i guess i should be used to it by now, washington, dc hit a heat index of 118 a few weeks ago, high humidity, actual temp around 105.. lots of hot air in dc ;-)
<robbiew> hazmat: heh, yeah
<robbiew> I have family there, and used to go every Summer as a kid
<robbiew> the only difference is that in the evening, it generally cools down....not here :/
<RoAkSoAx> fwereade: lp:~andreserl/+junk/ensemble-sprint
<robbiew> some towns/cities have gone 40+ days of +100....damn dust bowl
<RoAkSoAx> here it is 91F with 65% humidity
<robbiew> yuk..that's jungle heat
<heckj> sounds like a US midwest temp, except maybe a bit dry...
<RoAkSoAx> I wish I was living right in front of the ocean
<niemeyer> Lunch time here
<fwereade> eod, later all :)
<robbiew> fwereade: peace out!
<kirkland> howdy peeps ... in case you missed it elsewhere, here's a good read on Orchestra + Ensemble http://blog.dustinkirkland.com/2011/08/formal-introduction-to-ubuntu-orchestra.html
<hazmat> kirkland, nice
<kirkland> kim0: ping
<robbiew> bcsaller: hey...just realized I missed our call...but given we just spoke as a team and have all next week together, I figured it was cool
 * robbiew is batting two for two on missing 1:1s
 * robbiew notes not to reschedule 1:1s on Fridays
<bcsaller> robbiew: no problem, we'll see each other next week anyway
<robbiew> cool deal...well, not "cool" heh
<_mup_> ensemble/security-connection r290 committed by kapil.thangavelu@canonical.com
<_mup_> ssh client uses single inheritance to get policy enforcement.
<niemeyer> kirkland: Very nice post
<_mup_> ensemble/security-connection-redux r287 committed by kapil.thangavelu@canonical.com
<_mup_> merge security connection redux and trunk
<kirkland> niemeyer: thanks!
<_mup_> ensemble/security-groups r292 committed by kapil.thangavelu@canonical.com
<_mup_> resolve conflict from security-connection merge.
<_mup_> ensemble/security-otp-principal r295 committed by kapil.thangavelu@canonical.com
<_mup_> resolve conflict from security-connection merge
<_mup_> ensemble/security-acl r300 committed by kapil.thangavelu@canonical.com
<_mup_> resolve conflict from security-connection merge
<_mup_> ensemble/states-with-principals r317 committed by kapil.thangavelu@canonical.com
<_mup_> resolve conflict from security-connection merge
<_mup_> ensemble/trunk-merge r275 committed by kapil.thangavelu@canonical.com
<_mup_> merge trunk
<_mup_> ensemble/security-groups r294 committed by kapil.thangavelu@canonical.com
<_mup_> address review comments, conflicts are silent (same goal state reached), duplicate state creation errors are runtime-exceptions.
<_mup_> ensemble/security-acl r302 committed by kapil.thangavelu@canonical.com
<_mup_> resolve security-group merge conflict.
<_mup_> Bug #821621 was filed: add custom commands to the cli <Ensemble:New> < https://launchpad.net/bugs/821621 >
<_mup_> ensemble/trunk r295 committed by kapil.thangavelu@canonical.com
<_mup_> merge security-groups [r=niemeyer,fwereade][f=814260]
<_mup_> Implements a security group as a stored/persistent principal
<_mup_> with membership denoting acl to utilize the group node, 
<_mup_> and thus credential usage on a connection.
<hazmat> hmm. i mislabeled that last merge
<hazmat> i just accidentally committed security-connection with the commit message from security-groups.. 
<hazmat> not a big deal practically, but the commit message is wrong, just curious if there is a way to fix the message
<niemeyer_> hazmat: No way to fix it
<niemeyer_> hazmat: You'd have to uncommit..
<niemeyer_> hazmat: The potential disaster of doing so isn't worth the trouble
<hazmat> its a remote repo, i'll just note in the next commit message
<niemeyer_> hazmat: That's a good plan
<_mup_> ensemble/trunk r296 committed by kapil.thangavelu@canonical.com
<_mup_> merge security-groups [r=niemeyer,fwereade][f=814260]
<_mup_> Previous merge (r295) was of security-policy-connection
<_mup_> Implements a security group as a stored/persistent principal
<_mup_> with membership denoting acl to utilize the group node, 
<_mup_> and thus credential usage on a connection.
<_mup_> ensemble/states-with-principals r319 committed by kapil.thangavelu@canonical.com
<_mup_> resolve conflict from merge of security-group
<_mup_> ensemble/security-policy-rules-redux r325 committed by kapil.thangavelu@canonical.com
<_mup_> resolve conflict from security-group merge
<_mup_> ensemble/security-policy-rules-redux r326 committed by kapil.thangavelu@canonical.com
<_mup_> resolve conflict with security-groups merge
<hazmat> niemeyer, what do you think about using blueprints for the next milestone
<hazmat> i'd like a better way of capturing work in progress on multi-branch features than what the kanban view gives us
<niemeyer> hazmat: I feel we shouldn't introduce additional burden without taking something out now
<niemeyer> hazmat: For features, we have specs + a branch per change + a bug per branch + a merge proposal per branch
<niemeyer> hazmat: We need a holistic view on that workflow and to understand what's the role of any additions, before going into it
<niemeyer> hazmat: The conversation about task management the other day might solve your worries in that area
<niemeyer> hazmat: But if we introduce it, my suggestion was to take the kanban and the hand management of MP state out
<hazmat> niemeyer, good point, i'm abusing bug tags to keep track of all the security work now, but we probably don't need more process atm
<niemeyer> hazmat: and bugs too, actually
<hazmat> niemeyer, what if the kanban view had management of state inline in the ui
<niemeyer> hazmat: When the bug is artificial
<niemeyer> hazmat: Wouldn't solve.. our current workflow lacks information ATM.. doing the same workflow in a different location wouldn't solve the issue
<niemeyer> hazmat: Nowhere do we track the "This is pending a re-review from Kapil", nor do we have a way to say "These are the things the team is waiting on you to do"
<niemeyer> hazmat: So, my ideal change would be taking Kanban + artificial bugs + MP state management out, and replacing them with a _good_ task management + calendar software that includes task assignment and a team-scoped view
<niemeyer> hazmat: I don't have a good proposal for that system yet, but I'm looking
<niemeyer> hazmat: We can end up implementing it, but I'd rather not
<hazmat> niemeyer, i mean pivotal tracker is nice.. but i think at the end of the day, we'll probably need something custom on top of lp
<niemeyer> hazmat: Or less of Launchpad
<niemeyer> hazmat: For the project management aspects
<hazmat> niemeyer, perhaps, but unless it's a tool that works well with an opensource community (a commercial tool would need to allow participation from the public, and ideally see our branches on lp), i'm not sure it's ever going to outweigh lp
<hazmat> and then we just end up dividing data
<hazmat> better to build the ui we want on top of lp imo
<niemeyer> hazmat: We have zero community contributors right now
<niemeyer> hazmat: Integrated into the workflow of development
<niemeyer> hazmat: and if/when we do have, we can certainly sponsor them in whatever system we have
<niemeyer> hazmat: There's really no friction in that regard
<hazmat> niemeyer, we're just beginning our community.. every road block to participation halves (at least) participation.
<hazmat> there's tons of market/ui-architecture research to support that
<niemeyer> hazmat: Sorry, I don't really understand the point you're making
<niemeyer> hazmat: How's a good task/project management solution bad in any way for the community?
<niemeyer> hazmat: I'm trying to take the burden out of the workflow, simplifying it and making it more enjoyable.. hard to believe the community will be bothered by it.
<hazmat> niemeyer, you're removing the relevant word: commercial project tools for ensemble need to be publicly available. commercial projects are typically billed against customers; we need public access to decrease barriers to entry
<niemeyer> hazmat: Heh
<niemeyer> hazmat: Let's start by not using EC2 then.
<niemeyer> hazmat: Or Google
<hazmat> niemeyer, commercial resources consumed as the cost of using the project are fine, the latter is atm private team meetings
<niemeyer> hazmat: Google, search..
<niemeyer> hazmat: Gmail.. Google+
<hazmat> is a publicly available tool
<niemeyer> hazmat: Yes.. and I'm saying we'll make whatever tool we use available to anyone who wants to participate
<niemeyer> hazmat: How's that different?
<niemeyer> hazmat: We're producing software.. under great licenses
<niemeyer> hazmat: We're in a public channel.. our list is public.. Ubuntu is open
<niemeyer> hazmat: If we have to use a commercial tool to handle a less relevant aspect of the project management, we will
<hazmat> niemeyer, that's not what you said at least re project management tools, afaics, correct me if i'm wrong
<niemeyer> hazmat: Just like we use Google search, and EC2
<niemeyer> hazmat: and Google+, and used Skype for conferences
<hazmat> private team meetings aren't the issue, the work tracking what's in a release is
<niemeyer> hazmat: We're pushing the envelope making good free software available for people, and using these tools does not compromise our goal in any way
<hazmat> what!?
<niemeyer> Erm..
<niemeyer> Sounds like that conversation isn't being positive.
<hazmat> i don't think we're pushing the envelope in the scope of free software
<niemeyer> Oh, really?  You don't think Ensemble is pushing the envelope in terms of free software?
 * niemeyer misses the point
<hazmat> we're creating a new unique tool, that's great.. but pushing the envelope of actually putting out free software?
<niemeyer> hazmat: Isn't Ensemble free software?  I'm lost now..
<hazmat> but we're talking about making the process less free to the public
<hazmat> and pushing the envelope?
<niemeyer> hazmat: And I'm talking about Ensemble, the product we develop.
<hazmat> llvm pushes the envelope, sqlite pushes the envelope, openstack is pushing the envelope... many projects push the envelope.. all free, all quite easy to contribute to, because they have public tools. i think you're conflating the comment, actually, that any tool we use should be publicly available
<hazmat> to enable participation
<niemeyer> hazmat: All of these push the envelope, and Ensemble does as well.
<hazmat> niemeyer, that's not the question.. or the discussion. let's not make it one. i agree.
<hazmat> the question is how do we get better tools, that do what we want?
<niemeyer> hazmat: That's my point.  My goal is to develop Ensemble, and to make it rock, and to make it free.
<niemeyer> hazmat: If I have to use Google search for that, even though it's closed source software, I will.
<niemeyer> hazmat: If I have to use a closed source communication software (or hardware, for that matter) so that I can communicate with people, I will.
<hazmat> to enable or foster a development community those tools that we use to produce it should be available for public participation.
<niemeyer> hazmat: I'm not solving those problems.. I'm not interested in developing good task management software right now (maybe some day).
<robbiew> I think the issues are being mixed up
<niemeyer> hazmat: I am very interested in the community.  If you think that to build a community you need to be using open source software in all edges, you're very mistaken.
<niemeyer> hazmat: Google+ has 25M users today.
<niemeyer> hazmat: If you want to continue that conversation, let's do so next week.
<hazmat> niemeyer, i didn't say opensource at the edges, i said publicly available
<robbiew> right, and Google+ doesn't require users to pay for it...which is where I think hazmat is going
<robbiew> it's publicly available to use
<hazmat> the data we have and automatically collect at lp is valuable
<niemeyer> robbiew, hazmat: and I just said repeatedly we'll make them available to anyone who's interested in participating.
<niemeyer> </rant>
<robbiew> right...i think you are agreeing more than you know ;)
<hazmat> niemeyer, "<niemeyer> hazmat: Yes.. and I'm saying we'll make whatever tool we use available to anyone who wants to participate" is not the same as publicly available
<hazmat> it's selective
<niemeyer> hazmat: "anyone" is selective.. please stop the bikeshed
<hazmat> niemeyer, i don't think it's a bikeshed.. although perhaps a misunderstanding.. the difference between people we select ("make available") and register in a tool vs. available to non-registered users (aka anyone, the public) is a rather different selection criterion.
<niemeyer> hazmat: "anyone" != "people we select".. let's please stop.  There's no tool.
<niemeyer> hazmat: Basecamp allows projects to be available openly, as a trivial example of the lack of reason for any kind of discussion before we have a proposal.
<hazmat> niemeyer, sounds good, sometimes it's good to have evaluation criteria before a proposal begins
<niemeyer> hazmat: Heh.. thanks for bashing my lack of proposal to death.
<niemeyer> I'm stepping out for the day.. I'll see you guys next week.
<_mup_> ensemble/security-otp-principal r298 committed by kapil.thangavelu@canonical.com
<_mup_> speling suggestions and use of runtimerror instead of valueerror per review comments.
<_mup_> ensemble/security-otp-principal r299 committed by kapil.thangavelu@canonical.com
<_mup_> update tests with new exceptions
<_mup_> ensemble/trunk r297 committed by kapil.thangavelu@canonical.com
<_mup_> merge security-otp-principal [r=niemeyer,fwereade][f=814320]
<_mup_> Implements an OTP Principal, that stores the credentials
<_mup_> for a named persistent principal in a node protected
<_mup_> by an ACL referencing a separate OTP credentials. The
<_mup_> credentials and path to the OTP node can be serialized
<_mup_> and passed around to their consumption target. The OTP
<_mup_> node is destroyed upon API usage. 
<_mup_> This is not a true OTP, but the best we can do atm without
<_mup_> extension of zookeeper. See bug: 819379 for tracking this issue
<_mup_> and merge proposal comments on this branch for further reference.
<_mup_> https://code.launchpad.net/~hazmat/ensemble/security-otp-principal/+merge/68754
<_mup_> ensemble/states-with-principals r321 committed by kapil.thangavelu@canonical.com
<_mup_> make acl.grants for the same principal additive in nature, removes a potential source of conflict.
<_mup_> ensemble/security-acl r304 committed by kapil.thangavelu@canonical.com
<_mup_> make acl.grants for the same principal additive in nature, removes a potential source of conflict.
<_mup_> ensemble/trunk r298 committed by kapil.thangavelu@canonical.com
<_mup_> merge security-acl [r=niemeyer,fwereade][f=816108]
<_mup_> Implement an ACL abstraction for manipulating a zk node's ACL.
<hazmat> have a good weekend folks, cheers
<niemeyer> Time for pre-sprint bill-paying
#ubuntu-ensemble 2011-08-07
 * hazmat boards plane to sprint
<jcastro> anyone around at the sprint hotel?
<bcsaller> jcastro: yeah, still around?
