#ubuntu-ensemble 2011-08-29
<_mup_> ensemble/expose-cleanup r337 committed by jim.baker@canonical.com
<_mup_> Test new securitygroup function
<_mup_> ensemble/expose-cleanup r338 committed by jim.baker@canonical.com
<_mup_> Merged trunk & resolved conflicts
<_mup_> ensemble/expose-cleanup r339 committed by jim.baker@canonical.com
<_mup_> Fixed merge issues
<niemeyer> Greetings!
<hazmat> niemeyer, greetings
<hazmat> i do find the abbreviations make reading the go branches a bit harder
<hazmat> it would help if the abbreviations were at least documented once
<hazmat> +type ifaceExpC struct {
<hazmat> is interface expander c?
<hazmat> the variable declaration for the type uses stringC which i assume to be a string constant.. but that's a different usage than the preceding statement that uses C for ?
<niemeyer> hazmat: Checker
<niemeyer> hazmat: That convention was used on every single type within schema.go
<niemeyer> hazmat: I can rename it if it feels so tricky
<niemeyer> hazmat: It's akin to _state, though
<hazmat> niemeyer, it's just that without that context it becomes harder to understand, i don't know that it needs renaming but a comment would be useful
<niemeyer> hazmat: I'm happy to rename or to add the comment
<niemeyer> hazmat: // The *C convention is used within schema.go for Checkers
<niemeyer> ?
<hazmat> niemeyer, what's schema.M?
<niemeyer> hazmat: That's documented
<niemeyer> hazmat: 
<niemeyer> / All map types used in the schema package are of type M.
<niemeyer> type M map[interface{}]interface{}
<hazmat> niemeyer, ic
<niemeyer> hazmat: This is just a handy alias to this type that allows building maps as
<niemeyer> hazmat: schema.M{"a": 1}, etc
<niemeyer> hazmat: map[interface{}]interface{}{"a": 1} would work, but.. :)
<hazmat> indeed, it's an improvement
<niemeyer> hazmat: Yeah, this became clear in tests
<niemeyer> hazmat: I wasn't using it before, but when I started to test the results in the formula work, it got boring
<hazmat> niemeyer, is this sort of multi-file lookup for abbreviations common in go?
<niemeyer> hazmat: multi-file lookup?
<hazmat> niemeyer, abbreviation used in one place, but needs to be tracked down to a different file to understand naming?
<niemeyer> hazmat: Not a different file.. this is a package, and the type is defined and has meaning within the package itself
<niemeyer> hazmat: schema.M is used for schemas only
<niemeyer> hazmat: Just for comprehension, note that the file could be named foo.go
<niemeyer> hazmat: The "schema" in "schema.M" comes from the package name, not the file name
<niemeyer> hazmat: In other words, I wouldn't use schema.M in things not related to the schema itself
<hazmat> niemeyer, they're not different packages? one is ensemble/go/schema and one is ensemble/go/formula?
<niemeyer> hazmat: These are different packages, yeah
<niemeyer> hazmat: In these terms it's quite similar to Python.. if you use ensemble.environment.Environment, you may go to ensemble.environment to see what that is
<hazmat> niemeyer, so in this case we have one package's abbreviation being used in another package, the question is if that sort of usage is common?
<niemeyer> hazmat: The distinction, as I was pointing out, is that unlike Python packages are multi-file
<niemeyer> hazmat: It's not an abbreviation.. it's a type
<niemeyer> hazmat: and the reason why the type is used in a different file is because we're using _schema_ in the other package
<hazmat> niemeyer, the name itself is an abbreviation, it's the name for a type
<niemeyer> hazmat: It's not an abbreviation, it's a real type
<niemeyer> hazmat: reflect.TypeOf(schema.M{}) != reflect.TypeOf(map[interface{}]interface{}{})
<hazmat> "M" is not an abbreviation for MapType?
<niemeyer> hazmat: Sorry, I see what you mean
<niemeyer> hazmat: Yes
<niemeyer> hazmat: Let me try to drive a parallel to show you what I mean
<hazmat> i think this would be much more readable if we avoided cross-package abbreviations, i'm just trying to understand if this usage is common in go
<niemeyer> hazmat: Alright.. I'll put map[interface{}]interface{}{} back then
<hazmat> niemeyer, i don't think that's any better, but is there a better name for the type?
<niemeyer> hazmat: schema.M?
<niemeyer> hazmat: That was my take on it.. makes building the map easier
<niemeyer> hazmat: I've used the same convention in other marshalling modules
<niemeyer> hazmat: But I won't bikeshed on that.. if you have other suggestions, I can change
<hazmat> niemeyer, schema.SchemaMap ?
<niemeyer> hazmat: Heh
<niemeyer> hazmat: Duplicating the schema name won't make it handier or more readable in any way
<niemeyer> hazmat: Also, what's the difference between schema.SchemaMap and schema.Map?
<niemeyer> hazmat: I can rename schema.Map to ...
<hazmat> MapChecker
<niemeyer> hazmat: schema.GenericMap
<hazmat> yeah.. that works as well, and M->Map
<niemeyer> hazmat: I can do that as well, as long as you never complain again about naming things with _state.. ;-D
<niemeyer> hazmat: Imagine how the schema definition would look like..
<hazmat> niemeyer, sounds good as long as there are no complaints about a -(_state) branch ;-)
<niemeyer> hazmat: schema.MapChecker(schema.StringChecker(), schema.InterfaceChecker(....))
<hazmat> yeah.. that's lame
<niemeyer> hazmat: Ok, I'll do the name change
<niemeyer> hazmat: But in a follow up branch if that's alright
<niemeyer> Since it'll touch everything else
<hazmat> niemeyer, feel free to change it to a different name, i just think having a common entry point into a package be an abbreviation also feels like a violation of encapsulation
<niemeyer> hazmat: I don't understand how that could be the case
<niemeyer> hazmat: There's zero encapsulation involved
<hazmat> because it tends to require reading the relevant implementation to understand its use
<niemeyer> hazmat: The only reason we use schema.M in formula is because we're dealing with the schema there
<niemeyer> hazmat: I also don't think this is true.. you don't have to read the implementation to understand its use.. you have to read the documentation, which is a pretty general problem 
<niemeyer> hazmat: It's like saying we need to read Python's json.dumps prototype to understand how to call it
<hazmat> what it is and what it's doing are fairly obvious by name alone, schema.M is not
<niemeyer> hazmat: Agreed.. it's a convention to be aware of
<hazmat> so if you're reading code that uses both, one is pretty clear without additional context and one is not
<niemeyer> hazmat: The same convention is used in gobson and goyaml
<niemeyer> Actually, I lie.. gobson only it seems
<niemeyer> Anyway.. I was explaining the POV
<niemeyer> I'm fine with renaming it
<niemeyer> hazmat: and answering your very first and simple question, no, this is not a general convention, and if it becomes one it's my fault ;-0
<niemeyer> ;-)
<hazmat> niemeyer, thanks, good to know
<_mup_> ensemble/go-formulas r8 committed by gustavo@niemeyer.net
<_mup_> Renamed schema.M/L to schema.Map/ListType
<_mup_> Bug #836753 was filed: schema.M/L are not readable <Ensemble:In Progress by niemeyer> < https://launchpad.net/bugs/836753 >
<niemeyer> fwereade: Gosh.. is all of that really needed for adding _http auth_?
<SpamapS> http://ec2-107-20-64-136.compute-1.amazonaws.com:8080/job/tested%20PPA/42/console
<SpamapS> So, other than the failure at the end because I forgot to quote the 'cd' ... first iteration of integration tests went well.
<SpamapS> lp:~clint-fewbar/ensemble/jenkins-test-suite has the code under misc/jenkins
<SpamapS> I need to break it up into 3 or 4 bash scripts.. but the interesting parts are wait4state and unit2machine .. which I think should become subcommands or at least helper args to some of the commands
<SpamapS> wait4state especially would be best written as a zookeeper watch
<SpamapS> One thing I was struggling with was whether there was a way to address yaml nodes similar to xpath for xml.
<SpamapS> The way that wait4state works is pretty weird and crazy.. but it works for this specific job
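There's no standardized XPath equivalent for YAML, but once the document is parsed into plain Python structures, a small dotted-path helper covers the simple addressing SpamapS is after. A hypothetical sketch (`yaml_path` and the example paths are invented here, not part of ensemble):

```python
def yaml_path(doc, path):
    """Look up a nested node in a parsed YAML document by dotted path.

    Example paths: "machines.0.state", "services.wordpress.units".
    List nodes are indexed by integer path components; everything
    else is treated as a mapping key.
    """
    node = doc
    for part in path.split("."):
        if isinstance(node, list):
            node = node[int(part)]
        else:
            node = node[part]
    return node
```

Something like `yaml.safe_load(status_output)` followed by `yaml_path(doc, "machines.0.instance-id")` would cover what wait4state currently does with grep-style text matching.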
<jimbaker> SpamapS, taking a look
<jimbaker> SpamapS, we should definitely make wait4state easier to do. as you said, it really should be using zk watch support
<jimbaker> SpamapS, re some sort of xpath - really this is some sort of query syntax for working w/ python collections. there must be a number of packages out there to do just that
<jimbaker> biab, i have to go to a quick doctor's appt
 * SpamapS will have to vacate his house and go work at a co-working space in about an hour while the exterminators work. UGH
<fwereade> niemeyer: sorry, had to pop out -- and, as far as I can tell, yes it is
<fwereade> niemeyer: I looked pretty hard for alternatives I could piggyback on, but nothing seemed cleanly doable
<SpamapS> fwereade: were you at least able to configure the webdav service to auth against cobbler, so the auth credentials can remain the same?
<fwereade> SpamapS: hm, I didn't think to try that -- if you don't specify separate cobbler credentials, it falls back to the orchestra ones
<fwereade> SpamapS: same model as the configurable storage-url -- if you don't include one, it assumes http://%(orchestra-server)s/webdav
<fwereade> SpamapS: but I have no idea how to set up cobbler and webdav with a shared auth source
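The fallback fwereade describes (defaulting storage-url from the orchestra server) is plain Python %-interpolation of the server name into a default URL template. A minimal sketch; the function name and config keys are illustrative, not the actual ensemble code:

```python
# Default template from the discussion above; %(orchestra-server)s is
# filled in from the environment configuration.
DEFAULT_STORAGE_URL = "http://%(orchestra-server)s/webdav"

def get_storage_url(config):
    """Return the configured storage-url, falling back to the
    orchestra server's webdav share when none is given."""
    url = config.get("storage-url")
    if url is None:
        url = DEFAULT_STORAGE_URL % config
    return url
```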
<SpamapS> fwereade: I forget what our default is, but https://fedorahosted.org/cobbler/wiki/CustomizableAuthentication has a run down on the auth methods possible.. authn_passthru would be the simplest way to get this done.
<SpamapS> fwereade: actually authn_configfile will also work because we can point the webdav service at the same config
<fwereade> SpamapS: easier than I thought then :)
<fwereade> niemeyer: https://code.launchpad.net/~fwereade/ensemble/webdav-unicode-paths/+merge/72579 -- did you mean to mark that as approved for merge, or should I get someone else to give it a look?
<SpamapS> fwereade: how well does urllib2 handle https btw?
<SpamapS> fwereade: is it going to squawk because of self signed certs or anything like that?
<fwereade> SpamapS: hm, I have no idea
<SpamapS> fwereade: I'd say that initially it *should* complain with invalid certs, but that we should make it easy to turn that off given Orchestra's "backoffice" role.
<kirkland> SpamapS: certs for what?
<fwereade> SpamapS: sounds reasonable, but I'm not sure what context you're asking about
<SpamapS> kirkland: for the webdav file storage
<kirkland> SpamapS: RoAkSoAx is already using orchestra + preseed to install SSL certs in deployed guests
<kirkland> SpamapS: for SSL protected rsyslog
<kirkland> SpamapS: you should be able to just force the webdav to use those same certs
<SpamapS> Seeing as formulas run as root and have no cryptographic verification built in.. it's fairly important to consider the security implications they carry.
<kirkland> SpamapS: or use the same mechanism to install additional certs
<SpamapS> Yeah that sounds good.
<SpamapS> Give the admin a way to easily pull that in to his local cert store.
<SpamapS> I think by default it should be easy and just work w/o intervention. I also think that if the admin needs/wants to crank up the security, they should be able to.
<SpamapS> (And I'm sure some people will argue that by default it should be high security, and admins should be able to turn off security)
<niemeyer> adam_g, negronjl, lynxman: ping
<negronjl> niemeyer: pong
<niemeyer> negronjl: Hey man
<negronjl> niemeyer:  Happy Monday ( if there is such a thing ) :)
<niemeyer> negronjl: Was just wondering about how you guys feel about how the package and code stability has been progressing
<niemeyer> negronjl: Are you happy with it in general, or do you feel we've been breaking things too often somehow?
<negronjl> niemeyer:  Although I am very happy with the project in general, I would feel more comfortable with some stable version of ensemble that may not have all of the latest and greatest features, but is stable.
<niemeyer> negronjl: That's a bit different from the actual question.. I would like to have a stable version too :-)
<negronjl> niemeyer:  I found myself with a very close call at the NoSQL conference, where I was scheduled to talk about Ensemble but ensemble was broken
<niemeyer> negronjl: The question is whether we've been breaking things often
<negronjl> niemeyer:  not often no.
<niemeyer> negronjl: Ok, cool.. I understand the desire for stability, and we're doing things to improve that
<niemeyer> negronjl: I just want to make sure I understand the feeling
<negronjl> niemeyer:  To clarify my earlier point.  It would help if there were a version of ensemble considered stable enough that we could somehow pin it ( a /stable ppa or so ) until another version becomes stable enough to replace it.  This would alleviate some of the issues that I experienced during NoSQL last week.
<negronjl> niemeyer:  Just my thoughts.
<niemeyer> negronjl: Sounds good.. I hope this is the version in Ubuntu
 * SpamapS too
<SpamapS> negronjl: to niemeyer's point, the version in Ubuntu is actually pretty stable. :)
<jcastro> yes that's what I was using and it was working fine
<adam_g> assuming what's in the archive is stable, it would be good if nodes installed straight from the archive by default, with the option to install a bleeding edge version from ppa:ensemble/ppa (the default now).
<adam_g> niemeyer: ^
<niemeyer> fwereade: I did, sorry
<fwereade> niemeyer: thanks, no worries
<niemeyer> fwereade: Marked it now
<niemeyer> fwereade: Sad about the auth complexity :(
<fwereade> niemeyer: cheers
<fwereade> niemeyer: me too :(
<niemeyer> fwereade: It's _auth_, for dev's sake
<fwereade> niemeyer: I came to terms with it eventually, but... quite
<ahasenack> adam_g: you mean the nodes install something else? Like, trunk?
<adam_g> ahasenack: i just bootstrapped a node using 0.5+bzr330-0ensemble1, and userdata sets up access to the ppa instead of installing out of the archive
<ahasenack> adam_g: is that ppa "bleeding edge"?
<adam_g> ahasenack: honestly do not know how/when/why packages get built and pushed there, but installing from there is where i've personally hit show stopper bugs.
<hazmat> ahasenack, the ppa is a trunk daily build
<ahasenack> ok
<niemeyer> fwereade: Can't we use the same technique twisted.web._auth.digest uses, and rely on twisted.creds.credentials?
<niemeyer> fwereade: It feels like you're reimplementing it
<fwereade> niemeyer: hm, I was put off by the privateness of the module, and assumed that something would end up breaking if I used a _kindaprivate module
<niemeyer> fwereade: The _auth module has pretty much nothing inside it
<fwereade> hm, I thought that was where the meat of it was
<fwereade> niemeyer: let me check again
<niemeyer> fwereade: You're allowed to look inside private modules! ;-0
<niemeyer> :-)
<hazmat> "_auth" ;-)
<hazmat> there's a bug open for it
<hazmat> it might be nice to ask therve or free
<hazmat> or on #twisted
<niemeyer> hazmat: There's nothing interesting inside it
<fwereade> niemeyer: well, crap, somehow I missed cred.credentials
<niemeyer> hazmat: Besides a pointer to twisted.cred.credentials
<fwereade> niemeyer: I, er, did look :/
<fwereade> niemeyer: maybe I stopped looking at that point though :(
<niemeyer> fwereade: Don't worry about it
<niemeyer> fwereade: Maybe it's not even useful.. but certainly has very similar logic, so worth checking
<fwereade> niemeyer: definitely so, I'll give it a look
<niemeyer> fwereade: In either case, the auth logic itself should be factored out of FileStorage
<niemeyer> fwereade: and the file
<niemeyer> fwereade: and put into its own module, and tested in isolation
<fwereade> niemeyer: it's not immediately apparent how to hook it up, but you're right, it's doing just the right thing
<niemeyer> fwereade: This is a major missing piece in the http library, rather than part of the FileStorage problem
<fwereade> niemeyer: it didn't feel quite big enough to do that at the time, but you're right
<fwereade> cheers
<hazmat> as another way of getting basic auth into twisted web client ... http://code.activestate.com/recipes/525493-simple-crawler-using-twisted/
<niemeyer> hazmat: Not sure about what's the point there?
<hazmat> it's a much simpler layering of basic auth onto twisted web client
<fwereade> hazmat: surely basic auth is barely worth bothering with?
<niemeyer> hazmat: Yeah, I'm trying to understand what's your underlying suggestion.. is it that we should use client.HTTPDownload?
<niemeyer> fwereade: Can we?
<fwereade> niemeyer: use basic auth?
<hazmat> fwereade, that's fair, it's not secure over http, but over https it's fine
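The recipe hazmat links layers basic auth on by hand, which really just means base64-encoding user:password into an Authorization header. A stdlib sketch of that step (the helper name is made up):

```python
import base64

def basic_auth_header(user, password):
    """Build the HTTP Basic Authorization header (RFC 2617):
    'Basic ' + base64(user + ':' + password)."""
    token = base64.b64encode(("%s:%s" % (user, password)).encode("utf-8"))
    return {"Authorization": "Basic " + token.decode("ascii")}
```

The resulting dict can be passed as extra request headers to whichever twisted client ends up being used (getPage, HTTPDownloader, or Agent), with the caveat above that the scheme is only sensible over https.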
<niemeyer> hazmat: Use HTTPDownloader
<niemeyer> Erm
<niemeyer> fwereade: Use HTTPDownloader
<fwereade> niemeyer: looking at it, we probably can, the only reason I didn't use getPage is because apparently I can't get at the reply headers
<niemeyer> fwereade: gotHeaders?
<fwereade> niemeyer: yes indeed, that's on HTTPDownloader
<fwereade> niemeyer: so, looking at it, we probably can
<fwereade> niemeyer: I saw things saying "if getPage isn't good enough, use Agent"... so I used Agent :/
<fwereade> niemeyer: sounds like it's a solid "needs fixing" with a side helping of "lrn2google"
<fwereade> niemeyer: at least the code'll end up smaller :)
<niemeyer> fwereade: Both approaches are surprisingly involved, to be honest
<fwereade> niemeyer: the creds stuff looks like it might be quite helpful, I'll look into that
<niemeyer> fwereade: The approach referenced by hazmat seems slightly better because it's more encapsulated than the one in your branch
<fwereade> niemeyer: there are approaches I didn't consider, so I should definitely look into them
<niemeyer> fwereade: It should be possible to merge your auth with the simpler approach
<niemeyer> hazmat: Which is why I was asking what was the underlying suggestion
<fwereade> niemeyer: ah, I see
<fwereade> niemeyer: I think it's worth spending a little while trying to hook up the cred.credentials solution though, that would be best
<niemeyer> fwereade: Sounds good.. I'm happy with any of these options
<fwereade> niemeyer: anyway, I'm sorry to cut you off, but the calls for suppertime are becoming quite insistent
<fwereade> niemeyer: I'll try to have something better tomorrow
<niemeyer> fwereade: The only thing that we should necessarily do is to factor that problem auth of the FileStorage
<niemeyer> s/auth of/out of/
<niemeyer> Contextual typo :)
<niemeyer> SpamapS: This is sitting on our review queue for 2 weeks untouched: https://code.launchpad.net/~clint-fewbar/txaws/fix-s3-port/+merge/71289
<niemeyer> SpamapS: Can you please address the points for getting it merged, or put it back in Work In Progress otherwise?
<SpamapS> niemeyer: I started working on that about 30 minutes ago actually. :)
<niemeyer> SpamapS: WOohay
<SpamapS> niemeyer: just writing some test code. I addressed the first point before Robert made it. :)
<niemeyer> SpamapS: Great timing then :)
<SpamapS> niemeyer: It seems that all of the tests in txaws are functional tests.. I see no mocking for AWS
<niemeyer> SpamapS: I've tweaked tests there before
<niemeyer> SpamapS: There are unittests
<SpamapS> jimbaker: so.. security group cleanup...
<jimbaker> SpamapS, yes, it's about ready to land
<jimbaker> (in expose-cleanup branch)
<SpamapS> jimbaker: http://ec2-107-20-64-136.compute-1.amazonaws.com:8080/job/tested%20PPA/43/console
<SpamapS> jimbaker: is that a bug?
<jimbaker> SpamapS, expose-cleanup fixes this issue
<SpamapS> jimbaker: so I have to manually delete all the groups before I can re-use the environment?
<jimbaker> what we see here is generally seen when doing an immediate bootstrap after shutdown - the instance originally associated with this group is still around (in 'shutting-down' state) and so cannot be deleted
<jimbaker> SpamapS, once expose-cleanup lands in trunk, you won't have to do anything
<jimbaker> the other possibility is to just wait sufficiently long
<SpamapS> jimbaker: the shutdown was 24 hours ago
<m_3> I just wait it out
<m_3> wow!
<jimbaker> SpamapS, something is holding onto that security group then
<m_3> usually cleans itself up after a few minutes
<jimbaker> SpamapS, so expose-cleanup simply polls the instance state to see if they're terminated. if so, it can delete the associated security groups
<jimbaker> or i should say, this polling is done in a revised shutdown_machines in ensemble.providers.ec2
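The poll-then-delete behaviour jimbaker describes boils down to a generic wait loop: poll the instance state until everything reports terminated, then delete the security groups. A hedged sketch of the waiting half; `poll_until` is a hypothetical helper, not the actual `shutdown_machines` code:

```python
import time

def poll_until(check, timeout=120, interval=5, clock=time.time, sleep=time.sleep):
    """Call check() every `interval` seconds until it returns True or
    `timeout` seconds have elapsed; return whether it succeeded.

    expose-cleanup-style usage: check() would ask EC2 whether all of
    the environment's instances are 'terminated'; only on success
    would the associated security groups be deleted.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        if check():
            return True
        sleep(interval)
    return False
```

Injecting `clock` and `sleep` keeps the loop testable without real delays; with the defaults it behaves as a plain wall-clock poll.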
<jimbaker> SpamapS, please check whether or not you can manually delete that security group
<jimbaker> using the AWS console
<jimbaker> or tool of choice
<SpamapS> how can I see what instances are in a group w/ the cmdline tools?
<jimbaker> SpamapS, i don't believe that's directly possible, but i could be wrong
<jimbaker> i know it's in the reservation part of the instance description
<SpamapS> hmm must have been transient
<_mup_> ensemble/expose-cleanup r340 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<SpamapS> it's odd, my testing does shutdown+bootstrap pretty constantly.. maybe 10 seconds of lag between those two steps
<m_3> 10s is rarely enough time... I usually let it sit for a minute between shutdown and the next bootstrap
<jimbaker> SpamapS, that testing regimen is not going to work well until this cleanup occurs - it needs at least a couple of minutes in between for ec2 to properly clean up
<SpamapS> jimbaker: but it has worked maybe 20 times so far
 * m_3 surprised
<jimbaker> SpamapS, i'm also surprised
<jimbaker> :)
<jimbaker> sounds like ec2 was especially fast in its cleanup
<SpamapS> I think the way I'll make it work is just to put each target in its own "environment"
<jimbaker> SpamapS, that makes much more sense
<m_3> good test of other things too :)
<SpamapS> I need to find a way to generate and dump an environments.yaml into the test chroots anyway. :p
<jimbaker> that will definitely isolate
<_mup_> ensemble/expose-cleanup r341 committed by jim.baker@canonical.com
<_mup_> PEP8 & PyFlakes
<lynxman> niemeyer: pong, sorry it was bank holiday here in the UK
<niemeyer> lynxman: Hey
<niemeyer> lynxman: No worries
<jimbaker> hazmat, i briefly looked at autodoc, but there seems to be some problem in our docstrings which is causing it to crash when i run the sphinx build
<niemeyer> lynxman: Was just wondering about how Ensemble has been treating you.. I've heard a few claims that you guys were facing problems with us introducing new bugs daily, but it sounds like it wasn't really the case
<lynxman> niemeyer: we had some stability issues in formulas indeed, but as soon as that happens we either ask here or open bug reports
<niemeyer> lynxman: What kind of stability problem was that?  Just want to make sure I'm aware
<hazmat> jimbaker, any tracebacks?
<jimbaker> hazmat, sure, let me paste
<jimbaker> hazmat, actually hold on that - i was going against my expose-cleanup branch, but i didn't want it to get mired in a bigger problem, so i reverted my sphinx conf changes
<jimbaker> hazmat, just fairly simple - update conf.py, plus __init__.py (or use some third party gen scripts to do the walking)
<lynxman> niemeyer: in my case it was just trying to make Ensemble work against openstack, but hazmat helped a lot on it so I didn't need to file any bugs
<jimbaker> hazmat, docutils was crashing in its statemachine.py code w/ a recursion limit issue, so something was indigestible
<niemeyer> lynxman: Ahh, ok
<lynxman> niemeyer: negronjl and iamfuzz are working on the cloudfoundry formulas which encountered more issues, which were raised on time
<niemeyer> lynxman: Well.. OpenStack support is a minefield at the moment
<SpamapS> so.. does ensemble make use of blueprints at all?
<jimbaker> hazmat, anyway, i will recreate in a bit
<hazmat> SpamapS, not really, bcsaller1 and i were experimenting with them
<hazmat> but pretty much zero
<lynxman> niemeyer: it kinda is, but so far works pretty okay :)
<hazmat> niemeyer, its not that bad
<niemeyer> lynxman: That's good to know :-)
<SpamapS> I was thinking of a suite of tools to make ensemble more command-line-useful ... waiting for states, outputting data about machines/services that is not yaml.. and exit codes to differentiate permanent and transient errors.
<lynxman> niemeyer: as said, hazmat helped a great deal
<hazmat> niemeyer, the ec2 compatibility issue they have marked as a bug, although it's unclear when they'll fix it.. but just as far as impact to the ec2 provider, it's like a dozen lines
<jimbaker> SpamapS, maybe "ensemble watch"
<niemeyer> hazmat: Yeah, I'm more concerned in conceptual terms than in terms of the particular issue
<hazmat> not including tests of course
<jimbaker> SpamapS, another thing that might be useful is a wait-for-ensemble-is-ready command
<niemeyer> hazmat: It doesn't feel like there's serious interest in preserving the compatibility
<hazmat> niemeyer, so they will fix ec2 bugs, but not s3 is what i got out of it
<hazmat> niemeyer, agreed about the overall compatibility.. they have a layer, it has tests, they won't break it, but if something doesn't work, we'll probably need to make it do so
<jimbaker> SpamapS, there are now at least two places that do that last bit for the ec2 provider (via polling ec2_describe_instances); in particular ftests have this functionality
<SpamapS> jimbaker: right, there's really a swath of changes that need a binding purpose.. this is what blueprints are for
<jimbaker> m_3, i would assume this wait-for-ensemble command would be useful for you too
<hazmat> niemeyer, they have an ec2 compatibility bug for the security group work that ensemble needs, it's still marked as a bug as opposed to the s3 one.
<hazmat> https://bugs.launchpad.net/nova/+bug/829609
<_mup_> Bug #829609: EC2 compatibility describe security group returns erroneous value for group ip permissions <Ensemble:In Progress by hazmat> <OpenStack Compute (nova):Confirmed> < https://launchpad.net/bugs/829609 >
<SpamapS> jimbaker: actually I'd say that zookeeper is the system of record... so polling zk is the only sensible thing to do.
<jimbaker> SpamapS, also in ftests
<jimbaker> specifically a watch for our ZK nodes created by ensemble
<niemeyer> hazmat: Good to know about their feeling about that, thanks
<jimbaker> let me push what i have for that - not yet sufficiently robust, but at least it usually works
<hazmat> depending on the command probably an agent watch for unit or machine
<_mup_> ensemble/fix-functional-testing r334 committed by jim.baker@canonical.com
<_mup_> Better branch identification
<jimbaker> SpamapS, i don't like ensemble.ftests.test_ec2_provider.EC2MachineTest.assert_node yet, but it's on the right path
<jimbaker> note that we still need to poll ec2 first, then *watch* zk
<SpamapS> jimbaker: well my point is that this is something to plan at a larger scale than solving each of these issues one by one.
<SpamapS> jimbaker: there's a real use case for using ensemble in jenkins for instance.
<SpamapS> jimbaker: you only need to ask the provider for the seed so you can find zookeeper. After that you wait for zookeeper, then you listen to zookeeper.
<jimbaker> SpamapS, agreed with that approach
<m_3> jimbaker: yes, something that gives us that functionality
<SpamapS> ugh these test cases are really oddly organized
<hazmat> SpamapS, yeah.. you need an output capture from openstack effectively
<SpamapS> hazmat: for what I did, I just need to verify that the port is put back into the generated urls actually. :)
<hazmat> SpamapS, that sounds much nicer ;-)
<hazmat> fixing the security group parsing needs a response output capture to verify the implementation
<SpamapS> hazmat: right, that one at least is being discussed upstream as being fixed .. tho probably not in diablo. :(
<SpamapS> hrm.. txaws's test cases that use venusian fail when it is installed .. looks like they were written to a different venusian api
<SpamapS> http://paste.ubuntu.com/677409/
<_mup_> ensemble/trunk r335 committed by jim.baker@canonical.com
<_mup_> merge expose-cleanup [r=niemeyer,fwereade][f=824219]
<_mup_> For EC2: shutting down machines removes machine security groups;
<_mup_> destroying the environment removes the environment security group.
<niemeyer> jimbaker: Just double checking: have you tested it with a real interaction on _trunk_?
<jimbaker> niemeyer, i will double check it one more time - it has worked every time, but i will check as pushed
<niemeyer> jimbaker: Awesome, thanks a lot
<ahasenack> SpamapS: txaws needs venusian 1.0a1 at least, it was just released
<niemeyer> SpamapS: your opinion on this would also be welcome
<jimbaker> niemeyer, looks good as pushed with bouncing repeatedly (ensemble bootstrap && ensemble shutdown && ...), this was the pathological case of what the cleanup now fixes
<jimbaker> SpamapS, m_3 - but nothing like real usage. so please tell me if this doesn't fix the problem you've been seeing
<m_3> jimbaker: will do
<jimbaker> m_3, thanks!
<SpamapS> so, I just realized a big problem
<SpamapS> Ensemble by default pulls itself on deployed machines from the PPA
<SpamapS> This means that if ensemble is being driven from, say, MacPorts.. at revision 295 .. it may deploy incompatible versions of ensemble onto the machines.
<SpamapS> I *think* I have an answer for this.. but it may be a little bit weird...
<jimbaker> SpamapS, there's an outstanding bug about this that i started looking into
<jimbaker> bug 828147
<_mup_> Bug #828147: Ensemble branch option needs to allow for distro pkg, ppa, and source branch install <Ensemble:New for jimbaker> < https://launchpad.net/bugs/828147 >
<jimbaker> SpamapS, it's not quite what you described just now, but it does allow for precise specification of what it should be on the target environment
<SpamapS> jimbaker: it has to automatically choose itself though
<jimbaker> SpamapS, exactly, it would be nice for an automagic option in ensemble-origin
<SpamapS> I can't think of any time where I want to, by default, deploy something other than what I'm running.
<SpamapS> unless we guarantee backward compatibility
<jimbaker> SpamapS, we actually are planning to do just that for client access
<hazmat> jimbaker, we're not guaranteeing backward compatibility
<jimbaker> there's now a protocol version that's stored in zk (in the topology node to be precise); it's checked for compatibility on every operation working w/ topology (which is most of them)
<hazmat> we're just detecting a conflict
<jimbaker> hazmat, sorry i've reinterpreted the problem so it works like you said :)
<jimbaker> we only guarantee backwards compatibility if the protocol version is compatible
<jimbaker> this still allows for some wiggle room
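The check jimbaker describes amounts to comparing the protocol version stored in the topology node against the client's own before any topology operation. A minimal sketch; the names, the dict-shaped topology, and the exact-match policy are assumptions for illustration:

```python
class IncompatibleVersion(Exception):
    """Raised when the deployed protocol does not match this client."""

# Hypothetical protocol version compiled into this client.
VERSION = 1

def check_topology_version(topology):
    """Refuse to operate on a topology written under a different protocol.

    `topology` stands in for the parsed contents of the ZK topology
    node, which carries the protocol version it was written with.
    """
    stored = topology.get("version")
    if stored != VERSION:
        raise IncompatibleVersion(
            "topology protocol %r != client protocol %r" % (stored, VERSION))
```

Because the check runs on every topology read/write, a client built against an older protocol fails fast instead of silently corrupting state written by a newer one.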
<hazmat> SpamapS, the txaws tests run fine on natty fwiw
<SpamapS> hazmat: yeah the tests run fine for me w/o venusian
<SpamapS> on oneiric
 * ahasenack is invisible
<hazmat> SpamapS, ping me when you've got tests pushed, i'll merge into my updated branch (~hazmat/txaws/fix-s3-port-and-bucket-op/) and push for review
<niemeyer> hazmat: Does that look good:
<niemeyer> c.Assert(meta.Provides["server"], Equals, formula.Relation{Interface: "mysql"})
<niemeyer> ?
<hazmat> niemeyer, that looks great
<niemeyer> hazmat: Awesome, I'll move forward with it then, thanks
<SpamapS> hazmat: I pushed over an hour ago
<hazmat> SpamapS, awesome, good to know, i'll merge and push the additional fixes for review, thanks
<OldSchool> does ensemble have to be used w/ EC2 or can you use it with, say, an in-house ESXi environment?
<OldSchool> ah, wait, found the FAQs finally
<hazmat> OldSchool, no esxi support at the moment, ensemble supports pluggable providers, and we're working on some additional ones at the moment (physical machine, openstack, local dev), but no plans atm for esxi 
<SpamapS> hazmat: no, thank *you* :)
<OldSchool> thank you
<hazmat> OldSchool, it shouldn't be too bad to add one, implementing a provider has gotten cleaned up quite a bit and made easier, but it's probably going to need community contribution.. if you're interested, folks around here would be happy to help you out
<OldSchool> sweet, I just heard about it from a Canonical rep on Friday after doing some tinkering w/ Puppet and Landscape
<OldSchool> I only admin 4-5 Ubuntu boxes at work and they're pretty self-sufficient, I'm just always interested in automation, more curious for future projects
<hazmat> OldSchool, if you want to give ensemble a test drive, it's probably best to use ec2.. we'll be supporting a local development mode in the 11.10 release for ubuntu laptop/desktops which will also allow some experimentation and test driving
<OldSchool> awesome
<OldSchool> I'll have to try out EC2
<OldSchool> thanks for your help, gotta run but I'm sure I'll be around
<SpamapS> argh, why does vim think 'description:' needs to be un-indented in yaml?
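One workaround for the vim behaviour SpamapS hits (a sketch, assuming vim's stock YAML indent rules are what re-indents on typing the colon): stop ':' and '#' from triggering a re-indent.

```vim
" Hypothetical ~/.vimrc fragment: keep vim from re-indenting YAML lines
" the moment ':' or a leading '#' is typed.
autocmd FileType yaml setlocal indentkeys-=<:> indentkeys-=0#
```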
<_mup_> ensemble/go-formulas r9 committed by gustavo@niemeyer.net
<_mup_> Adding sample formula repository content for tests.
<_mup_> ensemble/go-formulas r10 committed by gustavo@niemeyer.net
<_mup_> Added support for parsing relations within the formula metadata.
<_mup_> This completes support for formula metadata parsing.
<_mup_> Bug #837027 was filed: Formula metadata parsing in Go must be completed <Ensemble:New> < https://launchpad.net/bugs/837027 >
<_mup_> ensemble/stack-crack r329 committed by kapil.thangavelu@canonical.com
<_mup_> be flexible about int vs. str status codes, remove error test relying on error result response parsing.
<_mup_> ensemble/stack-crack r330 committed by kapil.thangavelu@canonical.com
<_mup_> remove the ec2 key name usage, to verify it's necessary.
<botchagalupe> ensemble: environments
<botchagalupe> environments:
<botchagalupe>   sample:
<botchagalupe>     type: ec2
<botchagalupe>     control-bucket: ensemble-9cb5c6eea8334c9780078fe63f6ccdb0
<botchagalupe>     admin-secret: 06ae08af8e0f4dd79180401bc28824b0
<botchagalupe> access-key: AKIAISCMJNNSSTGM4L2A
<botchagalupe> secret-key: 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
<botchagalupe> That doesn't work and this does...
<botchagalupe> ensemble: environments
<botchagalupe> environments:
<botchagalupe>   sample:
<botchagalupe>     type: ec2
<botchagalupe>     control-bucket: ensemble-9cb5c6eea8334c9780078fe63f6ccdb0
<botchagalupe>     admin-secret: 06ae08af8e0f4dd79180401bc28824b0
<botchagalupe>     access-key: AKIAISCMJNNSSTGM4L2A
<botchagalupe>     secret-key: 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
<botchagalupe> also any help on how to pre-seed an environments.yaml file before the bootstrap?
<niemeyer> botchagalupe: Hey
<niemeyer> botchagalupe: I'm not sure I get the pastes.. do you mean you'd like to have global access/secret keys?
<niemeyer> botchagalupe: The file format is yaml, so the indentation indeed matters
<niemeyer> botchagalupe: In terms of pre-seeding, you can just drop it in place and Ensemble will rely on the pre-existing one
<niemeyer> botchagalupe: pre-existing environments.yaml, that is
<niemeyer> botchagalupe: You can hand-pick all the values.. Ensemble doesn't really need anything special in there as long as you provide valid data
<botchagalupe> no, if I put the access-key at col 1 the bootstrap fails
<botchagalupe> ugly in col 1, but ugly not to support it...
<botchagalupe> my bad… sorry…
<botchagalupe> cool… will it override the bucket and other key values… I will give it a try .. thanks…
<niemeyer> botchagalupe: Yeah, the key is part of the environment, so it has to be indented with the environment fields itself
<niemeyer> botchagalupe: That said, you can also use environment variables for the keys
<niemeyer> botchagalupe: As standard for the ec2-* tools
<niemeyer> botchagalupe: and omit them from the env entirely
<niemeyer> botchagalupe: AWS_ACCESS_KEY_ID etc
<botchagalupe> Saw that in the doc.. I was trying to pre-seed in a chef cookbook for quick setup…
<hazmat> botchagalupe, you can also set it in environment variables
<niemeyer> botchagalupe: Quite interesting
<hazmat> botchagalupe, ie. AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY
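As a concrete sketch of what niemeyer and hazmat describe (placeholder values, not real credentials): export the two variables and the keys can then be omitted from environments.yaml entirely.

```shell
# Placeholder credentials -- substitute your own AWS values.
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEY"
export AWS_SECRET_ACCESS_KEY="examplesecret"
# With these set, access-key/secret-key need not appear in
# environments.yaml; the same convention the standard ec2-* tools use.
echo "$AWS_ACCESS_KEY_ID"
```

A chef cookbook pre-seeding an environment could drop these into the shell profile instead of templating secrets into the yaml file.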
<jcastro> it would be a nice convenience feature to spit out the link to the keys on AWS when you try to bootstrap and it doesn't find anything
<niemeyer> hazmat: I just said that :)
<hazmat> niemeyer, oh.. my proxy disconnected for a few minutes, i missed about 20m of the conversation it looks like
<niemeyer> jcastro: You mentioning the env variables as well?
<niemeyer> hazmat: Aha, ok :)
<hazmat> hmm.. having some problems on canonistack connectivity
<hazmat> looks like the s3server is hosed
<_mup_> ensemble/stack-crack r331 committed by kapil.thangavelu@canonical.com
<_mup_> verify txaws branch support
#ubuntu-ensemble 2011-08-30
<SpamapS> hazmat: quite common
<SpamapS> hazmat: it actually will come back.. you have to wait until whatever giant operation (probably an image upload) completes
 * SpamapS decides its time to call it a day.
<hazmat> bcsaller1, is the lxc-lib branch ready to go back into review?
<bcsaller1> hazmat: I think I can mark it again, yeah
<_mup_> ensemble/local-ubuntu-provider r331 committed by kapil.thangavelu@canonical.com
<_mup_> classify managed zk, add missing test package module
<_mup_> ensemble/local-ubuntu-provider r332 committed by kapil.thangavelu@canonical.com
<_mup_> refactor file storage out of dummy into common for lxc reuse
<_mup_> ensemble/local-ubuntu-provider r333 committed by kapil.thangavelu@canonical.com
<_mup_> switch lxc provider over to using providers.common.files
<_mup_> ensemble/local-ubuntu-provider r334 committed by kapil.thangavelu@canonical.com
<_mup_> local machine using /proc/uptime for launch time
<wrtp> niemeyer: hiya
<niemeyer> wrtp: Hey!
<fwereade> niemeyer: https://code.launchpad.net/~niemeyer/ensemble/go-final-formula-meta/+merge/73304
<fwereade> niemeyer: is the testrepo copied from the python one? and if so, why?
<niemeyer> fwereade: It is.. and I don't get the question
<fwereade> niemeyer: there seem to be redundant formulas
<fwereade> niemeyer: it feels to me as if either the go and the python ones are conceptually different -- in which case we don't need the extra formulas like mysql2
<fwereade> niemeyer: ...or fundamentally the same, in which case I feel nervous about the lack of any mechanism to keep them the same
<fwereade> niemeyer: does that make more sense?
<niemeyer> fwereade: It does, thanks
<niemeyer> fwereade: There's no such thing as "Go formula" and "Python formula"
<niemeyer> fwereade: There is "Ensemble formula" only
<fwereade> niemeyer: indeed
<niemeyer> fwereade: The repository there contains example Ensemble formulas for test purposes
<Daviey> What would be a good formula to be beta 1 QA Guinea pig?  Ideally expected to work on bare metal.. OpenStack formula is too complicated for the QA team to scratch for this.
<niemeyer> fwereade: We have two self-contained repositories.. one with a Python implementation, and one with a Go implementation
<fwereade> niemeyer: understood
<niemeyer> fwereade: I'm not sure about what's your proposal.. are you suggesting we should merge both repositories in a single one?
<niemeyer> Daviey: Hmm
<niemeyer> SpamapS: any suggestions?
<fwereade> niemeyer: ...maybe -- I'm not really proposing anything so much as talking in the hope that I can feel comfortable with a verdict on the review
<m_3> Daviey: anything will do.... mysql and mediawiki have the most history, but hadoop-master/slave are easy to install and more apropos to the openstack demo atm
<Daviey> m_3: Hmm.. mysql and mediawiki might be easier for someone that doesn't have prior experience with hadoop?
<niemeyer> fwereade: No worries really.. I'm just explaining the problem we have
<niemeyer> fwereade: and trying to investigate if you have anything in mind :)
<fwereade> niemeyer: I guess I would be happier if we had a common place for common stuff
<niemeyer> fwereade: What's the problem you have in mind?
<m_3> Daviey: sure... there're more moving parts with the mysql/mediawiki... hadoop has a little more of a mem requirement too
<Daviey> m_3: I'll present them as either/or then :)
<Daviey> thanks
<fwereade> niemeyer: just that someone will, sooner or later, change one but not the other, and I'm not sure what the worst possible consequences could be
<niemeyer> Daviey, m_3: Hadoop feels sexy, if nothing else :)
<m_3> Daviey: sure... hadoop's easy though... we have several blog posts about using it with ensemble in cloud.ubuntu.com
<m_3> certainly has all the buzz
<niemeyer> fwereade: That's a critical point we have to watch out for indeed
<Daviey> niemeyer / m_3: Heh.. Ok.. Someone might want to have a chat with hggdh about QA effort ongoing. :)
<niemeyer> fwereade: The test repository feels pretty minor, though
<niemeyer> Daviey: Superb!
<niemeyer> fwereade: This is a real issue across the board
<niemeyer> fwereade: We are effectively duplicating logic
<fwereade> niemeyer: true, but the test repo will be the second thing I approve which could drift out of sync, and I don't want that to become a habit ;)
<niemeyer> fwereade: I'm not sure the burden of separating out into different repositories pays off
<niemeyer> fwereade: Given that we'll still have the burden either way
<niemeyer> fwereade: Care from coders and reviewers is necessary
<niemeyer> fwereade: Heh..
<niemeyer> fwereade: Every single line in that source code can drift out of sync
<niemeyer> fwereade: The real solution for that is to finish the port.
<fwereade> niemeyer: heh, yes
<niemeyer> fwereade: After thinking some more, I also don't feel good about the suggestion of having a metadata format that is common for both
<niemeyer> fwereade: for syncing up the schema
<niemeyer> fwereade: Because that's _increasing_ the problem
<niemeyer> fwereade: You're basically pushing the problem to a different level
<niemeyer> fwereade: We'd have to develop a schema metadata format (a schema schema!)
<niemeyer> fwereade: and a parser, etc
<niemeyer> fwereade: and _that_ would have to stay in sync
<niemeyer> fwereade: Doesn't feel like it'd pay off
<fwereade> niemeyer: well, I never proposed that :)
<niemeyer> fwereade: I think you did?
<fwereade> niemeyer: not exactly
<fwereade> niemeyer: well, ok, I did :/ ...but I'm not convinced it actually makes the problem worse
<niemeyer> fwereade: "A language-agnostic schema format (where we define any given schema OAOO, and load it in both Go and Python) would solve the second problem."
<niemeyer> https://bugs.launchpad.net/ensemble/+bug/833906
<_mup_> Bug #833906: go and python schema implementations could drift out of sync <Ensemble:New> < https://launchpad.net/bugs/833906 >
<niemeyer> fwereade: That's a schema schema
<fwereade> niemeyer: hmm, I wrote it badly: I mentioned that, and went on to describe something similar but less heavyweight
<niemeyer> fwereade: The schema is in place.. formula metadata is in review
<niemeyer> fwereade: Very small amount of code
<fwereade> niemeyer: just a common way of specifying tests -- ie we keep schemas native and readable, but ensure we're running the same tests for each implementation
<niemeyer> fwereade: Diving into a significantly more complex implementation in *both* languages to solve a drift-away problem feels like going into the opposite direction
<fwereade> niemeyer: I don't believe that what we have is unworkable, otherwise I'd be making a lot more noise about it
<fwereade> niemeyer: but I'm fretting that the amount of things we need to remember to keep in sync will grow and grow
<niemeyer> fwereade: Again, the way to avoid that is to not have duplication
<niemeyer> fwereade: You seem to be worried about the metadata, but that's the simplest case
<niemeyer> fwereade: Just cp -a solves it..  the real problem is in logic
<niemeyer> fwereade: I'm also concerned for sure, but I'm more concerned about logic and thinking about approaches to solve that
<fwereade> niemeyer: I'm worried about the duplication, and the metadata is what set me off :)
<niemeyer> fwereade: That's what I just said I think
<niemeyer> fwereade: I want help on that.. how can we avoid duplication of _logic_
<fwereade> niemeyer: clearly we need a third language, which will generate both go and python for us
 * fwereade ducks
<niemeyer> LOL
<niemeyer> fwereade: The interesting thing is that this is exactly what the bug above is about ;-)
<fwereade> niemeyer: kind of, I don't think I've expressed myself very effectively there, I might have another go at it in a little while
<fwereade> niemeyer: anyway, thanks for the discussion, my nerves are soothed ;)
<niemeyer> fwereade: :-)
<niemeyer> fwereade: Mine are not.. I'm still concerned about duplication
<niemeyer> fwereade: I just don't see a big deal in test data.. this can easily be copied
<fwereade> niemeyer: I'm still worried about that, but I'm reassured that you are too
<niemeyer> fwereade: I've been thinking about potential ways to speed up a migration 
<niemeyer> fwereade: The real solution would really be to finish the port
<niemeyer> fwereade: We can have intermediate wins, though
<niemeyer> fwereade: and port bits in a way we can kill the other side
<niemeyer> fwereade: I have also been thinking about the possibility of _integrating_ both ports (!)
<niemeyer> fwereade: There may be a way to do it nicely
<niemeyer> fwereade: and I want to talk to you about that at some point.. I know you've made some very interesting work on that area before
<fwereade> niemeyer: hmm, that does sound interesting actually
<hazmat> gopython ?
<niemeyer> hazmat: Yep :)
<niemeyer> fwereade, hazmat: http://labix.org/lunatic-python
<fwereade> niemeyer: nice :D
<niemeyer> I'm still not entirely sure about it, since it might get so involved that simply porting things over could be easier/faster/simpler
<niemeyer> But, it's an idea..
<fwereade> niemeyer: > =python.eval("lua.eval('python.eval(\"lua.eval(\\'t\\')\")')")
<fwereade> niemeyer: you're clearly insane :)
<niemeyer> fwereade: Yeah, it's sick, I know :-)
<fwereade> niemeyer: (but in a good way ;))
<SpamapS> niemeyer: re suggestions for a formula.. I'm particularly fond of /usr/share/principia/tests/mediawiki.sh from principia-tools. :)
<niemeyer> SpamapS: Sounds good
<SpamapS> for bare metal that might be too many machines tho
<SpamapS> Daviey: re QA effort.. I've been writing automated tests for the wordpress/mysql example on EC2 and working on the mediawiki example as well. Maybe hggdb can use my scripts?
<SpamapS> hggdh even :-P
<Daviey> SpamapS: -> #ubuntu-testing ?
<SpamapS> Man.. I gotta /part some channels and close some query windows.. I'm at 106 open.
<Daviey> 106.. is that all? :)
<_mup_> Bug #837476 was filed: python2.6: /usr/lib/pymodules/python2.6/ensemble/providers/ec2/files.py:8: DeprecationWarning: the sha module is deprecated <Ensemble:New> < https://launchpad.net/bugs/837476 >
<_mup_> ensemble/local-ubuntu-provider r335 committed by kapil.thangavelu@canonical.com
<_mup_> pull ability to use release tarballs and src builds of zk from tests/common.py
<hazmat> SpamapS, interesting it doesn't do the same under 2.7
<SpamapS> hazmat: yep totally weird
<SpamapS> hazmat: further, python2.6 can't run the test suite.
<SpamapS> http://paste.ubuntu.com/678082/
<SpamapS> Tho I'm pretty sure thats my fault ;)
<hazmat> SpamapS, easy enough to fix though
<SpamapS> agreed
<niemeyer> Lunch time..
<_mup_> ensemble/local-ubuntu-provider r336 committed by kapil.thangavelu@canonical.com
<_mup_> switch test runner to using lib/zookeeper
<SpamapS> FAILED (failures=5, errors=8, successes=1417)
<SpamapS> Clearly we haven't been running the test suite against python 2.6
<m_3> SpamapS: we mentioned the other day freezing ensemble at something like 305 for oneiric?
<m_3> there're changes in 326 that're critical for config.yaml
<SpamapS> 306 is the version in Oneiric
<m_3> can we make an exception?
<SpamapS> If it fixes a critical bug I can either cherry pick it in or go ahead and do a FFE
<m_3> either cherr
<m_3> gotcha
<m_3> it's pretty important... config.yaml defaults aren't picked up until then
<SpamapS> got a bug reference? I can mark it as affecting the package (helps build the case for the FFE)
<m_3> so every formula that has a config.yaml requires you to pass a --config
<SpamapS> not that we have to build a massive case
<SpamapS> its universe and unseeded.. so there's really no reason for people to complain
<SpamapS> m_3: just give me the bug # :)
<m_3> ok, lemme find it... thanks!
<SpamapS> m_3: you can mark it as affecting the package too, I think.
<m_3> SpamapS: #828152
<_mup_> Bug #828152: default formula config values not available to hooks <Ensemble:Fix Released by bcsaller> < https://launchpad.net/bugs/828152 >
<SpamapS> m_3: yeah try clicking "Also affects distribution"
<m_3> I think I did
<m_3> ubuntu ensemble
<m_3> should I 'target to a milestone' too?
<SpamapS> m_3: cool thanks
<jcastro> jamespage: aha! So it wasn't exposing the port? :)
<jamespage> well it was broken as well - but should be more stable now upstream have released 1.0
<jcastro> ok so should I give it a shot now?
<jamespage> sure - it should work
<jamespage> just finished my testing
<jcastro> cool
<jcastro> want me to blog it?
<jcastro> I think people would use the heck out of this
<SpamapS> jamespage: the jenkins formula?
<SpamapS> I added an open-port call last week.
<jamespage> nope- the etherpad-lite one
<SpamapS> ahh cool. :)
<SpamapS> whats the official position on python 2.6 support?
<SpamapS> Should I file bugs for all the broken stuff? :-P
<niemeyer> SpamapS: https://blueprints.launchpad.net/ensemble/+spec/formula-store
<niemeyer> SpamapS, hazmat: Both of you have demonstrated interest in blueprints several times
<niemeyer> My feeling is that it'll increase the workflow burden without much benefit
<niemeyer> But I'm willing to try it out..
<SpamapS> I expect it will add a bit of management burden yes. The idea is to provide visibility from outside your team, so that dependent work can carry on without interruptions on either side.
<hazmat> niemeyer, afaics the main benefit is just tracking features instead of implementation, but that's not an obvious one to the person doing the implementation, because they intuitively know it.
<niemeyer> SpamapS: The kanban provides a lot of visibility already
<niemeyer> SpamapS: and it's not like the major features we're working on change every day
<hazmat> niemeyer, not at a feature level
<SpamapS> last I looked, there was no kanban for eureka.. but maybe I looked in the wrong place?
<niemeyer> SpamapS: The conversation we had at the sprint is still valid
<hazmat> SpamapS, link in channel topic
<SpamapS> ah
<jimbaker> http://people.canonical.com/~niemeyer/eureka.html
<SpamapS> Yeah this is good for right now.. it doesn't show anything that is on the "not now but later" list.. 
<SpamapS> and it doesn't speak to dependencies.. I can't see there that LXC work has to finish before multi-unit-per-machine
<niemeyer> SpamapS: You can find that here: https://bugs.launchpad.net/ensemble
<SpamapS> Let me step back. The reason BP's are good is just to group efforts that don't fit into one implementation piece like a bug.
<SpamapS> Its just an idea to help get over-arching change implemented.. not something I think is critical to the visibility of the project as a whole.
<SpamapS> The visibility problem that I see isn't real.. I think you're right that the kanban is "whats happening now" and the bugs list is "everything else"
<niemeyer> SpamapS: I understand.. we went through that before.  The point I continue to make is that there's a point in the gradient of documentation boilerplate where it's easier to talk to someone on IRC.
<hazmat> i've experimented with just using tags as easy way to track larger works.. all the security impl bugs are tagged 'security'
<SpamapS> yeah thats probably the lightest weight solution, and could even be visualized very easily
<niemeyer> hazmat: It feels a bit off to be using tags to track chunks of work like that
<niemeyer> hazmat: Tags should continue to be useful across the lifetime of the project
<niemeyer> hazmat: security seems to make sense
<niemeyer> hazmat: But e.g. "handle-ec2-firewall" is not a nice tag
<hazmat> but perhaps firewall does
<niemeyer> hazmat: Sure, but you're addressing a different problem
<SpamapS> Right, any defects in the firewall handling would be valid years later.
<niemeyer> hazmat: You may have firewall in orchestra and in zookeeper handling of security
<niemeyer> hazmat: It's orthogonal to the concept of features
<niemeyer> and so is security
<SpamapS> Likewise, if it were decided that disconnected operation is important.. one could identify the steps to do that as bugs.. and anything in the future that went against that, can also have said tag.
<SpamapS> I agree that its orthogonal to features..
<SpamapS> But all the over arching changes that I can think of are actually just wide spread defects .. ;)
<niemeyer> SpamapS: This is a good use for blueprints indeed
<SpamapS> I think it would be worthwhile to use BP's to group bugs and show the dependencies in long term goals.
<SpamapS> but... not worth more than coding.. so.. I think I'll just stop producing more bytes of text for all of you to consume.. :)
 * SpamapS returns to testing like its 1999
<niemeyer> SpamapS: I agree, this sounds like a sane approach, as long as we agree on what the blueprints are before starting to dump things on them
<niemeyer> SpamapS: It's also different from the idea of blueprints I was against
<_mup_> ensemble/lib-lxc-merge r336 committed by kapil.thangavelu@canonical.com
<_mup_> merge lxc-lib
<_mup_> ensemble/lib-zookeeper r337 committed by kapil.thangavelu@canonical.com
<_mup_> extract managed zk into separate branch
<_mup_> ensemble/lib-zookeeper r338 committed by kapil.thangavelu@canonical.com
<_mup_> rename managed zk module to avoid name conflict
<_mup_> Bug #837601 was filed: a zookeeper class for reuse by both local provider and tests <local-dev> <Ensemble:New for hazmat> < https://launchpad.net/bugs/837601 >
<_mup_> ensemble/go-formulas r11 committed by gustavo@niemeyer.net
<_mup_> Reorganized files so meta handling has its own file/tests.
<hazmat> bcsaller1, got time for a catch-up?
<niemeyer> hazmat, bcsaller1: Quick interface review: http://paste.ubuntu.com/678276/
<bcsaller1> hazmat: yeah
<bcsaller1> niemeyer: still leaving out the regex stuff (validator), is that intended?
<niemeyer> bcsaller1: No, I was just going to figure it was missing down the road :)
<niemeyer> The title type is wrong as well, but otherwise the interface will look like this
<niemeyer> I guess default should be anything rather than string
<niemeyer> These issues will be sorted out once I actually implement it, though
<hazmat> niemeyer, agreed, else looks fine... is the goal to turn the config into a schema?
<hazmat> for validation?
<bcsaller1> hazmat: a schema is already used for that 
<niemeyer> hazmat: It is.. will work the same way as the Python side in that regard
<niemeyer> hazmat: Well.. maybe, I guess what you say may be interpreted in a different way
<niemeyer> hazmat: We have actual validator functions to validate the input against the config schema
<niemeyer> hazmat: Against the config, sorry
<niemeyer> hazmat: The config itself has a schema, though 
<hazmat> right, which this is reading, i was just curious if this was going to use a separate mechanism to validate values than the go schema work already done
<niemeyer> hazmat: So there are two levels.. config has a format and a schema to validate it.. the validated config *value* is used to validate the user input using validator functions
<niemeyer> hazmat: That's how it works in Python, at least..
<niemeyer> hazmat: What you say makes me curious though..
<niemeyer> hazmat: Maybe we can use the schema to validate the input too
<hazmat> niemeyer, indeed, but config is a schema, used to validate the user input
<niemeyer> hazmat: Right
<niemeyer> hazmat: It's not how it works today, but I'll see if that may simplify things as I write tests
<hazmat> cool
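A toy illustration of the two levels being discussed (entirely hypothetical names, not Ensemble's actual API): the option description is itself checked against a fixed schema, and only then used to validate user input.

```python
# Level 1: the formula's config option descriptions have a schema of
# their own. Level 2: a validated description then checks user input.
# All names here are invented for illustration.
TYPES = {"string": str, "int": int, "float": float}

def validate_description(desc):
    # Level 1: the option description must name a known type.
    if desc.get("type") not in TYPES:
        raise ValueError("invalid option description: %r" % (desc,))
    return desc

def validate_value(desc, value):
    # Level 2: the user-supplied value must match the described type.
    desc = validate_description(desc)
    if not isinstance(value, TYPES[desc["type"]]):
        raise ValueError("expected %s, got %r" % (desc["type"], value))
    return value
```

hazmat's question is essentially whether level 2 can reuse the same generic schema machinery as level 1, rather than ad-hoc validator functions.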
<_mup_> ensemble/go-formulas r12 committed by gustavo@niemeyer.net
<_mup_> Got config support skeleton in place, and a failing test.
<hazmat> bcsaller1, phone work for you?
<bcsaller1> its fine, I'm on skype too
<robbiew> bcsaller1: cool if we push our call back to the top of the hour?
<bcsaller1> robbiew: was on the phone, anytime is fine
<_mup_> ensemble/lib-files r339 committed by kapil.thangavelu@canonical.com
<_mup_> extract file storage into providers.common for use by local and dummy providers.
<_mup_> Bug #837692 was filed: A common provider file storage for local and dummy providers <local-dev> <Ensemble:In Progress by hazmat> < https://launchpad.net/bugs/837692 >
<_mup_> ensemble/local-machine r340 committed by kapil.thangavelu@canonical.com
<_mup_> extract local machine into separate branch..
<niemeyer> >>> sio.truncate()
<niemeyer> >>> sio.getvalue()
<niemeyer> 'foo'
<niemeyer> This is a pretty weird behavior in StringIO
<niemeyer> Ah, I misunderstand the interface
<niemeyer> truncate(0) is what I'm looking for
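For the record, the surprise above is reproducible with Python 3's io.StringIO as well: truncate() with no argument cuts at the current position, not at zero.

```python
import io

sio = io.StringIO()
sio.write("foo")          # stream position is now 3
sio.truncate()            # no argument: truncate *at the position*, a no-op here
assert sio.getvalue() == "foo"

sio.truncate(0)           # this is what actually empties the buffer...
sio.seek(0)               # ...though the position must be reset separately
assert sio.getvalue() == ""
```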
<_mup_> ensemble/no-regex-option r336 committed by gustavo@niemeyer.net
<_mup_> Some consistency clean up in the config handling:
<_mup_> - Removed regex type. It's uneven with the other types (the value of a
<_mup_>   regex option is not a regex), and it's binding the generic formula
<_mup_>   definition to a specific language.
<_mup_> - Renamed 'str' type to 'string'. There's no value in saving the 3 chars
<_mup_>   when we have e.g. 'float' already in the same context, and 'string'
<_mup_>   is actually readable while 'str' is not.
<_mup_> - Introduced backward compatibility handling for 'str' so that it
<_mup_>   continues to work, but the user is warned about the fact that it's
<_mup_>   obsolete so that we can pester authors to rename it timely before it's
<_mup_>   too late and we can't go back. Tested this appropriately.
<_mup_> Bug #837708 was filed: 'str' in config.yaml should be renamed to 'string' <Ensemble:New> < https://launchpad.net/bugs/837708 >
<_mup_> Bug #837710 was filed: regex type should be dropped <Ensemble:In Progress by niemeyer> < https://launchpad.net/bugs/837710 >
<_mup_> ensemble/no-regex-option r337 committed by gustavo@niemeyer.net
<_mup_> Fixed examples.
<ameetp> Does anyone know if this tutorial is still valid?  https://ensemble.ubuntu.com/docs/user-tutorial.html
<ameetp> I see the instances in my AWS console, but the relation never occurs.  well at least from what I see (rather don't see) in the debug logs
<niemeyer> ameetp: It should be valid, yeah
<niemeyer> ameetp: Note that the debug-log has to be started upfront
<ameetp> niemeyer, yes, I started the logs in-line with the tutorial.  I don't see any relevant output
<niemeyer> ameetp: Can you please paste it somewhere?
<ameetp> niemeyer, https://pastebin.canonical.com/51984/
<hazmat> ameetp, you started the debug-log before you launched the machines?
<niemeyer> ameetp: That's pretty strange.. it's a completely empty log
<hazmat> or before adding the relation?
<niemeyer> hazmat: Just asked that
<ameetp> niemeyer, I was hoping to at least see 'Machine:1: ensemble.agents.machine DEBUG: Downloading formula' when I ran 'ensemble deploy --repository=examples mysql'
<niemeyer> ameetp: Yeah, it's quite weird.. it should definitely be showing that
<niemeyer> ameetp: Unless you deployed and then started the command
 * hazmat heads out to dinner, bbiab
<ameetp> niemeyer, no.  I started the command after doing 'ensemble bootstrap' in another console
<niemeyer> ameetp: Ok.. then you kept the command running, and did something else in a different terminal?
<ameetp> niemeyer, correct
<niemeyer> ameetp: What happens if you deploy mysql again?
<niemeyer> ameetp: Or add the relation?
<niemeyer> hazmat: Enjoy
<ameetp> niemeyer, https://pastebin.canonical.com/51985/
<niemeyer> ameetp: Use a different name..
<niemeyer> ensemble deploy --repository=examples mysql mysql2
<niemeyer> ameetp: Second name is the service name
<niemeyer> ameetp: It defaults to the same name of the formula, if you don't provide it
<ameetp> niemeyer, https://pastebin.canonical.com/51986/
<ameetp> i see a new instance in the AWS console as well.  
<niemeyer> ameetp: The debug-log seems completely off somehow..
<niemeyer> ameetp: Can I access the bootstrap instance to have a look?
<ameetp> niemeyer, sure.  I can't give access to the system itself, but I can give you a file if you would like.  Or I can open a bug.  I went down this path because the relation never completes for me
<niemeyer> ameetp: What's the status?
<niemeyer> ameetp: You can open a bug, but without further information we won't be able to do much
<ameetp> niemeyer, state always remains null.  Even after 4 hours 
<niemeyer> ameetp: :-)
<niemeyer> ameetp: It's quite fast when things are working
<niemeyer> ameetp: Debug log is the way to check this out, but it feels like things are really hosed there
<niemeyer> ameetp: Sorry about that.. can't really do much without further debugging
<ameetp> niemeyer, is accessing the system the only way to debug?
<niemeyer> ameetp: Telepathy would be the other option, but I'm not very good with that ;-)
<ameetp> niemeyer, :)  Okay let me see if I can get someone to recreate this 
<ameetp> niemeyer, anyway.  Thanks for your assistance!
<niemeyer> ameetp: No problem, and sorry for the trouble
<_mup_> Bug #837724 was filed: Relation already exists error message is bad <Ensemble:New> < https://launchpad.net/bugs/837724 >
<niemeyer> ameetp: Btw, if you manage to run this again and reproduce, please ping me
<ameetp> niemeyer, sure
<_mup_> ensemble/go-formulas r13 committed by gustavo@niemeyer.net
<_mup_> Config parsing begins.
<niemeyer> Alright.. I'm stepping off
<niemeyer> May be back later for other activities
<niemeyer> Cheers everyone
 * niemeyer waves
#ubuntu-ensemble 2011-08-31
<_mup_> ensemble/fix-functional-testing r335 committed by jim.baker@canonical.com
<_mup_> More robust ftests
<_mup_> ensemble/fix-functional-testing r336 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<fwereade> actually don't worry :)
<niemeyer> fwereade: Hey!
<fwereade> niemeyer: heyhey!
<fwereade> niemeyer: how's life?
<niemeyer> fwereade: Awaken
<fwereade> niemeyer: sorry, did I miss something important?
<niemeyer> fwereade: Hmmm.. why do you ask?
<fwereade> niemeyer: ah, I wasn't sure whether to read "Awaken" as an imperative
<niemeyer> Erm..
<niemeyer> fwereade: I don't think I even know how to interpret that way ;-)
<fwereade> niemeyer: I'd see "awaken" as a verb and "awake" as an adjective
<fwereade> niemeyer: now I think of it, "awoken" would be a ...er, a participle? I can't actually remember much grammar
<niemeyer> fwereade: Ohhh.. got it
<fwereade> niemeyer: computer languages are easier ;)
<niemeyer> fwereade: Yeah.. see, it was my grammar being bad :)
<fwereade> niemeyer: no worries, I take things far too literally ;)
<fwereade> niemeyer: did we agree it would be ok to delete bin/ensemble-make-image and debian/ec2-build?
<niemeyer> fwereade: We did not.. it sounds fine to me, but we should check with hazmat first as he's created and used these the most
<fwereade> niemeyer: I'll assume they should stay for now then, all it costs me to keep them is a couple of global constants
<hazmat> fwereade, delete away
<fwereade> hazmat: ah, sweet :D
<hazmat> and g'morning to all
<fwereade> hazmat: and a good morning to you
<hazmat> i finally gave up on screen, and switched to using tmux, it's very nice
<niemeyer> hazmat: Morning, and welcome to tmux
<hazmat> jimbaker, i noticed that the unit test in py 2.7 has some very nice builtin layer support at module and class level
<hazmat> niemeyer, one tmux question i wondered about, and just curious if you knew, is it possible to reload the conf file for an existing session?
<hazmat> found it.. source-file ~/.tmux.conf
<hazmat> awesomeness
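A common convenience on top of the command hazmat found (a sketch; the key binding is a convention, not a tmux default):

```tmux
# ~/.tmux.conf fragment: reload the config in the running session
# with prefix+r instead of typing the command each time.
bind r source-file ~/.tmux.conf \; display-message "tmux.conf reloaded"
```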
<niemeyer> hazmat: Hmm
<niemeyer> hazmat: Good question
<niemeyer> hazmat: Ah, you found it, cool
<niemeyer> hazmat: I haven't used that yet
<hazmat> niemeyer, makes testing new settings with tmux rather easy also find some nice tools from the tmux home page.. this one in particular fits my usage pretty nicely.. https://github.com/aziz/tmuxinator
<niemeyer> hazmat: True.. checking
<hazmat> also in the same vein but with more support for pane splitting https://github.com/remiprev/teamocil
<niemeyer> hazmat: Oh, neat
<niemeyer> hazmat: I create new tabs as I go
<hazmat> niemeyer, i normally do.. but crashes are frequent enough that having some auto-restore functionality helps me retain context. i'm trying to put in a cron job that uses psutil to sniff the process hierarchy for the windows and store them in the yaml file, so upgrade instability isn't as bad..
<hazmat> it's sort of like that software you mentioned a while back.. that has this awesome resume-after-crash system to take you back to exactly what you were doing before the crash  ;-)
<niemeyer> hazmat: Wow, so freezing a running tmux?
<niemeyer> hazmat: Yeah.. Cinelerra :-)
<hazmat> niemeyer, it's not really freezing, it's just recording the running programs to restart them.. i'm using an emacs per window, and emacs is auto-saving its open files list, so on startup i'll get back my open buffers... if the program didn't support resumption, then the context is lost..
<niemeyer> hazmat: I see.. quite neat even then
<fwereade> hey, hazmat, can I delete those 2 scripts in trunk as a trivial?
<hazmat> fwereade, sounds good
<fwereade> hazmat: cheers
<jimbaker> hazmat, thanks, i will take a look at that
<hazmat> jimbaker, yeah.. i'm not sure if its useful given how twisted is doing the test running and we need the reactor support
<hazmat> maybe
<jimbaker> hazmat, generally it should just mix together
<jimbaker> so twisted trial basically is managing the test runner, which is one component that can be customized in pyunit
 * jimbaker now has a much deeper appreciation for how trial works
<SpamapS> awesome.. I just managed to shutdown my jenkins environment, taking all the data with it...
<SpamapS> So, this is a good chance to answer the "how are we protecting peoples' data?" question. :)
<SpamapS> I did make an EBS snapshot of said data.. but there's basically no way to tell ensemble to re-gain that environment.
<niemeyer> Projects in Launchpad have name, display name, title, summary and description (!)
<niemeyer> SpamapS: Indeed, which is why I'm advocating for a while that we call this command destroy-environment rather than shutdown
<_mup_> Bug #838215 was filed: "shutdown" must be renamed to "destroy-environment" <Ensemble:Confirmed for jimbaker> < https://launchpad.net/bugs/838215 >
<jimbaker> sounds like a good plan
<jimbaker> lengthy command, matches the internal api, no mistaking this is going to do something extreme
<hazmat> well i don't think the rename would have helped SpamapS 
<jimbaker> also parallels destroy-service
<hazmat> the rename sounds fine
<hazmat> addressing the underlying issue of volume management is probably a larger task
<hazmat> maybe ;-)
<jimbaker> well it's an interesting question of what would destroy that ;)
<jimbaker> maybe have a command that forces you to read what it asks. to destroy everything, please sum 10 + 32. like the gmail plugin to avoid inadvertent emails
<SpamapS> the rename would not have helped me, no
<SpamapS> but +1 for renaming it
<SpamapS> jimbaker: haha.. true, I did put 'echo y | ensemble shutdown' in my test scripts
<SpamapS> the difference is I was running my test script on my machine, instead of on the jenkins machine, so it shutdown my default environment
<SpamapS> which I had selected as the jenkins env because I was tired of adding -e jenkins
<SpamapS> Flat out..
<SpamapS> help me recover the data easily
<SpamapS> Right now what I have to do is start a new raw instance, mount the snapshot, and rsync it back to a new bootstrapped/deployed env
<jimbaker> SpamapS, yeah i figured as much. command histories, scripts, automation - all good, until they turn bad on oneself
<jimbaker> one good thing is that shutdown now tells you which environment you will be deleting. a small detail for sure
<SpamapS> Honestly, there's nothing good about this. :-P
<SpamapS> Thought...
<SpamapS> snapshot all volumes before shutdown.
<SpamapS> Unless you configure the environment not to.
<SpamapS> Luckily most of what I had done is captured in a bzr tree. :)
<_mup_> Bug #838238 was filed: No matter what version of ensemble you have on client, deployed instances get the one from the PPA <Ensemble:New> < https://launchpad.net/bugs/838238 >
<niemeyer> SpamapS, hazmat: I think the rename would have helped only in the sense that there's no return out of a destroy-environment
<niemeyer> Makes the outcome more obvious
<SpamapS> niemeyer: I intended to destroy the environment I had just created.
<SpamapS> niemeyer: the jenkins service running inside it was collateral damage.
<niemeyer> SpamapS: The jenkins was part of the environment..
<SpamapS> Because they happened to be using the same s3 control bucket and name.
<niemeyer> SpamapS: It's a bit like saying.. I wanted to format my disk, I just didn't want to lose that one file
<SpamapS> It picked up the .ensemble/environments.yaml from my home dir automatically.. accidentally.
<SpamapS> This is why I filed the bug request for a "freeze" or "lock" command.
<SpamapS> let me control changes to an environment so automated scripts can't screw it up
<niemeyer> SpamapS: We could change destroy-environment so it asks
<SpamapS> It does ask
<SpamapS> and I echo'd y in
<niemeyer> SpamapS: Well.. :-)
<SpamapS> because this was an automated script
<niemeyer> SpamapS: Sorry about that then :-)
<SpamapS> I'm not blaming ensemble, I'm suggesting that this is the first in a long line of "WTF!" questions that will arrive here when ensemble usage begins in earnest.
<niemeyer> SpamapS: I understand, and while I share your pain, I think it was an operator issue on that one case 
<niemeyer> SpamapS: rm -rf / does bad things too
<SpamapS> I agree 100%
<SpamapS> lol.. this is going to be fun
<SpamapS> I'm going to /part whenever this question comes up. :)
<SpamapS> I burn easily, the flames will be intense. ;)
<niemeyer> SpamapS: Do you have a recommended solution for the problem?
<SpamapS> Its so easy to roll out on ensemble.. it shouldn't be so easy to destroy it. I'd actually be in favor of making shutdown/destroy environment a disabled command until you mark an environment as "ephemeral" or something like that.
<SpamapS> But then that will happen in automation too.. hrm.
<niemeyer> SpamapS: Exactly what I was writing
<SpamapS> Ahh, maybe you have to --ephemeral at boot...
<SpamapS> bootstrap rather..
<SpamapS> And then the env name/controlbucket/etc are all generated.
<niemeyer> SpamapS: What happens if --ephemeral wasn't used?
<SpamapS> No shutdown
<SpamapS> no destroy env
<SpamapS> pull it apart one piece at a time
<niemeyer> SpamapS: Ugh..
<SpamapS> I'm shooting from the hip here.. haven't given it much thought..
<SpamapS> but it feels like we're playing just a little fast and loose with data when we could put a simple padlock on it.
<SpamapS> Maybe the padlock is to just make it clear that the default environment is not to be used in automation.
<niemeyer> SpamapS: I'm fine with adding yet another lock command, if you think that'd help
<niemeyer> SpamapS: Hmm
<niemeyer> SpamapS: There's another option.. disabling automation of destroy-environment..
<niemeyer> SpamapS: This sounds better, actually
<SpamapS> if not isatty .. say no
<niemeyer> SpamapS: Except if we want to test things in an automated way! :-)
<niemeyer> SpamapS: --i-am-sure-about-that-do-it-right-now-damn-it
<SpamapS> Yeah, mdadm had some options like that
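The isatty guard SpamapS sketches above ("if not isatty .. say no") might look like the following in Python. This is an illustrative sketch only, with made-up names; it is not the actual ensemble implementation, and as niemeyer notes, an override flag would still be needed for legitimate automated testing:

```python
import sys

def confirm_destroy(env_name, stream=None):
    """Refuse to confirm destroy-environment unless input comes from a
    real terminal. Hypothetical helper, not actual ensemble code."""
    stream = sys.stdin if stream is None else stream
    if not stream.isatty():
        # Piped input (e.g. `echo y | ensemble destroy-environment`)
        # is rejected outright.
        raise SystemExit("destroy-environment %s: refusing "
                         "non-interactive confirmation" % env_name)
    return stream.readline().strip().lower() == "y"
```

A scary long-form flag along the lines niemeyer jokes about below could then bypass the check for automated test runs.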
<SpamapS> I'm just envisioning netflix admins running their test automation script and accidentally shutting down.. everything.
<SpamapS> It almost seems more sane to just get rid of it. Keep track of your services, destroy them one by one. Terminate the machines one by one..
<SpamapS> For my automated test script, I actually did have a shutdown equivalent for this very reason early on..
<niemeyer> SpamapS: We created this option because we need it, very often
<SpamapS> I kept track of all my machine ids and service names and destroyed/terminated them one by one.
<niemeyer> SpamapS: Making it painful at all times to protect the few that will automate the "yes I am sure" measure as you've done is not a good solution IMO
<hazmat> niemeyer, so regarding the bridge, it looks like libvirt does a nat to it by default, with some forward rules; those aren't compatible though with a network per environment.. only one nat active at a time
<niemeyer> hazmat: That sounds fine for now..
<niemeyer> hazmat: I wouldn't like to get in that situation just because we were not careful to namespace, but we don't have to waste lots of cycles on the problem
<hazmat> niemeyer, it also means that we're just going to be piggy backing on the libvirt default network.
<hazmat> i'm looking into the libvirt network options a bit more
<hazmat> i've got the network abstraction on libvirt done, so i'd like to keep the isolation if possible
<niemeyer> hazmat: Yeah, just wrapping what's there sounds very neat indeed
<hazmat> niemeyer, well if we're just using the default, there's not much point to wrapping, as we don't need to change anything
<hazmat> outside of namespacing the containers correctly
<hazmat> but effectively the network setup is a no-op
<niemeyer> hazmat: I meant just reusing what's there
<niemeyer> hazmat: As you described
<_mup_> ensemble/local-network r341 committed by kapil.thangavelu@canonical.com
<_mup_> a network abstraction for starting/stopping/defining libvirt networks.. not going to use but i wanted to capture it.
<SpamapS> sorry I disappeared.. my unity/wifi/oneiric melted down
<SpamapS> niemeyer: We're packing a self destruct into ensemble. Its useful, for sure. However, I wonder if we should just leave it out for non-developers. :-P
<niemeyer> SpamapS: For now I think we're good.. if you a) Have the proper AWS keys; b) Have the proper environments.yaml file; c) Type destroy-environment; d) Confirm with y...  I think it's fine to destroy it.
<niemeyer> SpamapS: I'll pay a beer to everyone who does that by mistake, so I owe you one
<SpamapS> I think you're ok, because I basically packed a grenade w/ a pulled pin in my script.
<SpamapS> I may actually have a better suggestion, which is to make it easier to "create" environments in an automated fashion... rather than having to type them into the yaml file.
<SpamapS> That way people won't be tempted to just re-use the same env over and over.
<niemeyer> SpamapS: yeah, stacks ftw
<SpamapS> well yes stacks.. but even more isolation.. just being able to say --config-dir=/tmp/ensemble.asdfy431k would make it so I can be more careful in my automated scripts not to use a static environments.yaml ever
<SpamapS> I mean ultimately the responsibility comes down on the automator to be careful. :)
<SpamapS> But if I have to touch files in the home dir for automation.. thats a dangerous game.
<niemeyer> SpamapS: We explicitly try to avoid that
<niemeyer> SpamapS: The information in a local admin's laptop should be only the necessary to reach the environment
<niemeyer> SpamapS: The authoritative version of the configuration should live in the environment itself, with HA etc
<niemeyer> SpamapS: So ultimately the issue is that there's too little in the local env yaml
<SpamapS> Ok, new problem then. I want to put safeties in my automated scripts.. something like.. if this env already exists, STOP.
<SpamapS> hm, does status give an error if its not bootstrapped?
 * SpamapS checks
<SpamapS> hm, just a generic "1"
<SpamapS> could be lots of reasons for that.. no way to be sure it means not bootstrapped other than grepping.. :P
<niemeyer> SpamapS: Agreed.. we should do better on rcs
<niemeyer> SpamapS: and also on helper commands
<niemeyer> SpamapS: is-bootstrapped or whatever
<SpamapS> Well for now, I'll grep for that exact message.. hopefully that is enough to prevent another damnit moment.
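The guard SpamapS describes, grepping the status output since the exit code alone is ambiguous, can be kept in one small function. The "not bootstrapped" message text here is an assumption for illustration; as the discussion notes, matching message text is fragile, which is exactly why a dedicated is-bootstrapped helper command would be better:

```python
def classify_status(returncode, stderr):
    """Interpret the result of an `ensemble status` invocation for use
    as a script guard. Sketch only: the error-message text is assumed,
    and matching it is fragile since messages may change."""
    if returncode == 0:
        return "bootstrapped"
    if "not bootstrapped" in stderr.lower():
        return "not-bootstrapped"
    # A generic exit status of 1 can mean many things (bug 697093).
    return "unknown-error"
```

A test script would then refuse to run its bootstrap/deploy/destroy cycle unless the answer is "not-bootstrapped", so it can never tear down an environment it did not create.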
<_mup_> ensemble/rename-shutdown-command r337 committed by jim.baker@canonical.com
<_mup_> Code changes
<niemeyer> SpamapS: Hmm..
<niemeyer> SpamapS: But bootstrap shouldn't really cause any issues
<niemeyer> SpamapS: If you do it again on a bootstrapped env, that is
<niemeyer> SpamapS: This would be a major problem
<niemeyer> SpamapS: Rather than grepping for logs that may change, I suggest just trying it again
<SpamapS> Right, my test script is basically  bootstrap, deploy deploy relate relate , shutdown ... 
<SpamapS> I want to make *SURE* that I am the one doing the bootstrapping, that it wasn't already done
<SpamapS> I'd prefer to be able to use a generated yaml file for this.. but I can't set the config dir, and I don't want to append yaml to the environments.yaml ..
<niemeyer> SpamapS: Just if [ $? -ne 0 ]; then FAIL fi; after bootstrap?
<SpamapS> I run with set -e
<SpamapS> existing env is not an error in bootstrap
<niemeyer> SpamapS: Oops.. that's certainly a bug
<hazmat> SpamapS, define $HOME in your test script if you want better isolation
<hazmat> SpamapS, yeah.. the return codes from the ensemble cli are suspect
<hazmat> i filed a bug a while back regarding it
<hazmat> actually jim did.. bug 697093
<_mup_> Bug #697093: Ensemble command should return nonzero status code for errors <cli> <Ensemble:New> < https://launchpad.net/bugs/697093 >
<_mup_> ensemble/rename-shutdown-command r338 committed by jim.baker@canonical.com
<_mup_> Doc changes
<_mup_> ensemble/rename-shutdown-command r339 committed by jim.baker@canonical.com
<_mup_> Add test for destroying a specified default env
<hazmat> i think i will keep the network abstraction, its still pretty useful
<hazmat> even against libvirt default for exposing attributes and starting if its not started
<_mup_> Bug #838330 was filed: Bootstrap command should error on existing <Ensemble:New> < https://launchpad.net/bugs/838330 >
<hazmat> bcsaller1, the other important parallel piece of work is doing the lxc integration with a serviceunitdeployment subclass
<hazmat> bcsaller1, wiring in the deployment class selection based on machine provider type later
<bcsaller1> hazmat: I'll look at it but I might not be able to start that part today, we'll see 
<hazmat> bcsaller1, fair enough.. i'm pushing through the last of the lxc local provider work, should have all the branches in review for tomorrow, i can start on it tomorrow
<hazmat> its the next step
<hazmat> bcsaller1, lost my irc connection for a moment
<bcsaller1> hazmat: didn't miss anything
<hazmat> bcsaller1, one thing re lxc-lib, i'd like to create a container without running the customize script.. it doesn't seem like that's possible atm
<hazmat> hmm
<hazmat> i'm trying to prime the deb cache during bootstrap so ops like add-unit are fast.. but they're async anyways.. so maybe that's just a bad assumption
<hazmat> yeah.. nevermind re lxc lib.. it makes more sense to just let the agent deal with the lag on the container creation
<smoser> ok... for ensembel cloud-config, you should probably for your sanity put something like:
<smoser> output:
<hazmat> smoser, output: ?
<smoser>  all: "| tee /var/log/cloud-init-output.log"
<hazmat> smoser, i thought cloud-init had log file output builtin now ?
<smoser> yes, it does log to /var/log/cloud-init.log
<smoser> but this will capture output of it
<smoser> and tee it
<hazmat> cool
<smoser> output:
<smoser>  all: [ "| tee /var/log/cloud-init-output.log", "&1" ]
<smoser> that would help me debug some
<smoser> i think maybe http://paste.ubuntu.com/679208/
<smoser> hmm... better with tee -a
<_mup_> ensemble/local-ubuntu-provider r341 committed by kapil.thangavelu@canonical.com
<_mup_> local provider skeleton
<smoser> ok. my first ensemble merge proposal!
<smoser> https://code.launchpad.net/~smoser/ensemble/cloud-init-output-log/+merge/73596
<niemeyer> smoser: Woohay!
<_mup_> ensemble/local-network r342 committed by kapil.thangavelu@canonical.com
<_mup_> expose network attributes from libvirt to local provider
<_mup_> ensemble/local-provider r343 committed by kapil.thangavelu@canonical.com
<_mup_> local provider that bootstraps and shutsdown
<SpamapS> smoser: hey, I was just reading cloud-init's code and I noticed that you're not passing -y to add-apt-repository
<SpamapS> smoser: I believe this may be causing the issue with getting stuck on console... http://paste.ubuntu.com/679240/
<_mup_> ensemble/managed-agent r343 committed by kapil.thangavelu@canonical.com
<_mup_> managed machine agent
<SpamapS> hrm.. bug 819329 is actually pretty tricky
<_mup_> Bug #819329: Tests depend on AWS_ACCESS_KEY_ID being set <Ensemble:Confirmed> <ensemble (Ubuntu):Confirmed> < https://launchpad.net/bugs/819329 >
<SpamapS> txaws is raising an error during a constructor because it can't find this environment variable...
<SpamapS> the conundrum is, there is no sane default..
<SpamapS> So do we set the env var in the tests? Do we shove it into the config during the tests, potentially bypassing code? Hrm.
<hazmat> SpamapS, its a unit test, you can shove any value there
<hazmat> SpamapS, it should never actually use it
<hazmat> SpamapS, it can also be fixed by ensuring its in the environment config of whatever test is using it
<SpamapS> so the real issue is that txaws needs to be mocked?
<SpamapS> since we're not "testing txaws"
<SpamapS> hazmat: if txaws isn't going to be mocked, then we need to test both having it and not having it in the env, so we know when that behavior changes.. 
<hazmat> SpamapS, we're using txaws but we don't ever interact externally during the unit tests
<hazmat> any external interaction is typically mocked
<SpamapS> hazmat: so the simple workaround is just to set them to dummy values before running the test suite. :-P
<bcsaller1> SpamapS: there is a change_environment function on all the test classes that will set env for the duration of the test
<SpamapS> mmk. So to work around the problem, we can certainly just dump AWS_ACCESS_KEY_ID in .. but shouldn't we also *verify* that if its not there, an error is raised?
<bcsaller> if the provider is ec2 for a given environment it should assert the values it depends on at runtime, yes
<SpamapS> Thats the frustrating/confusing part.. I don't see where ec2 was even asserted. :P
<bcsaller> SpamapS: that test is using write_sample_config which includes a default value of ec2, when it looks for the machine provider it triggers the issue you see. I agree, this is not clear
<SpamapS> hrm.. tests don't seem to run cleanly for me anyway.. :-P
<SpamapS> weird.. just running './test' should work pretty well, right?
<SpamapS> I'm getting *piles* of errors
<hazmat> SpamapS, assuming you've got txzk and zk yes
<SpamapS> if I try to run say, ensemble.environment tests.. I get these fails:
<SpamapS> http://paste.ubuntu.com/679281/
<SpamapS> Looks like it lost the HOME override
<SpamapS> ok n/m on that.. I misunderstood change_environment
#ubuntu-ensemble 2011-09-01
<niemeyer> Man.. the interaction between Launchpad pieces to create our workflow is never-ending
<hazmat> :-)
<niemeyer> Seriously.. bugs, bug tasks, milestones, projects, series, proposals, people, branches.. we use it all
<_mup_> ensemble/managed-agent r344 committed by kapil.thangavelu@canonical.com
<_mup_> managed machine agent
<SpamapS> niemeyer: OOHHH YEAHHH <FLEX>
<niemeyer> SpamapS: :)
<hazmat> niemeyer, like i said.. i found it easier to just sync everything local.. 
<hazmat> when working with any sort of report/ui on top of lp
<hazmat> and then factoring in the speed of the average rest call.. it becomes a quick win
<_mup_> Bug #838472 was filed: local-dev needs to manage/inspect the network <local-dev> <Ensemble:In Progress by hazmat> < https://launchpad.net/bugs/838472 >
<_mup_> Bug #838476 was filed: local-dev provider needs a provider machine <local-dev> <Ensemble:In Progress by hazmat> < https://launchpad.net/bugs/838476 >
<_mup_> ensemble/managed-agent r345 committed by kapil.thangavelu@canonical.com
<_mup_> verify agent/process controls can be called multiple times
<niemeyer> hazmat: It'd not help much I think.. even if you sync up everything locally, you still have all the linkage between the multitude of types
<hazmat> niemeyer, you can collapse the types into app oriented docs/objs when syncing
<hazmat> s/types/links
<niemeyer> hazmat: Sure, but you're still processing everything up
<niemeyer> hazmat: Almost done: http://goneat.org/pkg/launchpad.net/lpad/
<niemeyer> It _looks_ like I just need to sort out series now
<_mup_> Bug #838480 was filed: local-dev needs a way to manage a machine agent <Ensemble:New> < https://launchpad.net/bugs/838480 >
<niemeyer> And then "push-review" is done.. :)
<hazmat> niemeyer, nice
<hazmat> niemeyer, re lpad.. looking much more complete
<niemeyer> hazmat: Goes from branch to ready for review with no interaction
<niemeyer> hazmat: Following all of our conventions
<niemeyer> hazmat: Opens the preferred text editor for summary/desc
<hazmat> niemeyer, awesome.. bug report + lp ?
<hazmat> er mp
<niemeyer> hazmat: push, bug report, mp, link everything
<hazmat> niemeyer, great
<hazmat> niemeyer, can i get a copy? :-)  i just did like 4 of those by hand
<niemeyer> hazmat: Oh, most definitely.. I'm doing that for the team
<hazmat> niemeyer, nice that you can group all the lp interactions in parallel as well, really cuts down on the command time i would guess
<niemeyer> hazmat: Exactly.. it does quite a few things in parallel
<niemeyer> hazmat: With nice reporting to stdout and all
<niemeyer> hazmat: It was also a good way to push the Launchpad API a bit forward.. I'll need it to continue on the store
<niemeyer> That was my day today..
<hazmat> niemeyer, yeah. that's great.. i know bcsaller was interested in using it, but needed a lot of boilerplate classes to get going
<niemeyer> hazmat: There's still a ton missing because the API is so huge, but the foundation is more laid down now
<hazmat> niemeyer, good stuff.. i've done pretty well by local dev today.. i'm going to finish up some test for the provider, and move on to the service unit deployment with lxc tomorrow
<niemeyer> hazmat: Wow, neat
<hazmat> i'm very excited to start using it
<hazmat> it will actually totally justify me getting an ssd
<niemeyer> hazmat: I noticed a lot of things flying by today
<niemeyer> hazmat: Feels like a good wave :)
<niemeyer> hazmat: Haha :)
<niemeyer> hazmat: It's worth it :)
<hazmat> niemeyer, yeah... nice to get some interrupted time.. i've been neglecting the review queue though, i'll have a look at that tomorrow morning
<hazmat> i know both you and william have some branches there that need attention
<niemeyer> hazmat: Thanks, there are a few things we have to push forward there indeed
<_mup_> ensemble/local-provider r345 committed by kapil.thangavelu@canonical.com
<_mup_> wire in the managed machine agent and zk state initialization into provider bootstrap
<_mup_> ensemble/local-provider r346 committed by kapil.thangavelu@canonical.com
<_mup_> store zookeeper address into local provider storage, implement connect
<hazmat> i started having this problem w/ lxc
<hazmat>  ssh root@192.168.122.235
<hazmat> root@192.168.122.235's password: 
<hazmat> PTY allocation request failed on channel 0
<hazmat> bcsaller, you seen that?
<bcsaller> thats the issue with add-apt-repo asking for you to press enter I'd guess
<bcsaller> http://paste.ubuntu.com/679240/
<bcsaller> but not sure, if you change it off the ppa (where it doesn't add the repo) does it work?
<hazmat> bcsaller, we're using cloud init in the container now?
<hazmat> or is this something just inherent?
<bcsaller> I don't use it for anything 
<hazmat> these are just on oneiric containers that used to be working for me
<bcsaller> this isn't a cloud-init issue, its a change in python-softwareproperties
<bcsaller> add-apt-repo started asking for confirmation when its got a tty on stdin
<bcsaller> might be able to close stdin first and fake it out, not sure yet
<bcsaller> or echo y | add-apt-repo... 
<bcsaller> since it only checks that it gets a newline
<hazmat> ugh
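The `echo y | add-apt-repo...` workaround bcsaller suggests, feeding the prompt the newline it waits for, can be wrapped generically. This is a sketch under the assumption that piping a newline satisfies the prompt; newer versions of add-apt-repository accept -y instead, and cloud-init ultimately chose to redirect output rather than rely on either:

```python
import subprocess

def run_with_confirmation(command):
    """Run a command that insists on reading a confirming newline from
    stdin (like the add-apt-repository behavior described above) by
    feeding it one -- the `echo y |` trick in function form."""
    # input= supplies stdin, so the child never sees a tty and any
    # "press enter to continue" prompt is satisfied immediately.
    return subprocess.run(command, input="y\n", text=True, check=True)
```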
<_mup_> Bug #838565 was filed: Testing push-review <Ensemble:New> < https://launchpad.net/bugs/838565 >
<_mup_> ensemble/local-provider r347 committed by kapil.thangavelu@canonical.com
<_mup_> lxc utility to get a python dict mapping container name to runtime boolean status
<_mup_> Bug #838566 was filed: push-review must be tested <Ensemble:In Progress by niemeyer> < https://launchpad.net/bugs/838566 >
<_mup_> ensemble/local-provider r348 committed by kapil.thangavelu@canonical.com
<_mup_> shutdown and destroy all containers when destroying environment.
<_mup_> Bug #838568 was filed: push-review must be tested again <Ensemble:In Progress by niemeyer> < https://launchpad.net/bugs/838568 >
<_mup_> ensemble/local-provider r349 committed by kapil.thangavelu@canonical.com
<_mup_> check required packages installed with python-apt
<niemeyer> Alright.. that was a loooong day..
<niemeyer> Have a good time everybody
<hazmat> niemeyer, indeed.. it was.. looking forward to using the new tool.. cheers
<niemeyer> hazmat: Cheers!
<_mup_> ensemble/local-unit-deploy r350 committed by kapil.thangavelu@canonical.com
<_mup_> unit deployment with containers
<smoser> SpamapS, bug 831505 should be fixed. we chose to redirect output to /dev/null rather than add '-y' as older versions of apt-add-repository dont know that flag.
<_mup_> Bug #831505: add-apt-repository blocks cloud-init on oneiric <cloud-init (Ubuntu):Fix Released> < https://launchpad.net/bugs/831505 >
<fwereade> is there any consensus solution (heh, I'd take any solution) to the problem of multiple methods with intentionally identical docstrings (eg MachineProviderBase.connect and ZookeeperConnect.run)?
<fwereade> or is this just another "the price is eternal vigilance" situation?
<hazmat> g'morning
<hazmat> fwereade, i was wondering about that
<hazmat> fwereade, i thought about using a decorator that pulled from the base class doc string
<hazmat> its more relevant with abstract base classes that document the protocol
<fwereade> hazmat: I had a similar thought, but it seemed a bit icky ;)
<fwereade> hazmat: true
<fwereade> hazmat: but then there are cases where the base specifies (say) :class:`ProviderMachine` and the base will want to say :class:`OrchestraMachine`
<hazmat> fwereade, i used to use zope.interfaces as a primary doc point, and let the implementation stand alone, with a see ``interface.method`` 
<fwereade> hazmat: I couldn't see anything that would be useful in enough cases to make it feel worth bothering
<fwereade> hazmat: mm, handy in the right context
<fwereade> hazmat: hey, is sphinx smart enough to connect things with unique names?
<fwereade> hazmat: so :exc:`ProviderInteractionError` rather than :exc:`ensemble.errors.ProviderInteractionError`?
<fwereade> hazmat: feels like a bit too much to expect, but if not I have to go back and fully qualify a bunch of stuff
<fwereade> ah well
<hazmat> fwereade, i believe it is
<hazmat> we need to start genering the docs from source to find out
<hazmat> jim mentioned he had an issue with it earlier
 * hazmat tries to find a place in sf to stay next week for the lxc sprint
<fwereade> hazmat: hm, I guess I should actually try generating them :/
<hazmat> niemeyer, morning
<niemeyer> hazmat: Morning!
<fwereade> morning niemeyer
<niemeyer> fwereade: yo!
<niemeyer> How's everyone doing this morning?
<niemeyer> Or rather, this X? :-)
<hazmat> fwereade, great reviews, thanks
<fwereade> not too bad :) and you?
<hazmat> niemeyer, i'm still recovering from a  late night, but early wake up
<fwereade> hazmat: cool, glad they're helpful -- I got a bit overwhelmed by the test failures so I kinda stopped half way through
<hazmat> fwereade, yeah those suck
<niemeyer> hazmat: Indeed! This must be pretty early for you given how late we were yesterday
<hazmat> fwereade, i'm looking at the control deploy failures now
<hazmat> very odd
<hazmat> KeyboardInterrupt in the middle of a test without hitting a keyboard
<fwereade> hazmat: incidentally, should I WIP them when I mark them "needs fixing"? it seemed presumptuous, but on balance I think it's less helpful *not* to
<fwereade> hazmat: (if that makes sense)
<hazmat> fwereade, i think its appropriate when there's a test failure like this
<fwereade> hazmat: cool, cheers
<niemeyer> fwereade: Yeah, you have to take a decision there, of whether it'd be beneficial for the next reviewer to look at the changed branch, or if it doesn't make a difference
<hazmat> fwereade, it really depends imo, if the changes for the review are substantial enough that a second reviewer would be better to have a look after the changes
<fwereade> hazmat: sounds sensible
<hazmat> looks like there was something in the lxc-lib branch which has a delta to trunk regarding placement policies
<_mup_> ensemble/lib-zookeeper r340 committed by kapil.thangavelu@canonical.com
<_mup_> revert changes from lxc-lib to ensemble/control, resolves some deploy test failures
<jimbaker> hazmat, trial can raise a KeyboardInterrupt in _wait, see twisted.trial.unittest: http://paste.ubuntu.com/679811/
<hazmat> jimbaker, thanks, thats very useful to know
<fwereade> jimbaker: I've just been poking at sphinx docstring-grabbing; hazmat mentioned you might know of some problems
<jimbaker> fwereade, indeed i did experience problems in generating docs using autodoc
<jimbaker> fwereade, so did you just do the basic stuff of setting up autodoc through conf.py and linking in packages to be doc'ed?
<fwereade> jimbaker: basically yes
<fwereade> jimbaker: what was the problem?
<jimbaker> fwereade, when i did that, i ran into a recursion error in docutils
<jimbaker> fwereade, i assumed it was just some bad docstring that was breaking things and just needed to be isolated
<fwereade> jimbaker: heh, haven't hit that yet -- btw, have I missed some option that will cause it to find and document *all* the docstrings?
<jimbaker> fwereade, you expected that? ;)
<fwereade> jimbaker: kinda :p
<fwereade> jimbaker: the "auto" bit of the name "autodoc" got my hopes up a bit
<jimbaker> fwereade, in fact so did i. but the sphinx project apparently thinks this is not so useful. there is a third party script to do that which i found
<jimbaker> fwereade, to be honest, the mechanism they do have which is to require this be done in __init__.py is not so bad
<fwereade> jimbaker: I noticed something like that, and then I started thinking about stripping test docstrings, and then I started doing something else
<fwereade> jimbaker: indeed, quite sensible, may as well go with that then
<fwereade> jimbaker: and I guess I'll build after every change and hopefully spot the recursing thing that way
<fwereade> jimbaker: cheers :)
<jimbaker> fwereade, indeed, i think it should pop out quickly
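The basic autodoc setup being described amounts to a couple of lines in conf.py plus explicit directives in the .rst (or, as jimbaker notes, __init__.py) files -- sphinx will not document everything automatically. Paths and the project name here are illustrative:

```python
# conf.py fragment (illustrative): enable autodoc and make the package
# importable from the docs build directory.
import os
import sys

sys.path.insert(0, os.path.abspath(".."))

extensions = ["sphinx.ext.autodoc"]
project = "ensemble"

# Each module must then still be requested explicitly, e.g. in an
# .rst file:
#
#   .. automodule:: ensemble.errors
#      :members:
```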
<niemeyer> Holy crap.. our review queue is _amazing_
<niemeyer> I'll step out for lunch.. may be a few minutes late for our meeting
<robbiew> fyi, tweaked https://ensemble.ubuntu.com/ a bit last night....holler if there's an issue
<_mup_> ensemble/lib-files r341 committed by kapil.thangavelu@canonical.com
<_mup_> remove use of file_storage.path by some tests.
 * niemeyer is around
<niemeyer> robbiew: Ohh, video
<niemeyer> robbiew: Great stuff, thanks man
<robbiew> np
<niemeyer> bcsaller, jimbaker, fwereade, hazmat: Call time?
<fwereade> niemeyer: sounds goood
<niemeyer> robbiew: Wanna join?
<bcsaller> niemeyer: I'm around
<jcastro> g+? I wouldn't mind listening in if you guys have the room
<niemeyer> jcastro: We can put a few things away to accommodate you for sure
<jimbaker> niemeyer, cool
<robbiew> niemeyer: invite me...I'll join later, on a call
<niemeyer> jcastro: Not entirely sure if I did the right thing when inviting you.. I used your canonical address
<niemeyer> Weird..
<niemeyer> It broke down
<jcastro> odd, never seen that error before
<hazmat> just sent out a new invite
<hazmat> g+ hangouts have been flakey for me on occasion
<bcsaller> my camera isn't working, going to relog
<jimbaker> still waiting on a g+ hangout
<hazmat> these leaking temp files and dirs are a problem
<hazmat> seeing lots of them
<niemeyer> jimbaker: ping
<niemeyer> bcsaller, hazmat: I've renamed a couple of blueprints to remove the redundant "ensemble-" prefix.. feel free to change further please
<jimbaker> niemeyer, hi
<niemeyer> jimbaker: Sent some private comments
<SpamapS> http://paste.ubuntu.com/680072/
<SpamapS> ^^ this test fails for me sometimes.. but not all the time
<SpamapS> actually it fails every time
<hazmat> SpamapS, interesting
<hazmat> SpamapS, that's one of the tests that i found leaks files as well
<hazmat> actually a temp directory
<_mup_> ensemble/ftests r1 committed by gustavo@niemeyer.net
<_mup_> Bootstraped simplistic functional test suite.
<niemeyer> jimbaker: https://launchpad.net/ensemble/ftests
<jimbaker> niemeyer, thanks
<niemeyer> jimbaker: mkdir ensemble-ftests; cd ensemble-ftests; bzr init-repo .; bzr branch lp:ensemble/ftests ftests
<niemeyer> jimbaker: Different line of development
<niemeyer> jimbaker: Branches, merge proposals, etc, all as usual..
<niemeyer> jimbaker: http://bazaar.launchpad.net/~ensemble/ensemble/ftests/files
<niemeyer> jimbaker: Please preserve the structure and spirit in there.. churn should continue to be a shell-script too
<niemeyer> jimbaker: Generate the html out of the output directory.. should be trivial
<jimbaker> niemeyer, ok, taking a look at the skeleton
<niemeyer> jimbaker: Leave environments.yaml and AWS credentials outside of the test suite itself, for now
<niemeyer> jimbaker: So that it works with any deployment method based on who sets it up
<jimbaker> niemeyer, sure
<niemeyer> jimbaker: Not sure if that'll be a good idea, but let's try it
<niemeyer> jimbaker: The tool to generate the html can be in Python or whatever you please, for convenience
<niemeyer> jimbaker: It'll also have to take into account different runs of the tool
<niemeyer> jimbaker: To produce the waterfall
<jimbaker> niemeyer, i'm certainly glad i don't have to write everything in bash
<niemeyer> jimbaker: :-)
<SpamapS> devops_borat.. "If you are still do manual deployment, you need of know in devops we call it 'hand job'"
<jimbaker> niemeyer, is it really necessary to have churn (what we discussed as the runner) be a shell script too?
<niemeyer> jimbaker: Yes, it is, because it'll ensure you keep it simple
<jimbaker> seems like a painful lesson in bash scripting, but ok
<SpamapS> Hmm, so.. working on bug 813112 so I can make the package builds fail on test failure..
<_mup_> Bug #813112: test suite cannot run without an ssh key <Ensemble:New> < https://launchpad.net/bugs/813112 >
<niemeyer> jimbaker: It's more than that.. it's a lesson in simplicity.  I don't expect you'll need much more than what's there to solve our problem.
<niemeyer> jimbaker: If anything at all.
<SpamapS> http://paste.ubuntu.com/680131/
<jimbaker> niemeyer, i'll see what it takes
<SpamapS> That fixes it.. 
<SpamapS> I suppose I should just throw it in the review queue.
 * SpamapS waves his hands... nothing to see here
<jimbaker> niemeyer, in particular, why are log/fatal defined in churn?
<niemeyer> jimbaker: I don't get what you mean.. the functions seem self-obvious to me?
<jimbaker> niemeyer, well why would you need them there? shouldn't it be the responsibility of the tests to produce output, as they do in the examples now?
<niemeyer> jimbaker: Again, I don't understand what you mean..
<niemeyer> jimbaker: I want to be able to type ./churn and have reasonable output presented to me
<niemeyer> jimbaker: log is an incredibly trivial function that simply presents a message with a timestamp in it.. 
<niemeyer> jimbaker: Why does that sound strange?
<jimbaker> niemeyer, never mind
<niemeyer> jimbaker: Done
<niemeyer> I'll get some coffee
<hazmat> niemeyer, is there a reason why the golang-weekly hasn't been updated in a month? does it depend on an upstream weekly release?
<hazmat> yeah.. it looks like no recent weeklies
<niemeyer> hazmat: I've been slacking
<niemeyer> hazmat: I was just updating it _today_ though
<niemeyer> hazmat: After lucio asked about it
<niemeyer> hazmat: tip has been updated.. I'll poke the weekly now
<hazmat> niemeyer, cool.. where does goinstall put files on disk for a go-lang package install?
<hazmat> got it /usr/lib/go/pkg
<niemeyer> hazmat: $GOPATH, if you set it as a user
<niemeyer> hazmat: Otherwise it'll try to put in $GOROOT
<hazmat> niemeyer, ic, just doing the package install
<hazmat> niemeyer, so do you manipulate GOPATH when you have multiple packages in the same source tree
<hazmat> niemeyer, i'm trying to run the formula tests, but it won't find the schema package for example
<niemeyer> hazmat: Hmm
<hazmat> do i need to goinstall the schema package? and then continually update it ?
<niemeyer> hazmat: I may have to tweak that to work with GOPATH
<niemeyer> hazmat: What I've been doing, and which should work, is to make install within the schema package
<niemeyer> hazmat: The detail is that I have a local tip branch 
<niemeyer> of go
<niemeyer> hazmat: This should work with GOPATH as well, though
<niemeyer> hazmat: But I'll have to fix it
<niemeyer> hazmat: The design was made in such a way that you can goinstall straight from Launchpad
<niemeyer> hazmat: goinstall launchpad.net/ensemble/go/schema works
<niemeyer> hazmat: and so does goinstall launchpad.net/ensemble/go/formula
<niemeyer> hazmat: the latter will automatically perform the former even
<hazmat> niemeyer, sure.. but if i'm testing a source tree which has multiple packages, then i'll need to constantly update those
<niemeyer> hazmat: There are just some minor inconveniences right now because $GOPATH is a recent thing and hasn't been introduced across the board in all tools, so I'll have to tweak the Makefile a bit
<hazmat> niemeyer, cool, it would definitely be a benefit to have packages tested together from the same tree, instead of having to version manage the installed against the src tree.
<niemeyer> hazmat: How do you mean?  As I just explained, there are some details right now, but the design is that you'll have to do nothing besides goinstalling
<niemeyer> hazmat: I think we misunderstand each other.. they are in the same tree
<hazmat> niemeyer, goinstalling puts files into the $GOROOT/pkg tree.. but say i have two interdependent package changes in the same src tree, i want to run make test, and have it use the src tree for both packages, not manually update the $GOROOT/pkg beforehand
<niemeyer> hazmat: As I explained, you should use $GOPATH, not $GOROOT, and they'll all be in $GOPATH/src, in the same tree
<niemeyer> hazmat: goinstall will soon also figure changes in dependent packages automatically, and rebuild/reinstall
<niemeyer> hazmat: It's flaky right now because this area is being worked on, but the plan is pretty good
<niemeyer> hazmat: Soon we'll need no Makefile either
 * hazmat looks up gopath
<hazmat> interesting, sort of like an encapsulation of virtualenv into an environment variable
<niemeyer> hazmat: Yeah
<niemeyer> hazmat: If you want to test the code quick & fast without worrying about anything else, you can just install Go from tip with: hg clone https://go.googlecode.com/hg go; cd go; export GOROOT=$PWD; cd src; ./make.bash
<niemeyer> hazmat: This will mean GOROOT is writable
<niemeyer> hazmat: and it's why it works..
<niemeyer> hazmat: I'll sort out the Makefiles in the future, though, so that GOPATH is taken care of in the Makefiles
<niemeyer> hazmat: Sorry for the trouble there
<hazmat> niemeyer, no worries, thanks for the info, i'll see if i can run the tests with that.. the remote version of launchpad.net/ensemble/go/schema is more recent than the one in the branch, so it gets undefined symbols for the renames
<niemeyer> hazmat: Hmm
<niemeyer> hazmat: Have you noticed that schema and formula are in the same branch?
<hazmat> niemeyer, i have
<hazmat> niemeyer, how do you install a src package with goinstall
<niemeyer> hazmat: Have I screwed up something then?  Any branch should be self-consistent
<hazmat> afaics, i have to rename the directories into import paths to use GOPATH
<niemeyer> hazmat: goinstall "server.com/import/path"
<niemeyer> hazmat: Why?
<hazmat> niemeyer, that installs the remote version
<niemeyer> hazmat: Unless you have a local version
<hazmat> niemeyer, the remote version is incompatible with one of the packages in the branch, it needs to compile against the other package in the branch
<hazmat> niemeyer, i'm just running make test from the formulas directory
<hazmat> niemeyer, if there's a way to install the local version of the package that would work
<niemeyer> hazmat: I don't understand.. the remote version works with the remote version.. branches work with the content of the branch
<niemeyer> hazmat: It's like Python
<niemeyer> hazmat: If you have a branch, ensemble.provider works with ensemble.state of the same branch
<niemeyer> hazmat: There's no difference there
<hazmat> niemeyer, is there some path variable you have to define for that? .. i'm trying to run 'make test' in the formula directory
<niemeyer> hazmat: No.. as I explained, what I do is to: cd schema; make install; cd ../formula; gotest
<hazmat> niemeyer thanks, that's what i needed
<niemeyer> hazmat: http://paste.ubuntu.com/680159/
<niemeyer> hazmat: And again, this is still pretty clumsy compared to what we'll have shortly..
<niemeyer> hazmat: Once $GOPATH and gomake come fully to life, it'll be brainless..
<hazmat> niemeyer, indeed that will be much easier
<niemeyer> hazmat: Just gomake inside schema will do everything
<hazmat> niemeyer, http://paste.ubuntu.com/680163/
<niemeyer> hazmat: Define GOROOT
<hazmat> k
<niemeyer> hazmat: export GOROOT=/usr/lib/go
<niemeyer> hazmat: If you're using the package
<niemeyer> gomake would do that for you already, by the way
<niemeyer> But since you'll be playing with it often, it's worth putting it in the profile
<hazmat> niemeyer, i've got the standard go variables defined pointing at the system paths
<hazmat> /usr/lib/go for goroot /usr/bin for gobin
<niemeyer> Cool
<hazmat> i've got  a previous install of the newer remote package version in goroot
<hazmat> which is causing the problem
<niemeyer> hazmat: But it was undefined in that paste, right?
<hazmat> niemeyer, oh.. i assumed the newer version has the renames already in it.. it would have a different error regarding the package import if it wasn't installed
<hazmat> if i remove the pkg from $GOHOME, and do the cd schema; make install; cd ../formula; gotest it fails with http://paste.ubuntu.com/680165/
<niemeyer> hazmat: DEFINE $GOROOT!
<niemeyer> :-)
<niemeyer> Makefile:1: /src/Make.inc: No such file or directory
<niemeyer> There's a $GOROOT variable before the /src/ in that line.. which is empty!
<hazmat> hmm.. but its defined.. make test in the schema works fine
<niemeyer> It's not defined, no
<niemeyer> export GOROOT=/usr/lib/go
<niemeyer> Or whatever it is
<hazmat> http://paste.ubuntu.com/680169/
<hazmat> yeah
<niemeyer> hazmat: It's not defined..
<niemeyer> hazmat: That's the error you're getting
<niemeyer> hazmat: This is the first line in the Makefile:
<hazmat> niemeyer, that's env | grep GO in the shell. it is defined
<niemeyer> include $(GOROOT)/src/Make.inc
<hazmat> ah
<niemeyer> hazmat: Works now?
<niemeyer> hazmat: Can't find a way to link blueprints and bugs in the API :-(
<hazmat> niemeyer, nope.
<niemeyer> hazmat: What's wrong now?
<hazmat> niemeyer, same problems, if i install it complains about undefined schema.M, if i uninstall it.. it complains about it not being found
<niemeyer> hazmat: Can you please paste the error?
<hazmat> niemeyer, it was the sudo.. killing the env  variable i think
<hazmat> nope.. still an error
<niemeyer> hazmat: Same error?
<hazmat> http://paste.ubuntu.com/680178/
<niemeyer> hazmat: It's the same error.. GOROOT not defined
<niemeyer> hazmat: Use gomake.. it'll export the variable for you
<hazmat> niemeyer, sweet! that works
<niemeyer> hazmat: Woohay
 * hazmat grabs some dinner
<hazmat> niemeyer, yeah i don't see any way to attach a bug to the blueprint/spec via the api
<hazmat> bummer
<niemeyer> hazmat: Sadness.. let's see if the LP folks can give us a hand
#ubuntu-ensemble 2011-09-02
<jimbaker> niemeyer, any preferences in terms of python tools for writing html? cheetah?
<niemeyer> jimbaker: I would prefer something very simple
<jimbaker> niemeyer, exactly. so please tell me what you prefer, and i will use it :)
<niemeyer> jimbaker: If possible a single Python file we can embed in the project itself
<niemeyer> jimbaker: Since our needs are pretty trivial.. just iterating over a list building a table with OK/FAILED + links
<niemeyer> jimbaker: Do you know anything like that?
<niemeyer> I'll get some food, biab
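A minimal sketch of the single-file table builder being asked for here; the function name and the (name, passed, log_url) tuple layout are invented for illustration, not the script that was eventually merged:

```python
import html

def results_table(results):
    """Build a bare-bones HTML results table from
    (test_name, passed, log_url) tuples.  Purely illustrative:
    the tuple layout and names are assumptions."""
    rows = []
    for name, passed, log_url in results:
        status = "OK" if passed else "FAILED"
        rows.append(
            '<tr><td>%s</td><td><a href="%s">%s</a></td></tr>'
            % (html.escape(name), html.escape(log_url, quote=True), status)
        )
    return "<table>\n%s\n</table>" % "\n".join(rows)
```

Direct string writes like this are exactly what jimbaker cautions about in the following exchange; escaping every interpolated value is the minimum needed to keep the output well-formed.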
<jimbaker> niemeyer, i don't know of an ultra simplistic tool. typically when it's framed like that, it tends to be done with just direct writes
<SpamapS> OK/FAILED links? isn't that what jenkins does?
<jimbaker> niemeyer, but in my experience that rarely works well. best to use a tool chain that ensures compliant output. i haven't used cheetah, but i know it's used by other people on the ubuntu server team
<jimbaker> SpamapS, correct, we are re-implementing what jenkins does
<jimbaker> SpamapS, regardless this is the task i have
<SpamapS> eh?
<jimbaker> for our functional testing
<SpamapS> Right, subunit doesn't have something ?
<SpamapS> I mean, jenkins is like, the industry standard.
<SpamapS> And we have a.. really kick ass jenkins formula. :)
<jimbaker> SpamapS, agreed that jenkins is the industry standard
<SpamapS> https://wiki.jenkins-ci.org/display/JENKINS/NUnit+Plugin
<SpamapS> I was kind of hoping we'd get to a point where we were graphing all that lovely test coverage
<jimbaker> SpamapS, yes, output to standard nunitxml is pretty trivial. subunit supports this, through subunit2junitxml. however, the functional testing will not be done w/ trial
<SpamapS> aww snap
<jimbaker> SpamapS, this is actually a good thing. trial is pretty wretched for long running functional tests
<jimbaker> SpamapS, niemeyer has written a new test runner that will execute shell scripts (this is the churn we discussed earlier on this channel)
<SpamapS> Awright. Well, I don't really see us matching jenkins for frontend capabilities.. probably worth your time to produce nunit/junit xml and then just chug through it on jenkins.
<jimbaker> SpamapS, however, it might make sense to output junitxml instead. perhaps that could be considered at a later time.
<SpamapS> A new test runner that runs shell scripts?
<SpamapS> Thats also what jenkins does. :)
<SpamapS> Tho its not much of a tool for developers to run in an iterative sense..
<SpamapS> I didn't think the functional tests would be something you'd do in that manner.
<SpamapS> Figured you'd just set jenkins jobs up to point at in progress branches and trunk and run tests on commit
<jimbaker> SpamapS, that makes a lot of sense to do. again the test runner could readily output junitxml
<jimbaker> SpamapS, the more important thing is that we want functional tests directly exercising ensemble commands
<SpamapS> Hey, me too. :)
<SpamapS> On all supported configurations
<SpamapS> (that last bit is integration testing.. but .. details.. ;)
<jimbaker> SpamapS, cool. so consider this as an intermediate step
<jimbaker> trust me, it will be very easy to have this integrated into jenkins in some subsequent step
<SpamapS> Alright, well yeah, I'd think the time spent outputting HTML would be better spent outputting junitxml that jenkins can read and digest.
<SpamapS> On the tests..
<SpamapS> I wrote a few silly bash scripts that deploy/relate/query the formulas.. and james page came up with a cool way to have formulas embed tests ...
<SpamapS> we should have a pow-wow once you've got the test runner doing some stuff.
<jimbaker> SpamapS, sounds like a plan. i definitely want to hear more about james' ideas
<jimbaker> SpamapS, we could readily add your bash scripts
<SpamapS> jimbaker: I need to distill them down to smaller nuggets that can be encapsulated and run wherever.. one challenge is setting up the env w/ an environments.yaml and ssh key and such
<jimbaker> SpamapS, yeah, that's a much more interesting problem. one thing that we can do w/ building out environments.yamls is do things like running several envs in parallel
<jimbaker> SpamapS, i do have a variant of sshclient that ignores possible man-in-the-middle, but it would be much better if we managed this properly
<jimbaker> SpamapS, another thing i did play with was making sshclient much more robust for extremely long waits. at the very least, we might want to have something like a wait-for-ensemble ENVIRONMENT which successively looks at the milestones in an env coming up
<jimbaker> (it might also be a useful ensemble subcommand in the future, don't know)
<SpamapS> Not sure what you mean
<SpamapS> I wrote a wait4state perl script that just runs status
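The wait-for idea discussed above could be as simple as a generic poller; the `check` callable (which would wrap something like parsing `ensemble status` output) is left hypothetical here:

```python
import time

def wait_for(check, timeout=300, interval=5):
    """Poll check() until it returns True or the timeout expires.

    check is any zero-argument callable returning a bool; wiring it
    to actual `ensemble status` output is left to the caller."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False
```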
<niemeyer> jimbaker: This looks nice: https://github.com/defunkt/pystache
<SpamapS> but /win 10
<SpamapS> doh
<jimbaker> niemeyer, looks good to me, thanks!
<jimbaker> SpamapS, but /win...  ?
<SpamapS> irssi
<SpamapS> hey quick python question
<SpamapS> do lists keep an internal length?
<niemeyer> jimbaker: It's an incarnation of this: http://mustache.github.com/
<SpamapS> I'm profiling something that does len(giant_list) a lot and it seems to me that removing these calls makes it a lot faster (and just keeping track of the length in a different variable)
<niemeyer> SpamapS: They do
<niemeyer> SpamapS: Are you doing tons of those calls?
<SpamapS> yes
<niemeyer> SpamapS: function calls in Python are expensive in general.. you may just be seeing that cost
<jimbaker> niemeyer, yeah, mustache is a fine approach
<SpamapS> 5 for every line of a 500,000 line file....
<SpamapS> So I'm not crazy.. its not a massive expense per call.. but it adds up
<jimbaker> SpamapS, doesn't seem to be so much even for cpython
<niemeyer> SpamapS: Yeah, definitely.. you'd notice 2.5M Python function calls for sure
<SpamapS> http://paste.ubuntu.com/680217/
<jimbaker> so there, we have two contrasting viewpoints
<SpamapS> now to find out where the other 8.8s are spent. :-P
<niemeyer> jimbaker: You mean you don't find them expensive?
<jimbaker> niemeyer, i just don't think it's that bad if it's just 2.5M
<niemeyer> jimbaker:
<jimbaker> niemeyer, function call overhead in cpython is obviously quite expensive, for a variety of reasons
<niemeyer> def f(): pass
<niemeyer> start = time()
<niemeyer> for i in range(2500000): f()
<niemeyer> print time()-start
<niemeyer> jimbaker: This takes 1 second in my machine
<niemeyer> jimbaker: Doing absolutely nothing
<niemeyer> jimbaker: There's more than a function call there, but gives an idea
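niemeyer's snippet, completed so it actually runs (the original omits the time import and any wrapping):

```python
import time

def f():
    pass

def time_noop_calls(n=2500000):
    """Time n calls to an empty function, as in the snippet above.
    The loop itself adds overhead too, so this is an upper bound
    on pure call cost, not an exact measurement."""
    start = time.time()
    for _ in range(n):
        f()
    return time.time() - start
```

On the machine discussed above this came out around one second for 2.5M calls; absolute numbers will vary.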
<jimbaker> niemeyer, i'm sure that's the case :)
<niemeyer> jimbaker: That's what I meant by "you'd notice"
<jimbaker> it's just that 0.5s as the case w/ SpamapS's perf or 1s for your case is still a relatively small number compared to what i have seen in the past, that's all ;)
<jimbaker> but if run in a command, or in the direct path for rendering a page, sure, it's extremely noticeable
<SpamapS> That, and things like .append, .sort, etc, are the only function calls..
<SpamapS> this script lags the same algorithm in perl by about 15%
<SpamapS> I was wrong btw, it doesn't do 5 for every line
<SpamapS> just 1 len for every line
<SpamapS> but yeah, 500,000 is 0.504s
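The fix SpamapS describes amounts to hoisting the len() call out of the loop; a sketch of both shapes (function names invented):

```python
def sum_with_len(items, n):
    """Calls len() once per iteration -- the slow shape."""
    total = 0
    for _ in range(n):
        total += len(items)
    return total

def sum_with_cached_len(items, n):
    """Hoists len() out of the loop, tracking the size in a local.
    Same result, one call instead of n."""
    size = len(items)
    total = 0
    for _ in range(n):
        total += size
    return total
```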
<jimbaker> SpamapS, did you try it w/ pypy? that would be an interesting exercise
<SpamapS> jimbaker: I wanted to but didn't know how to even get pypy ;)
<jimbaker> function call overhead w/ pypy is extremely reduced
<niemeyer> Yeah, should kick ass
<SpamapS> lol.. lucid, karmic, and jaunty.. :-P
<jimbaker> http://pypy.org/download.html - just give it a try, i don't know where they are on latency
<niemeyer> Takes 16ms in Go..
<niemeyer> The 2.5
<jimbaker> niemeyer, that might be close to pypy for the empty case, factoring out startup. which may or may not be applicable, given the domain
<jimbaker> (i too should give it a try ;) )
<niemeyer> Maybe.. haven't tried that kind of thing there yet.. would be curious, though
<niemeyer> Given their latest work it must be pretty amazing
<jimbaker> bbl
<SpamapS> pretty hard to find pypy binaries.. :-P
<_mup_> Bug #838568 was filed: push-review must be tested again <Ensemble:In Progress by niemeyer> < https://launchpad.net/bugs/838568 >
<niemeyer> Alow!
<hazmat> jimbaker`, jinja2 is pretty nice re templating engines
<_mup_> ensemble/local-provider r353 committed by kapil.thangavelu@canonical.com
<_mup_> remove utility wait_for_node, just make the nesc. method public on zookeeperconnect, allow get open port to take a host option.
<jimbaker`> hazmat, jinja2 is fine. but let's go with mustache
<hazmat> jimbaker`, wasn't clear that mustache has any advantage over say.. string.Template
<hazmat> er.. actually string formatters
<hazmat> mustache looks fine though
<jimbaker`> hazmat, mustache does one very important thing, which is that it performs html encoding
<jimbaker`> when compared to string.Template. obviously the mustache {{}} itself is familiar in both templating systems
<niemeyer> jimbaker`: and loops, I guess
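The HTML-encoding point can be seen in a toy stdlib-only renderer; this is not the pystache API, just a sketch of the escaping rule mustache applies to `{{name}}` (with triple braces suppressing it):

```python
import html
import re

def render(template, context):
    """Toy illustration of mustache-style escaping: {{name}}
    substitutes with HTML escaping, {{{name}}} without.
    Not the real pystache API -- no sections/loops here."""
    def unescaped(m):
        return str(context.get(m.group(1), ""))
    def escaped(m):
        return html.escape(str(context.get(m.group(1), "")))
    # Triple braces must be handled before double braces.
    template = re.sub(r"\{\{\{(\w+)\}\}\}", unescaped, template)
    return re.sub(r"\{\{(\w+)\}\}", escaped, template)
```

string.Template, by contrast, would interpolate `<b>` verbatim, which is jimbaker's point about compliant output.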
<niemeyer> Lunch time.. biab!
<SpamapS> niemeyer: can we get push-review from a PPA or something?
<SpamapS> niemeyer: I *really* like the automation it promises. :)
<_mup_> Bug #839695 was filed: ensemble.formula.tests.test_bundle.BundleTest.test_executable_extraction fails on empty home dir <Ensemble:New> < https://launchpad.net/bugs/839695 >
<SpamapS> hazmat: re the bug linked above..
<SpamapS> hazmat: its checking for 493 as the stat.. but.. in my bzr co of ensemble, its 509 .. I believe this is because umask has changed on ubuntu..
<SpamapS> but I could be wrong
<SpamapS> indeed.. its just a matter of how you checked out the branch
<SpamapS> I think the appropriate check is that the stat *matches* the sample.. not that it is exactly 509.. basically we want to make sure exec was preserved
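A sketch of the check SpamapS proposes: compare execute bits between the sample and the extracted file instead of a hard-coded mode (493 is 0o755 and 509 is 0o775; they differ only in the group-write bit, which umask controls). Function name is invented, not the actual test code:

```python
import os
import stat

def exec_preserved(src_path, dst_path):
    """Return True if the execute bits on dst_path match those on
    src_path, ignoring read/write bits that umask may have altered."""
    exec_bits = stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH
    src_mode = os.stat(src_path).st_mode
    dst_mode = os.stat(dst_path).st_mode
    return (src_mode & exec_bits) == (dst_mode & exec_bits)
```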
<hazmat> SpamapS, sounds good
<SpamapS> hazmat: http://pastebin.com/DaaJ1Mp0
<SpamapS> that is currently building in my PPA.. if it passes all tests, I'll change debian/rules to fail the build on test fail and submit a FFe for revision 336 to upload to Ubuntu.
<SpamapS> (with the 3 patches I've submitted for review applied.. ;)
<niemeyer> SpamapS: Absolutely! Working on that already.. should be apt-gettable today
<jcastro> hey m_3 
<jcastro> http://blog.carbonfive.com/2011/09/01/deploying-node-js-on-amazon-ec2/
<_mup_> Bug #839794 was filed: Final test for the tool! <Ensemble:In Progress by niemeyer> < https://launchpad.net/bugs/839794 >
<jcastro> would now be a good time to ask how the node standalone formula is coming along?
<jimbaker`> jcastro, i like how that blog post begins: "After nearly a month of beating my head against the wall that is hosted node.js stacks -- with their fake beta invites and non-existent support..."
<m_3> jcastro: hey
<m_3> jcastro: yeah, the node formula was part of a talk I gave on IRC a few weeks ago
<m_3> jcastro: I'll brush it off, promote it to principia, and blog about it
<jcastro> m_3: do you have time to post a response to the guy? 
<jcastro> it reads to me like he needs ensemble hard core
<m_3> jcastro: yeah, I'll do it during a boring talk this afternoon
<jcastro> oh sorry, forgot you were travelling
<m_3> jcastro: conf wraps up today, so I'll be able to put time on it over the weekend
<negronjl> jcastro: ping
<jcastro> negronjl: pong
<negronjl> jcastro:  in a meeting right now.. I'll ping you later
 * jcastro nods
<_mup_> ensemble/local-unit-deploy r354 committed by kapil.thangavelu@canonical.com
<_mup_> factor out unit environment retrieval into separate method.
<_mup_> ensemble/local-unit-deploy r355 committed by kapil.thangavelu@canonical.com
<_mup_> user/environment qualified lxc container name, tweak upstart job
<_mup_> ensemble/local-unit-deploy r356 committed by kapil.thangavelu@canonical.com
<_mup_> additional test for qualified container name
<hazmat> SpamapS, does upstart do env expansion  on the exec stanza?
<SpamapS> hazmat: yes
<niemeyer> Okay.. lpad is out, lbox is out.. back to reviews and porting formulas 
<niemeyer> But I'll step out for now.. may do some extra time tomorrow for helping on the queue
 * niemeyer bbl
<_mup_> ensemble/local-unit-deploy r357 committed by kapil.thangavelu@canonical.com
<_mup_> have upstart respawn and run the container in the foreground, doc strings for all container deployment methods.
 * robbiew needs a drink
<hazmat> excellent idea
<SpamapS> hazmat: upstart for containers, but not for the agents?!
<hazmat> SpamapS, for the agent
<hazmat> unit agent only
<hazmat> inside a container
<hazmat> its not truly restartable.. probably shouldn't be respawned automatically
 * hazmat returns with a bloody mary
<hazmat> SpamapS, one nice fallout of the local dev stuff, it should be pretty easy to use this for arm servers, with an external zk address.
<SpamapS> tsk tsk.. we need our idempotency. :)
<SpamapS> :)
 * hazmat reruns tests to ascertain potency
<SpamapS> oh I thought you were saying that its not re-startable because of some problem with idempotency
<hazmat> SpamapS, the real issue is loss of events while the agent was down,  in practice its probably not an issue, except when it is
<SpamapS> err.. events?
<hazmat> no good reason not to go ahead with it i suppose
<SpamapS> I thought we just watched to see what the state was
<SpamapS> if we miss 4 changes, who cares, just make the new state true.
<hazmat> SpamapS, if the agent is down, watches won't be delivered
<hazmat> and extant watches potentially need to be re-established as new watches
<SpamapS> if we need that, we need a queue, not a data store
<SpamapS> But I see what you're saying
<hazmat> we're delivering a queue/event interface to formulas, but a disconnection means computing a delta between states on reconnect, to inform the formula of the events it missed before the watches were re-established
<SpamapS> what we need is to keep track of what state the agent thinks it is in, so when it comes back up it can do the appropriate transitions.
<SpamapS> Thats not a queue.. thats a watched data store.
<hazmat> yup
<SpamapS> Convenient, yes, but not a queue. :)
<hazmat> its really a sync between a remote and local data store
<hazmat> to compute the delta
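The remote/local sync hazmat describes amounts to diffing two snapshots into the events a watcher missed; a toy version over plain dicts (the real state lives in ZooKeeper, and these names are illustrative):

```python
def state_delta(local, remote):
    """Diff two {name: value} snapshots into the changes a
    disconnected watcher missed: nodes added, nodes removed,
    and values changed (old, new)."""
    added = {k: remote[k] for k in remote.keys() - local.keys()}
    removed = {k: local[k] for k in local.keys() - remote.keys()}
    changed = {k: (local[k], remote[k])
               for k in local.keys() & remote.keys()
               if local[k] != remote[k]}
    return added, removed, changed
```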
<SpamapS> I see the problem.. very interesting
<hazmat> in truth its probably a very rare issue in practice for a respawn
<hazmat> for a net split its more significant
<hazmat> or any long running outage
<SpamapS> yeah, so at the moment, ensemble is not partition tolerant. ;)
 * SpamapS disappears
<hazmat> SpamapS, have a good weekend
<jimbaker`> how do i push a branch that's a branch of lp:~ensemble/ensemble/ftests ? i tried this, but it's not right: bzr push --stacked-on lp:~ensemble/ensemble/ftests --remember lp:~jimbaker/ensemble/ftests/generate-html
<jimbaker`> niemeyer, ^^^ ?
<_mup_> ensemble/local-unit-deploy r358 committed by kapil.thangavelu@canonical.com
<_mup_> use ensemble home instead of units directory for all unit deployments
<hazmat> jimbaker`, i think you just push to ~jimbaker/ensemble/generate-html
<hazmat> its on the merge side you pick the series branch afaik
 * hazmat checks
<jimbaker`> hazmat, ok, i was wondering why it was wanting to use the wrong series
<hazmat> yup
<hazmat> jimbaker`, so on merge the target branch is ensemble/ftests
<hazmat> they're all branches of the same project in the user's project namespace
<_mup_> Bug #839969 was filed: Checkout Ensemble branch, run bash ftests, and generate summary in HTML <Ensemble:In Progress by jimbaker> < https://launchpad.net/bugs/839969 >
<hazmat> jimbaker`, is the ftest stuff using jenkins for test running?
<jimbaker`> hazmat, no
<hazmat> just wondering if we could submit tests for branches in dev
<hazmat> via an api
<hazmat> w/ the security integration tests get about 30% slower out of the box on average...
<jimbaker`> tests will get merged as normal i would think. at some point, we will have jenkins integration, it would be simple/trivial to do
<hazmat> jimbaker`, i guess i'm confused what's running the tests? you're not using jenkins or buildbot?
<jimbaker`> i'm using two new tools: butler.py, which i just wrote; and niemeyer's churn, which executes bash scripts
<jimbaker`> a butler used to manage the buttery, which might store the results of the churning...
<jimbaker`> and of course jenkins is presumably a butler. whatever ;)
<hazmat> not sure i understand why.. but okay
<jimbaker`> hazmat, at some point jenkins is presumed. twisted trial however is definitely not going to be used
<jimbaker`> the reason is that we need something that is focused on running ensemble commands
<hazmat> jimbaker`, i assume the test runner would be doing normal tests and ftests
<hazmat> is that not accurate?
<jimbaker`> that's not accurate
<jimbaker`> it's only running ftests, which are only to be written using bash
<jimbaker`> that's niemeyer's decision, and it makes sense
<hazmat> from a purpose of ease of test dev for users probably
<jimbaker`> such test scripts could potentially use resources that are written in python, eg to validate results or to wait for resources
<hazmat> still it seems like conflating issues: the ftest suite, the test suite, and the test bot
<jimbaker`> hazmat, perhaps, but it's hopefully what was requested by niemeyer. again i think it's pretty simple, and it will integrate well w/ jenkins eventually, just need to emit junitxml instead of the summary html file
<hazmat> jimbaker`, fair enough.. we need a test runner for the ftests that does reports
<jimbaker`> and that's the merge proposal :)
<hazmat> bcsaller, do you have a moment to catch up?
<bcsaller> hazmat: sure do
<hazmat> bcsaller, ~hazmat/ensemble/local-unit-deploy is the latest in the pipeline
<bcsaller> hazmat: thanks
<niemeyer> jimbaker`: lbox propose --for lp:ensemble/ftests
<niemeyer> Sorry, single dash
<niemeyer> jimbaker`: lbox propose -for lp:ensemble/ftests
#ubuntu-ensemble 2011-09-03
<niemeyer> jimbaker`: Run this within the branch directory and it'll do the whole magic
<marrusl> hey folks...  so if I set "default-instance-type: m1.large" ...  what should I do to change the default AMI to 64-bit?
<marrusl> should I dig up the ami number manually?  or can I set the architecture in environments.yaml?
<marrusl> I mean "dig up the ami # manually and set default-image-id?  or..."
<_mup_> ensemble/lib-zookeeper-base r316 committed by gustavo@niemeyer.net
<_mup_> Merge base for lib-zookeeper.
<hazmat> marrusl, it looks like you need to manually select the image for 64 bits
<marrusl> hazmat, ok.  in any case I did end up doing that.  no problems.  thanks for the confirmation.
