#ubuntu-ensemble 2011-06-13
 * al-maisan -> office
<kiranm> 'ensemble status' always shows a connection timeout for me. Pasted the output here: http://paste.ubuntu.com/625813/
<niemeyer> Good morning everyone
<kim0> o/
<al-maisan> Good morning niemeyer :)
<niemeyer> al-maisan, kim0: Hey there!
<_mup_> ensemble/trunk r252 committed by gustavo@niemeyer.net
<_mup_> Merge add-python-yaml branch by Clint [r=niemeyer] [trivial]
<_mup_> This adds the missing python-yaml package dependency.
<SpamapS> Hey guys. How would you all feel about moving everything except 'ensemble' out of /usr/bin ?
<SpamapS> Otherwise... W: ensemble: binary-without-manpage usr/bin/open-port
<niemeyer> SpamapS: Hmm
<niemeyer> SpamapS: I think we should provide these man pages eventually
<niemeyer> SpamapS: /usr/bin feels to me like the right place for these tools
<SpamapS> are any of those ever useful outside of hooks though?
<niemeyer> SpamapS: But then, I don't claim to understand the FS organization fantastically
<SpamapS> If no, then they're really more suitable in /usr/lib
<niemeyer> SpamapS: Today, they're not, but I hope all of these tools are runnable out-of-band in the future
<niemeyer> SpamapS: So we can do actions based on events not initiated by Ensemble itself
<SpamapS> Ah, that's enough to make me think they need man pages then.
<SpamapS> Though it might be better to have one command which tells the local agent to do things.
<niemeyer> SpamapS: That may be a good idea.. that said, I still like how e.g. "relation-get foo" is concise
<SpamapS> Yeah, my thinking is that we're just going to dump all over the global /usr/bin namespace.
<niemeyer> Ugh.. past lunch time.. biab
<SpamapS> hmm.. should -e / --environment be changed to a global option, instead of one for the commands?
<SpamapS> What's the exact policy on trivial changes to the trunk? The online help for remove-relation says that it adds a relation.
<niemeyer> SpamapS: If it's truly trivial and you're sure that's what should be in place, you can "cowboy" the change in
<niemeyer> SpamapS: Marking it as [trivial] in the commit line
<niemeyer> SpamapS: Many times these will also be preceded by an "on-the-fly" review request
<niemeyer> SpamapS: E.g. just post the diff somewhere and invite a review
<niemeyer> (here)
<niemeyer> SpamapS: Your example sounds good for a trivial, FWIW
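For illustration, a "cowboyed" fix like the remove-relation help-text one might land in trunk as follows (hypothetical commit message; the [trivial] marker is the convention niemeyer describes above):

```shell
# Hypothetical trivial commit against trunk, marked per the convention:
bzr commit -m "Fix remove-relation help text: it removes a relation, not adds one. [trivial]"
```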
<jimbaker> bcsaller, hazmat, niemeyer - standup?
<niemeyer> jimbaker: I'd rather see some _action_ happening!
<jimbaker> niemeyer, ok
<koolhead17> hi all
<niemeyer> koolhead17: Hey there
<niemeyer> koolhead17: Noticed you picked a formula to work on.. that's pretty cool, thanks!
<niemeyer> koolhead17: How're things going there so far?
<koolhead17> niemeyer: :)
<koolhead17> hoping to get it done by tonight!!
<koolhead17> kim0: ping
<_mup_> ensemble/set-transitions r239 committed by bcsaller@gmail.com
<_mup_> fixed issue with merge of watch establishment in unit agent
<SpamapS> koolhead17: You chose the phpmyadmin formula right?
<koolhead17> SpamapS: yes
<SpamapS> koolhead17: cool, it's going to need some help from the mysql formula, but I already started on it.
<koolhead17> SpamapS: yes. we need apache2/php5 as well :P
<SpamapS> koolhead17: no, that's all in your formula in the install hook
<SpamapS> koolhead17: what you need is an admin user that can read/write to/from all dbs.
<koolhead17> SpamapS: hmm, I'm installing phpmyadmin along with its dependencies to come up with a list of dependent pkgs, which will go into the hook
<koolhead17> on a natty vm running on virtualbox
<SpamapS> koolhead17: right, but you want to administer a mysql database for some other app, right? like, you want to admin the database for wordpress.
<SpamapS> koolhead17: otherwise its just phpmyadmin with its own database
<koolhead17> SpamapS: i thought this was a formula for a standalone phpmyadmin configuration. please correct me if I'm wrong
<SpamapS> koolhead17: It's a good start. However, people can do that without ensemble.
<koolhead17> now to create DB for a cms say drupal/joomla do we need a web-interface?
<SpamapS> koolhead17: ensemble is interesting because it can relate these services together. :)
<koolhead17> :D
<koolhead17> we can do that via appropriate hooks while writing formulas for joomla/drupal/wordpress
<koolhead17> isn't it?
<koolhead17> :P
<koolhead17> there we can have mysql formula as depandency
<SpamapS> koolhead17: so you'll be able to say 'ensemble add-relation phpmyadmin:db wiki-db:db-admin' .. and that will give you access to all the databases on wiki-db.
<koolhead17> SpamapS: but i thought phpmyadmin was more about a web interface for inspecting a mysql db :)
<koolhead17> would we really need a formula like that as you suggested!! :D
<SpamapS> koolhead17: usually phpmyadmin is used to do queries on a database from some other application.
<SpamapS> koolhead17: I don't see it being all that useful with just an empty database.
<koolhead17> SpamapS: hmm!! you're right
<koolhead17> SpamapS: let me come up with the basic formula with hooks to install phpmyadmin, and then in a second round i can work on this
<SpamapS> koolhead17: that's a great way to do it.
<koolhead17> it will be easier for me to pick up ensemble in that case :P
<SpamapS> koolhead17: install hook is usually first, then you add the relations. :)
<koolhead17> SpamapS: yes. writing down all the pkgs separately while testing an install in virtualbox :)
<SpamapS> koolhead17: you shouldn't need to do much more than 'apt-get install phpmyadmin'
<SpamapS> koolhead17: you will probably want to say '--no-install-recommends'
<koolhead17> SpamapS: hmm 
<SpamapS> actually no
<SpamapS> it only suggests mysql-server
 * koolhead17 is running the command
<SpamapS> so you can just do 'apt-get -y install phpmyadmin' .. it will automatically pull in php, apache, mysql client, etc. etc.
<koolhead17> hmm
<koolhead17> including apache2 and all
<koolhead17> SpamapS: it also asks two questions during install:
<koolhead17>  Web server to reconfigure automatically: <-- apache2
<koolhead17>  Configure database for phpmyadmin with dbconfig-common?
<koolhead17> also we need to add "Include /etc/phpmyadmin/apache.conf" in /etc/apache2/apache2.conf to get it to work :D
<SpamapS> koolhead17: during an install hook, it will skip those
<SpamapS> koolhead17: that's what you do during the install hook.
<koolhead17> SpamapS: hmm
<SpamapS> koolhead17: read the wordpress and mysql examples, they handle it
<koolhead17> SpamapS: http://fewbar.com/2011/06/so-what-is-ensemble-anyway/
<koolhead17> this one
<SpamapS> koolhead17: that's my post, not an example. ;) I mean in /usr/share/doc/ensemble/examples
<koolhead17> SpamapS: :P
<koolhead17> k
<koolhead17> SpamapS: phpmyadmin is not dependent on mysql-server
<SpamapS> koolhead17: right, it only 'suggests' mysql-server
<koolhead17> we need to install mysql-server too for a new installation in order to  get phpmyadmin working :D
<SpamapS> No!!
<SpamapS> The mysql server can live on another machine.
<koolhead17> SpamapS: hmm.
<koolhead17> also, is there an option in apt-get which skips the question part, like the username/passwd prompts
<koolhead17> and all
<SpamapS> -q
<SpamapS> -yq actually
<koolhead17> SpamapS: thanks. 
<SpamapS> koolhead17: if you read that post you linked to, it deploys mysql as its own host.. it then relates mediawiki to it. Your phpmyadmin formula must relate to the database server using hooks.
<koolhead17> hmm
<SpamapS> koolhead17: have you looked at the example mysql and wordpress formulas yet?
<koolhead17> SpamapS: yes
<koolhead17> i also have drupal formula here
<koolhead17> it will be similar
<SpamapS> koolhead17: there are already 2 other drupal formulas in progress...
<koolhead17> okey
<SpamapS> koolhead17: ok, I'm not sure you understand the examples if you're confused by my statements. Only one formula ever needs to install mysql-server.
<SpamapS> DOH.. I just now noticed my last 'ensemble shutdown' a week ago didn't work right and left 11 m1.smalls running for a week
<SpamapS> DOH DOH DOH
<robbiew> lol
 * SpamapS decides tonight he will hack out an LXC provider.. 
 * robbiew makes sure to review the next expense report from SpamapS
<SpamapS> robbiew: its 1 *meeeeelion* dollars
<SpamapS> ok it was actually 7 m1.smalls and 4 t1.micros
<SpamapS> that should only be like, $75
<koolhead17> hello robbiew
<koolhead17> SpamapS: what is the db-relation-changed hook for
<robbiew> hey koolhead17
<koolhead17> in the wordpress example? assigning DB?
<SpamapS> koolhead17: When you relate the wordpress service to the mysql service, that is the hook that gets run
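To make that flow concrete, a db-relation-changed hook on the consuming side typically reads the settings the remote unit published and writes them into the app's configuration. A minimal sketch (the setting names host/user/password/database are illustrative assumptions modeled on the examples, not copied from them):

```shell
#!/bin/sh
# Sketch of a db-relation-changed hook on a consuming service.
# The setting names below are illustrative assumptions.
set -e
host=$(relation-get host)
user=$(relation-get user)
password=$(relation-get password)
database=$(relation-get database)
# The hook can fire before the remote side has published its settings;
# exit quietly and wait for the next -changed event in that case.
[ -z "$host" ] && exit 0
# ... write the application's database configuration from these values ...
```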
<koolhead17> SpamapS: cool
<SpamapS> koolhead17: if you haven't read this yet, you really should: https://ensemble.ubuntu.com/docs/write-formula.html
<koolhead17> SpamapS: i have that page open, drupal example one. i think kim0 wrote it :)
<m_3> SpamapS: you totally just gave me one of those 'oh crap' moments... but everything's safe and shutdown
<SpamapS> m_3: I am 99% sure I just typed it fast before slamming the laptop shut.. it probably never completed
<niemeyer> m_3!
<niemeyer> m_3: Welcome!
<m_3> Hi Ensemble team
<hazmat> SpamapS, ouch!
<m_3> nice to meet everyone
<m_3> niemeyer: thanks!
 * koolhead17 is scared, the cats are making scary noise outside
<SpamapS> hazmat: Hey, I've had debug-log disconnect on me twice now
<hazmat> SpamapS, disconnect? what's it say on the terminal when that happens?
<SpamapS> http://paste.ubuntu.com/626112/
<SpamapS> hazmat: ^^
<SpamapS> sweet..
 * SpamapS just added a db-admin interface to mysql so services can be related with 'all privileges' ..
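In formula metadata terms, such an interface would be declared roughly like this (hypothetical fragment; the interface name mysql-root is an assumption, and the exact schema is the one described in the write-formula docs):

```yaml
# Hypothetical metadata.yaml fragment for the mysql formula:
provides:
  db:
    interface: mysql
  db-admin:
    interface: mysql-root   # assumed name for the all-privileges interface
```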
<hazmat> SpamapS, ah.. yeah.. this is the session event stuff, i've got a branch of txzookeeper which will resolve it (https://code.launchpad.net/~hazmat/txzookeeper/session-event-handling), but it's pending some additional testing work for setting up zk test clusters
<SpamapS> ahh ok
<SpamapS> same thing that killed my munin nodes?
<hazmat> SpamapS, yup, once that's merged that should resolve this class of issues
<hazmat> it's basically the zk client letting the app know about transient network connects/disconnects
<koolhead17> SpamapS: the -yq option doesn't seem to work :(
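koolhead17's result is expected: -y answers apt's own prompts and -q only reduces output, while the debconf questions (web server choice, dbconfig-common) are suppressed by the noninteractive debconf frontend. A minimal install-hook sketch along those lines (hypothetical hook, not taken from the examples):

```shell
#!/bin/sh
# Sketch of a phpmyadmin install hook.
# -y answers apt's own prompts; DEBIAN_FRONTEND=noninteractive is what
# actually suppresses the debconf questions mentioned earlier.
set -e
export DEBIAN_FRONTEND=noninteractive
apt-get -yq install phpmyadmin
# Wire up the packaged apache config, as discussed above:
grep -q 'phpmyadmin/apache.conf' /etc/apache2/apache2.conf || \
    echo 'Include /etc/phpmyadmin/apache.conf' >> /etc/apache2/apache2.conf
```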
<hazmat> bcsaller, jimbaker if one of you has time, could you please check out niemeyer's debug-hooks-fixes? it's pretty critical to having that feature work well, and needs an additional review
<bcsaller> sure
<niemeyer> Danke!
<_mup_> ensemble/set-transitions r240 committed by bcsaller@gmail.com
<_mup_> better error message when passed non-key/value pair on cli
<pindonga> hi everyone! I have been playing around a little with ensemble 
<pindonga> I'm trying to create a provider for lxc :)
<pindonga> I got as far as breaking everything, and slowly starting to put the pieces back in place :p
<hazmat> pindonga, cool :-)
<pindonga> so I can now create a lxc container, start it, and am stuck when ensemble tries to ssh into it
<pindonga> because it didn't yet install the zookeeper stuff into the container
<pindonga> can you guys point me to the code that installs the necessary packages into the bootstrap instance?
<hazmat> pindonga, ah.. you mean the ssh succeeds but the client can't connect via the tunnel because zk isn't running
<pindonga> hazmat, ensemble tries to connect via ssh to port 281xx 
<pindonga> but it fails
<hazmat> pindonga, its using cloud-init.. see ensemble/providers/ec2/launch.py for the existing stuff
<hazmat> pindonga, you can seed cloud-init by dropping a file on disk as well, instead of using the ec2 metadata service
<pindonga> I believe ensemble should do something like apt-get or dpkg to install zookeeper into the bootstrap instance, right?
<niemeyer> Coffee!
<hazmat> pindonga, it should, but at the moment it's relying on the image to provide it.. installing it at runtime was overly time-consuming (i.e. java stack from scratch).. so it got moved from the cloud-init install instructions into image building.. we're planning on reverting that though and going back to cloud-init based installation of zk
<hazmat> pindonga, it's a pretty small change to re-enable it.. effectively just adding an apt-get install zookeeper to the cloud-init commands list
<pindonga> hazmat, k, will look for it
<pindonga> so right now ensemble cannot bootstrap from a base install, as I understand?
<hazmat> pindonga, it can, its just not configured that way atm.. actually let me double check something
 * hazmat waits for ec2 bootup
<hazmat> pindonga, so it can bootstrap a regular image.. we just pre-install for speed
<hazmat> pindonga, so ensemble/providers/common.py has the standard packages and repos we use as constants, ensemble/providers/ec2/utils.py has format_cloud_ini (which should get refactored into the previous common.py since it's now generic).. and then ensemble/providers/ec2/launch.py will set up additional things for installation based on bootstrap node vs. machine node.
<hazmat> in terms of getting cloud-init set up with an lxc-provider, the cloud-init data needs to be preseeded on disk before starting up the instance; as an example (assuming natty) see http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/doc/examples/seed/README
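As a concrete sketch of the on-disk seeding hazmat mentions, a NoCloud-style seed could be laid down like this (paths and package list are illustrative; on a real container the seed directory would live under /var/lib/cloud/seed/nocloud in the rootfs):

```shell
# Illustrative cloud-init NoCloud seed; SEED_DIR stands in for the
# container rootfs path (normally /var/lib/cloud/seed/nocloud).
seed="${SEED_DIR:-/tmp/nocloud-demo}"
mkdir -p "$seed"
cat > "$seed/user-data" <<'EOF'
#cloud-config
packages:
 - zookeeper
EOF
printf 'instance-id: ensemble-bootstrap-0\n' > "$seed/meta-data"
```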
<pindonga> thanks, will look into it
<hazmat> pindonga, cool, you might want to talk to SpamapS if you want to collab as he mentioned some interest in working on an lxc-provider as well
<hazmat> and keep the questions coming ;-)
<pindonga> hazmat, right now I'm just doing the minimum to get it up and running 
<pindonga> no intention to get production code until I understand how this works :)
<hazmat> bcsaller, re the config-set branch.. was there a merge conflict with this branch? i'm seeing some oddities in a diff against trunk
<bcsaller> hazmat: the config-set that was merged? or the later branch I emailed you about?
<hazmat> bcsaller, the emailed branch
<bcsaller> hazmat: I had a divergent branch issue between this and my other laptop from when this one was being repaired. That said, it's a few revisions back and I thought it was dealt with
<hazmat> bcsaller, as an example there's a test_hook_knows_service that looks like it got yanked in the branch.. some of the other oddities are just relocated code chunks
<bcsaller> hmm
<bcsaller> hazmat: ahh, I think I know what happened there. That was code that was deprecated after a review, but by the time it was ready for merge jim's branch was already depending on some of it
<bcsaller> so I added those parts back in (though my branch already contained the diff to remove them from the previous review)
<hazmat> bcsaller, ah.. ic.. okay.. i'm having a look at it now
<bcsaller> appreciate it 
<hazmat> bcsaller, could you pastebin errors from a test run?
<bcsaller> hazmat: sure, are you not getting errors?
<hazmat> i get a pretty constant set of 4 errors
<hazmat> over 3 full test runs
<bcsaller> and if you run those alone do you still see them?
<hazmat> bcsaller, haven't tried yet.. i just want to make sure we have the same error set to verify them and ensure we have sane local setup deltas
<hazmat> bcsaller, is that number in line with what you're seeing?
<bcsaller> that time I only got ensemble.agents.tests.test_unit.UnitAgentResolvedTest.test_hook_error_on_resolved_retry_remains_in_error_state
<bcsaller> but it might change if I run it again (which I am)
<hazmat> bcsaller, yeah.. one of these tests is definitely bleeding into the others
<bcsaller> hazmat: yeah, that time it was a different one
<hazmat> bcsaller, one review comment on the change.. it would be better to just reconstruct the service state as needed rather than passing it as a new constructor arg to scheduler, relhookcontext, etc
<hazmat> ie. the rel hook context has a client and can reconstruct the state on demand
<bcsaller> hazmat: doesn't the service name then need to be parsed from the unit_name
<hazmat> bcsaller, it does, but that's pretty trivial
<bcsaller> I can make those changes 
<hazmat> i think the service-state-manager could grow a utility method for it as well
<hazmat> return the service state given a unit name
<bcsaller> makes sense,  yes
<daker> yo
<niemeyer> daker: Yo
<niemeyer> daker: So,
<niemeyer> daker: "--password=" < that's not going to work if you split the arguments 
<niemeyer> daker: You can drop the "=" sign there
<niemeyer> daker: Then they can be in separate arguments
<niemeyer> daker: This won't do what you want as well:
<niemeyer> "< %(ensemble_sql)s" % config)
<daker> should it be split?
<niemeyer> daker: It's a bit trickier in this case
<niemeyer> daker: When you provide something like that to the shell, what's happening is that the file descriptor for stdin is being replaced to that of the open file with the given name.
<niemeyer> daker: Have you considered using bash for your script?
<niemeyer> daker: In case that's what you're most comfortable with?
<daker> i am comfortable with python, bash was suggested by kim0
<niemeyer> daker: Aha, ok
<daker> niemeyer, i have an idea
<niemeyer> daker: You may want to use -e
<niemeyer> daker: Rather than stdin
<daker> i'll just use MySQLdb
<niemeyer> daker: Otherwise you'll have to open a file and extend your do() function to replace stdin
<niemeyer> daker: Ah, that works too! :)
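niemeyer's point about "< %(ensemble_sql)s" generalizes: redirection is performed by the shell before the program runs, so if the command line is instead split into exec() arguments, the '<' arrives as a literal argument. A small demonstration with cat (file name is arbitrary):

```shell
# Redirection belongs to the shell, not the program being run.
printf 'hello\n' > /tmp/redir-demo.txt
# The shell opens the file and wires it to cat's stdin:
cat < /tmp/redir-demo.txt
# Passed as a literal argument, '<' is just a (nonexistent) filename:
cat '<' /tmp/redir-demo.txt 2>/dev/null || echo "cat saw '<' as a filename"
```

This is why mysql's -e flag (or its 'source' command) is the easier route when building argument lists programmatically.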
<SpamapS> niemeyer: so the idea of using relation id is actually good
<SpamapS> niemeyer: thinking through it, using service name for database names is actually VERY limiting.
<niemeyer> SpamapS: I'm not entirely sure yet, to be honest
<SpamapS> niemeyer: otherwise we can't have two services use the same database.
<niemeyer> SpamapS: Agreed, for that kind of use case a relation id would be great
<niemeyer> SpamapS: The question is more along the lines of reestablishment
<SpamapS> niemeyer: so I think we need to have the consuming services provide us with a clear hint. It may usually be service name.
<niemeyer> SpamapS: It isn't clear what we want to happen when one removes and re-adds a relation
<SpamapS> niemeyer: IMO, that's up to the formula to document.
<niemeyer> SpamapS: I think we need to guide people here
<niemeyer> SpamapS: So that formulas implement it somewhat consistently, and follow a principle of least surprise
<SpamapS> niemeyer: I was thinking of mapping relation-id to the requested context like "demo-wiki" .. then if something else asks for demo-wiki, give them access.
<niemeyer> SpamapS: I don't get that last idea
<SpamapS> niemeyer: we can add an optional setting of 'maxrelations' for each database context too.. so you can say 'maxrelations=1' if you want to have exclusive databases.
<SpamapS> niemeyer: the relation-id is just a uuid, so there'd be a table.. relation-id: 11004-33-55-5-666, data-name: 'wiki'
<SpamapS> niemeyer: when we break the relation, we remove that row. When somebody asks for 'data-name=wiki' we give them access to the 'wiki' database.
<SpamapS> err
<SpamapS> meh I'm all out of order.
<SpamapS> niemeyer: basically use the relation-id to map to the data we care about
<niemeyer> SpamapS: I don't understand how that's better than what we have now?
<SpamapS> niemeyer: the only change is that the consuming formulas specify the database name, rather than the consuming service's name..
<SpamapS> niemeyer: so that you can say, have a backend hadoop job querying the same database as the frontend webservers.
<SpamapS> right now you can't do that
<niemeyer> SpamapS: We already have that.. that's essentially what relations do
<niemeyer> SpamapS: They have a name, for precisely that reason
<niemeyer> SpamapS: and that's already possible today (sharing the db between multiple services)
<SpamapS> right now, if I spawn two mediawiki services.. and relate them both to a mysql service, they will get independent databases...
<SpamapS> I may want them to share a db
<SpamapS> yeah, that's a formula problem. :)
<SpamapS> niemeyer: yes I guess I'm saying we should do that sooner rather than later... the relation-id is a separate issue.
<niemeyer> SpamapS: Do what sooner?
<SpamapS> niemeyer: change the mysql formula to accept the database name rather than assume it.
<SpamapS> niemeyer: and as far as the relation-id, we can use that to know which relation maps to which database name.
<niemeyer> SpamapS: I'd like to see a use case we can work on, first
<SpamapS> as long as its available in all hooks
<SpamapS> niemeyer: the mysql use case isn't good enough?
<SpamapS> during broken, I have no idea what resources to clean up
<niemeyer> SpamapS: Sorry, I may have missed it.. what's the use case again?
<SpamapS> it was in the bug you duped IIRC
<niemeyer> SpamapS: Ah, ok.. that's about being able to identify relation ids, so that re-adding a relation to the same set of formulas can reuse resources.
<niemeyer> SpamapS: That's not the same as manually specifying database names
<SpamapS> It's actually more than that. During broken, right now, there is no way to tell which relation was broken other than a hack where you run 'relation-list'
<niemeyer> SpamapS: Why do you want to tell which relation has been broken?
<niemeyer> SpamapS: I mean.. why do you want an identifier in that location?
<SpamapS> Because I want to revoke the user privileges, delete temp tables, etc. etc. The documented reasons for the broken relation are to clean up.
<niemeyer> SpamapS: Hmmm..
<niemeyer> SpamapS: Removing tables is not a good idea in general, but I see your point otherwise
<SpamapS> notice I said *temp*. :)
<niemeyer> Cool, ok :-)
<SpamapS> I happen to agree that purging should be primarily manual. :)
<niemeyer> SpamapS: Yeah, I think for _that_ the relation id in that ticket should handle it
<SpamapS> Or at least married with a backup service.
<niemeyer> SpamapS: The issue is what do you expect when you re-add a relation to the same couple of services?
<SpamapS> Right, for that, we definitely need to set an example.
<SpamapS> I like the way the mysql formula handles that. If somebody requests an existing database, go ahead and give them access.
<niemeyer> SpamapS: I'm starting to think the mysql formula should create a database named after the remote service, as it is doing right now
<niemeyer> SpamapS: and we need a relation id
<SpamapS> and never allow shared access to a single db?
<niemeyer> SpamapS: That's a different use case.. let's handle it separately please
<niemeyer> SpamapS: We already handle shared access
<niemeyer> SpamapS: All the service units from the same service will access the same database
<SpamapS> But I want two services to be able to share data through mysql.
#ubuntu-ensemble 2011-06-14
<niemeyer> SpamapS: Can we handle that separately?
<niemeyer> SpamapS: That's a different use case from the problem we were talking above
<SpamapS> Sure, but it's going to come up with any complex apps.
 * SpamapS sticks it in the "forget this today, remember tomorrow" bin
<niemeyer> SpamapS: Hopefully we'll have _many_ problems to solve! ;-)
<SpamapS> I'm ok with adding a 'shared-db' relation on the mysql formula when the time comes.
<SpamapS> interface: mysql-shared
<niemeyer> SpamapS: It's just that it's not productive to try to solve orthogonal problems at the same time
<SpamapS> I wasn't aware that we were trying to solve another problem. :)
<niemeyer> SpamapS: You have mentioned two things you'd like to do:
<niemeyer> SpamapS: 1) I can't tell what is the right database to clean up on relation-broken
<niemeyer> SpamapS: 2) I want to share a database between multiple services
<niemeyer> SpamapS: These are orthogonal problems, with potentially independent solutions
<niemeyer> SpamapS: If you want to switch over to (2) and leave (1) unsolved for the moment, I don't think a shared-db relation is the right thing to do.
<niemeyer> SpamapS: This doesn't scale.. now I have two pairs of services that should use the same database
<niemeyer> SpamapS: But each pair should use a different one
<niemeyer> SpamapS: They can't all be put in the same shared-db relation
<SpamapS> niemeyer: I see; the 2nd I'm not all that interested in solving today, no. the 1st, I think I agree 100% that a relation-id, exposed to all hooks, solves the issue.
<niemeyer> SpamapS: Cool, that's awesome
<niemeyer> SpamapS: It's already scheduled for this milestone, so let's push it forward!
<_mup_> ensemble/expose-provision-service-hierarchy r282 committed by jim.baker@canonical.com
<_mup_> Spike on removing the observation around watches being stopped
<niemeyer> Alright, and that's dinner time ticking here
<niemeyer> I'll step out.. have a good time all!
<jimbaker> niemeyer, take care
<_mup_> ensemble/expose-provision-service-hierarchy r283 committed by jim.baker@canonical.com
<_mup_> Fixed remaining non corner case tests
<_mup_> ensemble/expose-provision-service-hierarchy r284 committed by jim.baker@canonical.com
<_mup_> Removed unnecessary splitting of callbacks from their watch functions
<_mup_> ensemble/expose-provision-service-hierarchy r285 committed by jim.baker@canonical.com
<_mup_> Cleaned up watch_exposed_flag callback
<_mup_> ensemble/expose-provision-service-hierarchy r286 committed by jim.baker@canonical.com
<_mup_> Naming cleanup and other style changes
<_mup_> ensemble/expose-provision-service-hierarchy r287 committed by jim.baker@canonical.com
<_mup_> Ensure testing does not get confused with background activity around watch removal
<_mup_> ensemble/expose-provision-service-hierarchy r288 committed by jim.baker@canonical.com
<_mup_> Removed unnecessary guard from watch_exposed_flag
<kim0> morning everyone
<daker> anyone knows how to execute a sql file with MySQLdb in python ?
<kim0> daker: why not execute mysql < file.sql
<daker> woo my formula is working now :D
<niemeyer> Greetings
<m_3> morning
<kim0> morning
<kim0> niemeyer: Is the ppa always going to be a daily build from trunk?
<niemeyer> kim0: It should
<niemeyer> kim0: You mean it's not working, or are you wondering about the future?
<kim0> future :)
<kim0> niemeyer: and universe should be an older and better-tested version?
<kim0> for 11.10 at least
<kim0> I'm doing the zero to ensemble screencast .. and was wondering how to frame it
<niemeyer> kim0: Yeah, in the future we'll have more stable point-in-time releases for sure
<niemeyer> kim0: We're just moving fast for now, and deploying important features very often
<kim0> ok
<niemeyer> kim0: So it makes more sense to have these well tested and let people play with them than to freeze
<daker> anyone want to test my formula ?
<kim0> daker: is it working well for ya :)
<daker> yep
<kim0> woohoo
 * kim0 hugs daker 
<kim0> I'll definitely play with it tonight
<kim0> daker: I'm sure we're interested in your overall experience developing it
<kim0> let us know any thoughts you have
<niemeyer> daker: Yeah, where is it?
<daker> one sec
<daker> niemeyer, bzr branch lp:~daker/+junk/joomla-formula
<niemeyer> daker: Cool
<daker> niemeyer, are you going to test it ?
<niemeyer> daker: Yeah, checking it out already
<daker> to access the admin just go to: public_dns/administrator
<daker> login: admin | pass: ensemble
<niemeyer> daker: Cool
<daker> it works ?
<daker> niemeyer, ^
<niemeyer> daker: Sorry, multitasking, but still firing it up
<daker> ok
<niemeyer> daker: Works!
<niemeyer> Sweet
<daker> whooo :D
<niemeyer> daker: We need a way to communicate values such as username/pass to the admin
<niemeyer> daker: For the time being, it may be worth putting these instructions in the description
<daker> yes
<hazmat> niemeyer, alternatively those could be default values for settings, although we would need a corresponding admin cli way of inspecting those
<niemeyer> hazmat: Oh, that sounds much better
<niemeyer> hazmat: We certainly need a way of inspecting the configuration either way
<niemeyer> hazmat: We can have a default type of value which is automatically generated if unset
<niemeyer> This would avoid everyone having the same password by default
<hazmat> niemeyer, that sounds nice as well
<_mup_> Bug #797241 was filed: Ensemble cli subcommand for inspecting config-settings <Ensemble:New> < https://launchpad.net/bugs/797241 >
<_mup_> Bug #797243 was filed: A service config field schema that allows for a  random default value <Ensemble:New> < https://launchpad.net/bugs/797243 >
<niemeyer> hazmat: Thanks for filing these
<hazmat> niemeyer, np. 
<hazmat> niemeyer, out of curiosity have you had any stability issues on natty?
<niemeyer> hazmat: Yes.. I have had to kill compiz every few days
<hazmat> niemeyer, thanks, good to know for comparison.. my uptime is averaging about 6-8hrs; i even reverted to classic, but still get random freezes.. just ordered a new laptop, hopefully it will help
<niemeyer> hazmat: Holy crap
<niemeyer> hazmat: 6-8h is on the low side indeed
<hazmat> niemeyer, yeah.. its quite disruptive, it was better during the beta cycle
<niemeyer> hazmat: What kind of issues do you notice?
<hazmat> niemeyer, mostly hard X locks, which aren't recoverable. attaching/detaching external monitors also seems to trigger an unrecoverable state; the old virtual terminal unity --replace trick works sometimes.. but not always.. typically a slew of session errors, some referencing dbus, some referencing device errors.
<niemeyer> hazmat: Hmmm, ok
<_mup_> ensemble/standardize-log-testing r256 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<niemeyer> hazmat: Sounds related to what I see
<niemeyer> jimbaker: Morning
<jimbaker> niemeyer, hi
<niemeyer> hazmat: Generally CTRL-ALT-F1 + compiz kill + unity restart works
<niemeyer> Not nice, though
<hazmat> niemeyer, yeah.. it's better than killing the xsession.. i'm curious if there's an equivalent to that for classic mode
<niemeyer> hazmat: This is just restarting the window manager, I think
<niemeyer> hazmat: I recall doing that for metacity long ago
<niemeyer> While working on that debug-hook stuff, I'm quite proud of this system we came up with
<m_3> niemeyer: I've got lockup problems with natty/compiz/unity too... half-minute lockups every time I do something compiz-fancy like alt-tab or switch viewports
<niemeyer> hazmat: jimbaker: Do you want to have a quick look at these doc changes before I commit as part of the debug-hooks branch: http://paste.ubuntu.com/626637/
 * hazmat checks it out
<jimbaker> niemeyer, sure
<niemeyer> Ben correctly requested the changes during review
<SpamapS> daker: reading your joomla formula now. :)
<_mup_> Bug #797263 was filed: ~/.ensemble permissions <Ensemble:New> < https://launchpad.net/bugs/797263 >
<jimbaker> line 47: "At this point," would be better; line 59: maybe better word choice "same way performed";  line 102, "work" instead of "worked", but maybe better word choice there too
<jimbaker> niemeyer, looks good with these changes
<niemeyer> jimbaker: Super, thanks
<hazmat> niemeyer, looks good. there is one bug people will hit following that documentation though, namely that the relation api isn't available yet outside of relation hooks; the relation identifiers are the first step to resolving it. it might be nice to reference the bug as a caveat.. it's pretty easy to spot and remove after the bug is fixed.
<niemeyer> jimbaker: Just the "worked" I'm going to keep as it is.. it's describing the process of transition and choice we made at a particular point, rather than a statement of how it works _today_
<niemeyer> hazmat: Sounds good.. do you have a link to the particular bug you'd like to see there?
<jimbaker> niemeyer, ok, i guess the context is clear enough that this is the case
<hazmat> niemeyer, bug 767195
<_mup_> Bug #767195: Ensemble should have hook cli apis to enumerate and interact with all the relations of a unit. <Ensemble:New> < https://launchpad.net/bugs/767195 >
<niemeyer> hazmat: That doesn't seem related
<hazmat> niemeyer, yeah.. it's addressing the same issue from a different angle.. which is relation api usage in non-relation hooks
<m_3> Have a what-if scenario I'd like some advice on...
<m_3> let's say a mysql formula exposes a config parameter for something like query_cache_size
<m_3> (in config.yaml as I understand)
<m_3> when another formula adds this as a relation
<m_3> and needs to change that config parameter
<niemeyer> hazmat: I see what you mean, but this isn't the actual problem
<m_3> can it do something equivalent to the 'ensemble set' command
<niemeyer> hazmat: Fix Releasing this bug doesn't necessarily enable people to use relation-get there
<m_3> i.e., are there ways to parameterize a relation?
<niemeyer> hazmat: Well, enumerate _and_ interact.. sorry
<niemeyer> hazmat: My bad
<niemeyer> We should probably split that off, though.. there are really two different problems
<niemeyer> m_3: No, it'd probably be better to follow up a bit on the documentation to get a better understanding of how things fit together
<m_3> ok, gotcha
<niemeyer> m_3: The goal of configuration is "human oriented" settings
<m_3> niemeyer: ah... ok
<niemeyer> m_3: That includes "ensemble set", config-get, etc
<_mup_> ensemble/standardize-log-testing r257 committed by jim.baker@canonical.com
<_mup_> Remove unnecessary new import
<niemeyer> m_3: Inter-unit/service communication is done via relations
<niemeyer> As an aside, it's really good that the config system we're putting in place is read-only from the formula perspective
<niemeyer> I've seen quite a few people saying "Hey, I want to write to the configuration to communicate between formulas"
<niemeyer> The fact it's read-only guides people to do the right thing naturally
<m_3> niemeyer: so relation-specific configuration has to be either in the relation hooks themselves or passed in from the config of the dependent formula?
<_mup_> ensemble/standardize-log-testing r258 committed by jim.baker@canonical.com
<_mup_> Remove similar unnecessary import
<niemeyer> m_3: relation-specific configuration is always performed by the formulas in the formula hooks
<m_3> niemeyer: ok, thanks
<niemeyer> m_3: np
<niemeyer> hazmat: Is this what you meant: http://paste.ubuntu.com/626659/ ?
<niemeyer> hazmat: /Limitations
<hazmat> niemeyer, sounds good... although relation-broken should still work afaicr
<hazmat> its a relation hook
<niemeyer> hazmat: It's not a running relation hook
<niemeyer> hazmat: It's a relation hook, though
<hazmat> the open bug on relation-broken is really about adding the relation name as an additional environment variable to the relation hook.
<hazmat> or the id
<hazmat> hmm.. actually it was getting the related service name into the environment
<niemeyer> hazmat: relation-broken has the same issue as the other hooks.. there's no specific unit on the other side to talk to
<hazmat> niemeyer, sure.. but in terms of implementation ... relation-list, relation-get, relation-set should still work in relation-broken
<hazmat> ie. its unrelated to the bug referenced
<niemeyer> hazmat: Hmm,  I see
<kim0> zero to ensemble screencast - http://www.youtube.com/watch?v=qxMhKbDSbOw
<hazmat> kim0, awesome! we should embed onto the wiki frontpage
<niemeyer> hazmat: replaced broken with upgrade there
<hazmat> niemeyer, cool, +1
<_mup_> ensemble/debug-hook-fixes r248 committed by gustavo@niemeyer.net
<_mup_> Fixed documentation to refer to tmux, as requested by Ben. [r=hazmat,jimbaker]
<niemeyer> kim0: Woah
<kim0> \o/
<kim0> pushing on planet-ubuntu 
<niemeyer> kirkland: I suppose you're not that guy: http://twitter.com/#!/bigtexan13
<kirkland> niemeyer: sweet!
<kirkland> niemeyer: you've found my alter ego
<niemeyer> kirkland: A bit unfortunate, to say the least
<niemeyer> kirkland: What's your actual twitter account?  I'd like to retweet your formulas post with actual credits
<kirkland> niemeyer: dustinkirkland
<niemeyer> kirkland: Hah!
<niemeyer> kirkland: Thanks :)
<kirkland> niemeyer: highly unoriginal
<kirkland> niemeyer: i'm working on two more, for AjaxTerm, and screenbin
<niemeyer> kirkland: Sweeeet
<kim0> kirkland: oh that is awesome
<m_3> ok, so what's the party line here... formulas or formulae?
<kim0> we can use those in next irc tuition weeks :)
<kirkland> m_3: i say formulae, as I love my latin ;-)
<kirkland> m_3: oh, and principia is a latin word ;-)
<niemeyer> m_3: We're a heavily distributed company, so we can't really rule that out! ;-)
<m_3> kirkland: ha!
<hazmat> the npr kojo n. show is doing a segment on cloud computing right now
<niemeyer> I personally Go with formulas, because that matches the Portuguese word as well
<kirkland> m_3: oh, you need to meet negronjl 
<niemeyer> and the "ae" termination is alien to me
<niemeyer> and I suck at languages and am lazy.. all of these
<kirkland> m_3: he's written some puppet modules for deploying hadoop;  would be a great place to start your ensemble/hadoop work
<m_3> kirkland: great thanks
<m_3> kirkland: yeah, just finished with travel stuff and basic canonical setup stuff (although I question if I'll ever get mumble set up properly)
<hazmat> yeah.. i'd vote for formulas as well.. no need for additional latin obscurity ;-)
<kirkland> m_3: heh
<m_3> kirkland: just branched ensemble and am digging through code
<m_3> niemeyer: and docs!
<negronjl> hi m_3
<niemeyer> m_3: Yeah, I guess we'll have to do some voting there.. :-)
<m_3> hi negronjl, great to meet you
<negronjl> m_3:  nice to meet you.
<m_3> negronjl: Dustin sent me some repo links a couple of days ago... lemme dig
<negronjl> m_3:  ping me when you get settled and if you have questions.  I am sure that we should be able to port the orchestra-modules to ensemble formulas relatively quickly ( hadoop one comes to mind )
<m_3> negronjl: awesome man... thanks!
<negronjl> m_3:  np
<_mup_> ensemble/trunk r253 committed by gustavo@niemeyer.net
<_mup_> Merged debug-hook-fixes branch [r=hazmat,bcsaller,jimbaker]
<_mup_> This branch fixes a number of problems in the debug-hooks functionality,
<_mup_> and switches to using tmux for solving some of them. For instance:
<_mup_> - joined and departed hooks are now valid
<_mup_> - Sometimes screen was being fired with the ubuntu user
<_mup_> - Sometimes screen was firing two independent sessions with the same name
<_mup_> - The exit handler was only called on HUP
<_mup_> - If the hook.sh shell died for whatever reason, it would hang forever
<m_3> negronjl: I was planning on starting with something simple like hdfs or zookeeper from scratch, but now that we have the orchestra modules, I'll start with those.
<niemeyer> Well written spam.. got through the spam filter, and I almost thought it was a real one
<niemeyer> This was the issue, though: PS should I speak with someone else at niemeyer?
<negronjl> m_3:  sure.  let me know if you have questions about it.
<negronjl> m_3:  the plan is, at some point, to port the modules into formulas so both projects benefit from them.
<m_3> negronjl: thanks! I'll hit you up for a higher-bandwidth conversation about it after I dig through them
<kim0> Oh we did hit linuxtoday yeeha .. I did submit it, but somehow missed it when they published it
<kim0> http://www.linuxtoday.com/it_management/2011060900341NWSVUB
<_mup_> ensemble/expose-provision-service-hierarchy r289 committed by jim.baker@canonical.com
<_mup_> Better logging
<niemeyer> kim0: Do you have a blog post for the video yet?
<kim0> niemeyer: yeah pushed
<kim0> http://cloud.ubuntu.com/2011/06/zero-to-ensemble-in-5-mins/
<kim0> niemeyer: ^
<niemeyer> kim0: Awesome, will tweet that
<kim0> great thanks
<niemeyer> kim0: Thank you!
<niemeyer> kim0: Just watching it now
<kim0> oh if it's horrible we can do others to replace it :)
<niemeyer> kim0: So far it's awesome :-)
<niemeyer> kim0: Very smooth and easy going
<kim0> glad you like it
<niemeyer> kim0: Yeah, brilliant stuff
<kim0> woohoo great
<_mup_> ensemble/expose-provision-service-hierarchy r290 committed by jim.baker@canonical.com
<_mup_> PEP8, comments, demoted log levels
<_mup_> ensemble/expose-provision-service-hierarchy r291 committed by jim.baker@canonical.com
<_mup_> Cleanup wrt review points
<kim0> btw the docs at the installation from ppa step .. is missing a "sudo apt-get update" .. should I file a bug for that ;)
<niemeyer> kim0: In a next screencast, it might be worth pointing out that both formulas use the same base image, and that it's the formula itself that defines how it works
<niemeyer> kim0: Also that we'll enable multiple formulas in a machine, etc
<niemeyer> kim0: The perfect is the enemy of the good enough, though.. it's fantastic to have _something_ out there
<kim0> Yeah, this one only scratches the surface ..
<kim0>  there will be deeper dives
<niemeyer> kim0: We have to attempt to emphasize the points that really make it stand out
<kim0> yeah agree .. I'll make sure to mention that next time indeed
<jimbaker> niemeyer, do you want to have standup today?
<jimbaker> regardless, i'm pushing on getting my outstanding branches fixed with respect to their reviews
<niemeyer> jimbaker: Have you read the email related to standups from yesterday?
<jimbaker> niemeyer, reading it now
<niemeyer> jimbaker: Hold on, I'll add an extra entry there:  - Read your email. ;-)
<jimbaker> niemeyer, ok :)
<koolhead17> hi all
<koolhead17> kim0: thanks!! :)
<koolhead17> SpamapS: hey
 * kim0 nods
<kim0> koolhead17: have fun :)
<koolhead17> :P
<niemeyer> jimbaker: On expose-watch-exposed-flag, you don't have to move it to WIP
<niemeyer> jimbaker: It's approved, and still pending another review
<niemeyer> koolhead17: Hey there
<koolhead17> niemeyer: hello :)
<jimbaker> niemeyer, ok - i just wanted to indicate i'm making some small changes in response to the review points, which also address downstream needs, and they are about to land...
<niemeyer> jimbaker: That's fine.. but by moving it to WIP you remove any chance of someone else looking at your branch at that time
<jimbaker> niemeyer, understood - i will be more careful for sure
<niemeyer> jimbaker: Sometimes it's worth moving it to WIP, when the changes are pervasive and you'd rather have the next reviewer looking at the new version instead
<niemeyer> jimbaker: For this branch, that's not the case since it was basically a +1 with minors
<jimbaker> this is one aspect of the new review process i was not aware of, so basically leave it in the state left by the reviewers, unless otherwise negotiated
<jimbaker> stating it this way describes the continuity of our review process, of course :)
<niemeyer> jimbaker: It really depends on the intention
<niemeyer> jimbaker: If you want to get a second review, don't move it to WIP
<niemeyer> jimbaker: If you want the second review to look at the new version because there's something important/significant there, move it to WIP
<jimbaker> niemeyer, got it
<kirkland> hey guys ... when writing a formula that requires user input at deploy
<kirkland> ie, i need to prompt the user to choose a password
<kirkland> where does this go?
<kirkland> i need to obtain it from the user at deploy time, and then some how get it over to the install hook
<hazmat> kirkland, there's some work on service config that's almost done to help with service configurable settings
<kirkland> hazmat: hmm, okay
<hazmat> kirkland, there's a new hook cli api to retrieve the settings, and a new hook that gets invoked when the config changes as part of that work
<kirkland> hazmat: neat
<niemeyer> kirkland: We were also just talking about this today
<niemeyer> kirkland: We'll have a kind of configuration that will create passwords automatically
<niemeyer> kirkland: So that the user doesn't need to intervene during deployment
<kirkland> niemeyer: how is the user informed of the password?
<kirkland> niemeyer: in my case, they need it
<niemeyer> kirkland: Meanwhile, our recommendation is to have a pre-defined password and putting it in the description
<niemeyer> kirkland: They can inspect the settings of any service
<kirkland> niemeyer: hmm, okay
<niemeyer> kirkland: Or, that's the plan, anyway.. config settings is one of the things being worked on right now (and for the past couple of months)
<kim0> aren't formulas just using `pwgen 10 1` to create passwords
<niemeyer> kim0: Slightly different case
<kim0> ah the user needs to be informed of it
<niemeyer> kim0: They could _also_ use that system, actually
<niemeyer> kim0: But the opposite isn't true: the mechanism we're using for inter-formula communication won't help kirkland
<kirkland> niemeyer: https://ec2-184-73-37-114.compute-1.amazonaws.com/
<kirkland> niemeyer: that's what I'm working on
<kirkland> niemeyer: a formula for setting up ajaxterm
<kirkland> niemeyer: for that login to be useful, though, i need the user to have a username/password
<niemeyer> kirkland: hah! Sweet!
<niemeyer> kirkland: Right, makes sense
<kirkland> niemeyer: as unfortunately, ajaxterm does not support ssh keys :-(
<kim0> yummy
<kirkland> niemeyer: so there has to be a user/pass in there
<niemeyer> kirkland: That config idea we mentioned above is perfect for that
<kirkland> niemeyer: perfect
<kirkland> niemeyer: the config settings you mean?
<niemeyer> kirkland: The user will be able to say something like ensemble get ajaxterm password
<niemeyer> kirkland: Or similar
<kirkland> niemeyer: okay
<niemeyer> kirkland: Isn't ajaxterm just using the real machine users?
<kirkland> niemeyer: yes
<niemeyer> kirkland: If so, you can recommend people to ssh into the machine in the interim, for sorting the password
<niemeyer> kirkland: This works: ensemble ssh $SERVICE/$N
<niemeyer> kirkland: and also: ensemble ssh $N (where $N is a machine number)
<kirkland> niemeyer: cool, will do
 * koolhead17 just executed wordpress formula  :)
 * koolhead17 is happy
 * niemeyer high-fives koolhead17
<koolhead17> niemeyer: :D hehe
<niemeyer> jimbaker, bcsaller, hazmat: Do you have any summary for an Ensemble talk that I could reuse?
<hazmat> niemeyer, i think i sent around my pycon lightning talk notes
<bcsaller> gustavo: I have to prep slides for my upcoming talk, but I don't have anything yet
<niemeyer> hazmat: I mean a summary for a talk to send the event organizers
<bcsaller> gustavo: are you looking for prose or slides?
<hazmat> niemeyer, ah.. hmm
<niemeyer> bcsaller: Just the usual blurb for the schedule
<niemeyer> I can easily come up with something, but laziness made me check if you already had one readily available.
<jimbaker> niemeyer, not yet
<bcsaller> sorry, I don't
<niemeyer> Cool, no worries
<hazmat> niemeyer, this is my unsuccessful submission for plumbers - https://pastebin.canonical.com/48481/
<niemeyer> hazmat: Ah, sweet, thanks!
<_mup_> ensemble/expose-provision-service-hierarchy r292 committed by jim.baker@canonical.com
<_mup_> Removed watch_exposed_flags changes from this branch to move upstream
<kirkland> niemeyer: relation-set ip="$ipaddr" port=443 hostname=`hostname -s`
<kirkland> niemeyer: is that what I need for this byobu/ajaxterm formula?
<kirkland> niemeyer: since the service runs on 443?
<niemeyer> kirkland: Maybe.. a URL might do as well
<niemeyer> kirkland: set url=...
<SpamapS> kirkland: lol, hahaha I should have read here before answering you over there. :)
<kirkland> SpamapS: ;-)
<kirkland> SpamapS: i moved those conversations here, instead of bothering you there ;-)
<SpamapS> kirkland: nice. I've been thinking about building an rsync formula to solve the "shared upload" problem until I can wrap my head around gluster/nfs
<SpamapS> basically just a formula that provides a box which will rsync a dir from all related service units to all related service units
<kirkland> SpamapS: i'm going to follow byobu-web with a byobu-classroom, that depends on byobu-web
<kirkland> SpamapS: which sets up the one-writer, many readers classroom mode of byobu (previously called "screenbin")
<SpamapS> kirkland: isn't that all still just one machine?
<SpamapS> niemeyer: btw, how is config settings doing? Any branch we can help test out or anything? At this point it is being cited as the solution for a lot of stuff. ;)
<niemeyer> SpamapS: bcsaller would be the right person to report on this
<SpamapS> bcsaller: what it is brother? Got the 411? :)
<kirkland> niemeyer: and is this correct?
<kirkland> provides:
<kirkland>   website:
<kirkland>     interface: https
<bcsaller> SpamapS: I have a branch that's working; I can effect changes on running ec2 deployments with it. However, there are some issues with the testing of the branch since I merged it with trunk, partly but not completely resolved. Tests somewhere are bleeding setup state and changing the outcome depending on execution order
<niemeyer> kirkland: The interface is actually an Ensemble level interface
<niemeyer> kirkland: Rather than the protocol itself
<bcsaller> SpamapS: Kapil did have some comments on the branch that needed some changes as well, and I am working on those now, but it's pretty close 
<SpamapS> bcsaller: sounds fun. :-P alright, well color me interested in testing ASAP. Lots of settings to add. :)
<niemeyer> kirkland: We still have to settle on a good set of those
<bcsaller> SpamapS: great, thanks
<kirkland> niemeyer: okay;  so what should it be in my case, right now?
<niemeyer> kirkland: I think "url" would be a good name, for instance, for something that provides _only_ a URL
<SpamapS> website has been the standard name for the relation which provides you with "how to access me" .. though right now it just spits back IP ... eventually I think it should provide status url(s) and possibly the desired canonical hostname.
<niemeyer> kirkland: Even though, perhaps it's a good idea to make that more specific
<niemeyer> kirkland: Not sure.. go with "http" for the moment I guess
<kirkland> niemeyer: k
<niemeyer> kirkland: (not https.. anyone handling http will most likely handle both)
<kirkland> niemeyer: k
<kirkland> SpamapS: for your IP=...
<kirkland> SpamapS: what about: ip=$(wget -q -O- http://169.254.169.254/latest/meta-data/public-ipv4)
<niemeyer> SpamapS: That's an interesting idea
<niemeyer> kirkland, SpamapS: +1 on website
<kirkland> niemeyer: okay, so change http to "website" then
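[Editor's note: per the exchange above, the resulting stanza in the formula's metadata.yaml would read roughly as follows. This is a sketch reconstructed from the conversation only; the interface naming convention was still being settled at the time.]

```yaml
# provides: declares relations this formula offers to others;
# "website" is both the relation name and, per this discussion,
# the agreed interface name (replacing the earlier "https"/"http").
provides:
  website:
    interface: website
```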
<SpamapS> niemeyer: err.. https and http are vastly different protocols. ;)
<SpamapS> kirkland: won't work on lxc
<kirkland> SpamapS: hrm
<SpamapS> kirkland: and thats not necessarily the IP we want.. since a load balancer should use the internal IP
<SpamapS> kirkland: been bouncing around the idea in my head of a provider-agnostic "machine info" script that would be part of ensemble.
<kirkland> SpamapS: facter ;-)
#ubuntu-ensemble 2011-06-15
<kirkland> SpamapS: plus metadata
<SpamapS> does facter know the ec2 public hostname? thats the tough one to get right now
<kirkland> SpamapS: it's really easy
<kirkland> SpamapS: wget -q -O- http://169.254.169.254/latest/meta-data/public-hostname
<kirkland> SpamapS: if that fails, then use hostname -s
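[Editor's note: the fallback kirkland describes can be sketched as a small shell helper. The function name is illustrative, not Ensemble code; 169.254.169.254 is EC2's standard instance metadata endpoint, and the short timeout makes non-EC2 machines (e.g. lxc) fail fast and fall through to `hostname`.]

```shell
#!/bin/sh
# public_name: print the EC2 public hostname if the metadata service
# answers; otherwise fall back to the machine's short hostname.
public_name() {
    wget -q -T 2 -t 1 -O- \
        http://169.254.169.254/latest/meta-data/public-hostname \
        2>/dev/null \
        || hostname -s
}

public_name
```

As SpamapS notes above, this still isn't necessarily the address a load balancer should use, which is part of the motivation for a provider-agnostic "machine info" tool.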
<niemeyer> SpamapS: Heh
<niemeyer> SpamapS: https happens to be a superset of http
<kirkland> SpamapS: http://paste.ubuntu.com/626933/
<kirkland> niemeyer: hmm, i'm not sure about that ... try https://ec2-50-17-174-12.compute-1.amazonaws.com:80
<kirkland> niemeyer: ie, point https to a port that talks http
<SpamapS> niemeyer: yes, I can see where you could also just add 'ssl=1' to the http interface.. 
<kirkland> niemeyer: and the browser is unhappy
<niemeyer> kirkland: Ok, my apologies for the imprecise wording...
<niemeyer> HTTPS is merely an encryption wrapping on top of HTTP itself
<kirkland> niemeyer: oh, sorry, i wasn't trying to be an ass;  i just thought for a second, "whoa, that's neat, I didn't know that!"  then tried it, and it didn't work :-)
<kirkland> niemeyer: okay, cool
<SpamapS> kirkland: sure, I'd like to abstract that into a tool that is like facter, but not as clunky as facter. :)
<niemeyer> But either way, website is a much better term
<kirkland> SpamapS: +1 from me
<SpamapS> kirkland: I was reading facter, and one thing it gets really, really wrong, is that it only runs one process at a time.
<kirkland> SpamapS: i posted something like this to ubuntu-devel@, if you can to comment there
<niemeyer> and "http" would lead to all kinds of confusion regarding protocol vs. relation..
<niemeyer> So I stand corrected.
<SpamapS> kirkland: it runs 100+ external processes, but waits on each one before moving to the next.
<kirkland> SpamapS: i got *reamed* in #ubuntu-devel today, though
<SpamapS> kirkland: eh?
<kirkland> niemeyer: okay, i've changed it to website
<SpamapS> kirkland: I saw your post to the ml ..
<kirkland> SpamapS: see today's log, between me, keybuk, cjwatson, and friends
<kirkland> SpamapS: they have a few good points
<kirkland> SpamapS: but still, i think we could use something, if it's named and documented appropriately
<kirkland> bzr: ERROR: Invalid url supplied to transport: "lp:~kirkland/principia/oneiric/byobu-web/trunk": No such source package byobu-web.
<kirkland> SpamapS: did you warn me about this yesterday? 
<SpamapS> kirkland: no, but its a known problem
<SpamapS> kirkland: and being worked on
<kirkland> SpamapS: okay... what's the solution?
<SpamapS> kirkland: for now you have to use +junk, or upload a package named 'byobu-web' to a PPA somewhere
<kirkland> SpamapS: hah, seriously?
<kirkland> :-)
<SpamapS> kirkland: re the conversation w/ Keybuk .. his point about DNS is *quite* good.
<SpamapS> we could just simply stop passing around IP
<SpamapS> The good thing about using DNS is it can be contextualized for the requester...
<kirkland> SpamapS: mokay
<SpamapS> It was glossed over, but I see the point
<niemeyer> Dinner time!
<_mup_> ensemble/expose-watch-exposed-flag r245 committed by jim.baker@canonical.com
<_mup_> Fixed tests with respect to watch_exposed_flag using its callback on its first value, instead of yielding it
<_mup_> ensemble/expose-watch-exposed-flag r246 committed by jim.baker@canonical.com
<_mup_> PEP8
<_mup_> ensemble/expose-watch-exposed-flag r248 committed by jim.baker@canonical.com
<_mup_> Better comments
<_mup_> ensemble/expose-provision-service-hierarchy r293 committed by jim.baker@canonical.com
<_mup_> Merged upstream expose-watch-exposed-flag
<kirkland> SpamapS: hmm, so my relation-set items are not showing up in 'ensemble status'
<kirkland> No ENSEMBLE_AGENT_SOCKET/-s option found
<SpamapS> relation-set shouldn't show in status unless a new change was made
<SpamapS> that error though, means that the environment is screwed up somehow
<SpamapS> kirkland: is your formula in a branch I can look at?
<kirkland> SpamapS: lp:~kirkland/+junk/byobu-web
<kirkland> SpamapS: thanks for your review and all your help
<SpamapS> kirkland: no problem.. its fun. :)
<kirkland> SpamapS: ;-)
<SpamapS> kirkland: are you running that via debug-hooks ?
<kirkland> SpamapS: um, no
<SpamapS> kirkland: where are you seeing the error about ENSEMBLE_AGENT_SOCKET?
<kirkland> SpamapS: oh, i just tried to run the relation-set by hand
<SpamapS> kirkland: right, that won't ever work. :)
<kirkland> SpamapS: heh
<SpamapS> kirkland: I mean, you can make it work but you need to be on the machine, the socket is for talking to the local unit agent
<SpamapS> kirkland: thats why debug-hooks is so pimp
<kirkland> SpamapS: then that should probably be moved to /usr/lib/ensemble
<SpamapS> kirkland: tried it yet?
<kirkland> SpamapS: if it's not meant be run by hand
<SpamapS> kirkland: not really, you can run it in a cron job.. :)
<kirkland> SpamapS: no, how do i do that?
<SpamapS> ensemble debug-hooks service/0
<SpamapS> you MIGHT recognize the interface.. ;)
<SpamapS> do that, then relate something to it
<kirkland> SpamapS: nice ;-)
<kirkland> SpamapS: hmm, okay, i'm missing something, conceptually here
<SpamapS> kirkland: about to wrap up and go join the cloudcamp pre-camp cocktail party. :)
<SpamapS> kirkland: whats up?
<kirkland> SpamapS: meh, it can wait
<kirkland> SpamapS: perhaps you can help me more tomorrow
<SpamapS> kirkland: I've got 5 more minutes
<SpamapS> alright.. I'm out.. good luck. :)
<kirkland> just wondering if anyone's around for some more formula help
<kirkland> okay, i've been beating my head against the wall for a while now and I'm just now realizing
<kirkland> that I'm changing the formula in my local repository
<kirkland> and then i'm trying to deploy it
<kirkland> but the modified formula isn't getting deployed
<_mup_> Bug #797493 was filed: unresolved revision %r in upgrade-formula error message <Ensemble:New> < https://launchpad.net/bugs/797493 >
<kirkland> i'm getting a lot of these:
<kirkland> 2011-06-15 02:50:48,534: hook.output@ERROR: debconf: unable to initialize frontend: Readline
<kirkland> niemeyer: i'm disappointed to see byobu pruned from head :-(
<kirkland> niemeyer: for tmux
<kirkland> niemeyer: i see an obscure note about "race" conditions, but no other explanation
<niemeyer> kirkland: We didn't intend to exclude byobu specifically, of course
<niemeyer> kirkland: We replaced screen
<niemeyer> kirkland: I did, actually
<kirkland> niemeyer: right, bzr-blame told me :-)
<niemeyer> kirkland: Because tmux behaves in a better way when facing concurrent executions
<niemeyer> kirkland: If we fix that, we can bring screen back
<kirkland> niemeyer: so I'd love to help you fix that so that ensemble can use screen/byobu for this
<kirkland> niemeyer: by all means, I'm at your disposal
<niemeyer> kirkland: Sounds good.. I have no special attachment to either in this case
<kirkland> niemeyer: right-o;  I sorta do :-)
<niemeyer> kirkland: Used screen for a long time.. have been experimenting with tmux lately.. can use either without worries
<niemeyer> kirkland: Yeah, I know ;)
<kirkland> niemeyer: it doesn't have to be now, but can you explain to me your concurrency problems?
<niemeyer> kirkland: Will be happy to put your stuff to crank :)
<niemeyer> kirkland: We just need to sort the race
<niemeyer> kirkland: It's pretty simple to explain, actually
<kirkland> niemeyer: and i'll gladly write a custom configuration file for ensemble-debug-hooks-byobu
<niemeyer> kirkland: Given a session name, screen offers no way that I could perceive to prevent the same user from potentially getting two different sessions started
<kirkland> niemeyer: as I'm using it now, I can see the things I want to know about a system when I'm logged into it through debug
<kirkland> niemeyer: do you want 1 session, or 2?
<niemeyer> kirkland: The first
<kirkland> niemeyer: okay, so you want 1 session, call it "ensemble-debug" or something, right?
<niemeyer> kirkland: For both
<kirkland> niemeyer: okay, and you also want to attach to that one and only one session?
<niemeyer> kirkland: Yes, just a non-racy way to create a named session
<niemeyer> kirkland: If the same user executes the same command twice in different terminals, the command should fail or succeed, but it should never yield two different sessions
<niemeyer> kirkland: We can't _join_ the session, though..
<niemeyer> kirkland: Simply new-session + exec command in new session, without races
<kirkland> niemeyer: to reproduce the problem you're experiencing in screen ...
<kirkland> niemeyer: i'm going to create a screen session with "screen -S debug"
<kirkland> niemeyer: okay, now I have a session
<kirkland> niemeyer: called 'debug'
<kirkland> niemeyer: and now I want to run a command in that session
<niemeyer> Ok
<niemeyer> kirkland: You want to create it if it doesn't exist, and run it in the single session
<kirkland> niemeyer: okay, using the "at" command
<kirkland> niemeyer: i mean, screen's at
<niemeyer> kirkland: Executing the command isn't the problem.. the problem is creating the session in a non-racy way
<kirkland> niemeyer: byobu -xRS %s-hook-debug -t shell
<kirkland> niemeyer: that's what you were using before, right?
<kirkland> niemeyer: and that's what you found to be racy?
<niemeyer> kirkland: No, that'd join the session
<kirkland> niemeyer: what were you using to create it?
<kirkland> niemeyer: i'm trying to reproduce the race
<niemeyer> kirkland: This is being run from a script.. it can't join the session
<kirkland> niemeyer: right, so you create the session detached
<niemeyer> kirkland: You don't have to reproduce the race.. any solution will do :-)
<kirkland> niemeyer: some people create their byobu session in a cronjob at boot
<kirkland> niemeyer: so that they can launch irssi and use byobu+irssi as their irc proxy
<niemeyer> kirkland: Yeah, that should be fine.. the machine won't boot twice at the same time.
<kirkland> niemeyer: if it does, I want to see it!
<niemeyer> +1! :-)
<kirkland> niemeyer: let me find my emails where I've helped someone do this
<kirkland> niemeyer: it's a permutation of this:
<kirkland>        -d -m   Start screen in "detached" mode. This creates a new session but doesn't attach to it. This is useful for system startup scripts.
<kirkland> niemeyer: one more question... you want this session owned by root, or ubuntu?
<kirkland> niemeyer: i've noticed both your tmux and byobu solutions use sudo to get to a shell
<niemeyer> kirkland: Yeah
<kirkland> niemeyer: i found a minor bug with the way that was being called
<kirkland> niemeyer: which I'll fix as I work on this
<niemeyer> kirkland: What was it?
<kirkland> niemeyer: but I'm curious about the motivation and  design
<niemeyer> kirkland: It's just a script on the server feeding a screen/tmux session
<kirkland> niemeyer: right
<kirkland> niemeyer: but putting the user in a root shell
<kirkland> niemeyer: rather than an ubuntu shell
<niemeyer> kirkland: The race comes from the fact that the session can be started by the server feeding a new shell, or by the user starting the debugging session
<kirkland> niemeyer: is that by design?
<kirkland> niemeyer: sorry, this is aside from the race :-)
<niemeyer> kirkland: There are details, but that's the core idea
<niemeyer> kirkland: It's root because the formula runs as root for good reasons
<niemeyer> kirkland: So to emulate the same environment, we run the hook as root too
<kirkland> niemeyer: okay
<kirkland> niemeyer: fair enough;  that's the explanation i was looking for
<niemeyer> kirkland: Cool.. we should document that there too
<kirkland> niemeyer: so basically, we just need this in /etc/rc.local or an upstart boot script:
<kirkland> byobu -d -m -S byobu.debug bash
<kirkland> niemeyer: which will start a screen+byobu session, in detached mode at boot;  name it "byobu.debug", and run a single window called bash
<niemeyer> [niemeyer@gopher ~]% screen -d -m -S byobu.debug bash
<niemeyer> [niemeyer@gopher ~]% screen -d -m -S byobu.debug bash
<niemeyer> [niemeyer@gopher ~]% screen -ls
<niemeyer> There are screens on:
<niemeyer>         20923.byobu.debug       (06/15/2011 01:53:33 AM)        (Detached)
<niemeyer>         20792.byobu.debug       (06/15/2011 01:53:31 AM)        (Detached)
<kirkland> niemeyer: obviously we can doctor this up a bit
<kirkland> niemeyer: right, but you'd do that on boot, just once, no?
<niemeyer> kirkland: This is done within the script which starts the debugging session
<niemeyer> kirkland: and by the user's ssh
<kirkland> niemeyer: okay, so my first thought, what i've been working on here, is to start the screen debugging session at boot;  and then attach to it when/if the user decides to go into the debug_hook mode
<niemeyer> kirkland: This has to be started by the script that starts the debugging session, or the user ssh shell, whatever comes up first
<niemeyer> kirkland: This session is terminated when the user leaves the debugging session
<kirkland> niemeyer: okay
<kirkland> niemeyer: so the trigger is when the user starts the debug session
<niemeyer> kirkland: Yeah.. the problem is really how to start two sessions without races
<kirkland> niemeyer: and if the user ssh's in (outside of the debug hook), do you want them in the debug session or not?
<niemeyer> Erm..
<niemeyer> a single session without races
<niemeyer> kirkland: A single session with the given name should be created
<niemeyer> kirkland: In tmux, that's "tmux new-session"
<niemeyer> kirkland: It prevents the same session from being started multiple times
<kirkland> niemeyer: hmm, perhaps still racy, but what about:
<kirkland> niemeyer: byobu -r byobu.debug || byobu -S byobu.debug
<kirkland> niemeyer: so reattach, if possible; if not, create
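The `||` one-liner still leaves a window between the failed reattach and the create. A minimal sketch of one way to close it, using an atomic mkdir as the lock; the session name comes from the discussion, the lock path is an assumption, and the byobu calls are commented out so the sketch runs standalone:

```shell
# Sketch of a race-free attach-or-create. mkdir is atomic, so only one
# invocation "wins" and creates the session; every other one attaches.
SESSION=byobu.debug
LOCK="$(mktemp -d)/$SESSION.lock"       # illustrative lock path
if mkdir "$LOCK" 2>/dev/null; then
    # byobu -d -m -S "$SESSION" bash    # we won: create detached session
    RESULT=created
    rmdir "$LOCK"
else
    # byobu -r "$SESSION"               # lost the race: just reattach
    RESULT=attached
fi
echo "$RESULT"
```

tmux's `new-session` simply refuses to create a second session with the same name, which is the property niemeyer points to above.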
<niemeyer> kirkland: Yeah the window even has a nice shape in the middle ;-)
<kirkland> niemeyer: let me dig through the screen source code
<niemeyer> kirkland: Ok.. it's very late here.. I'll really have to step out to bed now
<kirkland> niemeyer: sure;  I am *going* to solve this for you ;-)
<niemeyer> kirkland: but please let me know what you find.. we can switch back if there's an elegant way to do this in screen
<niemeyer> kirkland: Awesome, thanks :-)
<niemeyer> Night!
<kirkland> niemeyer: night
 * hazmat catches up on the night discussion
<hazmat> if anyone has a moment to approve a trivial.. https://pastebin.canonical.com/48502/
<hazmat> which fixes bug 797493
<_mup_> Bug #797493: unresolved revision %r in upgrade-formula error message <Ensemble:New> < https://launchpad.net/bugs/797493 >
<_mup_> Bug #797696 was filed: FAQ page needs updating <Ensemble:New for kim0> < https://launchpad.net/bugs/797696 >
 * niemeyer waves
<niemeyer> hmm.. compiz update.. let's hope it fixes some of the stability issues.
<hazmat> niemeyer, indeed
<hazmat> niemeyer, if you haz a moment to approve a trivial.. https://pastebin.canonical.com/48502/
<niemeyer> hazmat: Hey man
<niemeyer> Looking
<hazmat> its a fix for the upgrade reporting output in bug 797493
<_mup_> Bug #797493: unresolved revision %r in upgrade-formula error message <Ensemble:New> < https://launchpad.net/bugs/797493 >
<niemeyer> hazmat: +1!
<hazmat> niemeyer, just looking over the google lang performance.. i hadn't realized how much slower golang was
<niemeyer> hazmat: How much slower than what?
<hazmat> niemeyer, https://days2011.scala-lang.org/sites/days2011/files/ws3-1-Hundt.pdf
<hazmat> java, c++, scala are the comparisons
<niemeyer> hazmat: Ah, yeah, I've seen that paper
<niemeyer> hazmat: Go is slower than hugely optimized C++, no questions about that :-)
<hazmat> what was surprising was how well the naive scala did
<niemeyer> hazmat: Also note that there's really no "Go Pro" in the paper
<hazmat> niemeyer, right, they're just doing optimizations a go 'pro' developer would do
<niemeyer> hazmat: The person doing the optimization reportedly didn't spend much time on it
<hazmat> ah
<niemeyer> hazmat: and was surprised when the paper was published
<hazmat> niemeyer, i thought they solicited robP for feedback on the pro stuff
<niemeyer> hazmat: No, it was Ian Taylor, the guy from gccgo
<niemeyer> hazmat: http://j.mp/lwuiAv 
<m_3> will the bootstrap instance be unhappy if: A) I restart zookeeper on a node during install, and B) zookeeper comes back up on that node with a sun-java6-jdk instead of openjdk?
<m_3> (trying to install sun java as part of a formula)
<niemeyer> m_3: We haven't exercised that kind of scenario much yet
<m_3> I'd rather do an update-alternatives which would probably affect zookeeper
<niemeyer> m_3: Restarting ZooKeeper is supposed to work fine, and it's not hard even
<m_3> I'll experiment...
<niemeyer> m_3: We know about a few issues that may complicate it right now
<niemeyer> Specifically, we get a spurious event about the session that we're not handling properly yet
<niemeyer> m_3: The event has to be ignored..
<kim0> hey guys, is the multiple formulas per machine feature landing soonish ? Also is this the same as, machine policy formulas or whatever its called
<m_3> niemeyer: is there a way to mute events from that node before I do a zk restart?
<niemeyer> m_3: mute?
<niemeyer> m_3: What kind of event?
<m_3> sorry, don't know the txzk interface...
<niemeyer> m_3: No worries.. just trying to figure out what you really mean
<m_3> is there a way to warn the bootstrap instance before bouncing zk on a node
<niemeyer> kim0: We're not working on it right now, so it won't land _soon_, but we don't want to take too long either
<niemeyer> m_3: I still don't get what you mean.. it sounds a bit reversed..
<m_3> ...so it can ignore the bad session events you mentioned
<niemeyer> m_3: zk is _inside_ the bootstrap node
<m_3> niemeyer: if I bounce the zk instance running on the new node you said the bootstrap instance gets spurious session events that it doesn't like
<m_3> niemeyer: so is there a way for the new node to warn the bootstrap instance about the pending bounce of zk on the new node
<niemeyer> m_3: Right now there's only a single zk instance, and it is inside the bootstrap node
<m_3> niemeyer: oh wow... 
<m_3> niemeyer: I see zk instance running on new node... let me make sure I'm on the right instance
<m_3> niemeyer: yeah, running on both
<niemeyer> m_3: That's likely an artifact of installing the package on the second instance
<niemeyer> m_3: Can you please investigate how zookeeper is being started on the second instance?
<niemeyer> m_3: It's certainly not being used
<m_3> niemeyer: ok, will do... thanks
<hazmat> niemeyer, it's because it's part of the install image
<hazmat> niemeyer, we should probably just go ahead and install java, but not zookeeper, and let cloud-init install zk
<hazmat> that will remove the extraneous zk
<hazmat> on the service machines
<niemeyer> hazmat: You mean it's indeed an artifact of installing the package?
<hazmat> niemeyer, yes
<niemeyer> hazmat: Ok, hmm
<hazmat> niemeyer, we don't need it on the service machines, just the python zk extension and libzk
<niemeyer> hazmat: Your suggestion feels good
<niemeyer> hazmat: I was going to suggest we disable starting the service, but indeed there's no reason for the package to even exist there
<hazmat> niemeyer, has upstart grown a nice way to do that?
<hazmat> my understanding was you need to do some sort of override file that explicitly depended on a nonexistent event
<niemeyer> hazmat: Not sure.. I recall hearing that it would.  Either way, several services check /etc/default/* for info on whether to start or not 
<hazmat> hmm.. true
<m_3> niemeyer, hazmat:  thanks y'all
<hazmat> it looks like the other way (via upstart in natty) is echo manual >> /etc/init/zookeeper.override
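The override trick hazmat quotes can be exercised safely against a scratch directory (the real path is /etc/init, which needs root to write):

```shell
# The upstart "manual" override: writing the single word "manual" into
# /etc/init/<job>.override stops upstart from auto-starting the job.
# A temp dir stands in for /etc/init here so this runs anywhere.
INITDIR=$(mktemp -d)                          # stand-in for /etc/init
echo manual >> "$INITDIR/zookeeper.override"  # job is never auto-started
cat "$INITDIR/zookeeper.override"
```

The /etc/default route niemeyer mentions works similarly for services whose init scripts source /etc/default/&lt;name&gt; before deciding whether to start.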
<hazmat> niemeyer, i think ben's branch is blocked on bleeding test failures in one of the service lifecycle integration branches.. 
<hazmat> i had a quick look on monday.. but its going to take some more time to dig into.
<_mup_> ensemble/trunk r254 committed by kapil.thangavelu@canonical.com
<_mup_> [trivial] fix formatting for upgrade error [r=niemeyer][f=797493]
<niemeyer> hazmat: Ok.. it'd be good to have more of that conversation happening in the open
<niemeyer> hazmat: So that we can get a feeling of what to do about it.. I'd really like to make our tests rock solid in this milestone
<niemeyer> hazmat: and not rely on any kind of timing
<hazmat> niemeyer, this isn't a timing issue, it's some sort of bleeding issue.. the tests run fine looped in isolation, but full test runs are busted
<hazmat> i suspect its the reset_logging/save_logging stuff.. because its overly ambitious in the reset, which cascades to others
<hazmat> which depend on the default
<niemeyer> hazmat: Have you seen Jim's branch?
<hazmat> at least that was the case last time around
<hazmat> niemeyer, yeah.. i've discussed it with him, there are more timing issues there.. but i think the problem is having async cb chained call stacks without an explicit control point
<hazmat> ie. more of a design issue there
<hazmat> i've also expressed i'd prefer to see that encapsulated better so that refactoring it to the machine agent is a bit easier
<niemeyer> hazmat: One of the things there is that it kills reset and save_logging entirely
<hazmat> niemeyer, ah that branch
<hazmat> i haven't looked at it.. that might certainly help
<_mup_> ensemble/bootstrap-shutdown-environment r250 committed by kapil.thangavelu@canonical.com
<_mup_> note the shutdown prompt in the user guide
<_mup_> ensemble/trunk r255 committed by kapil.thangavelu@canonical.com
<_mup_> merge bootstrap-shutdown-environment [r=bcsaller,niemeyer][f=756685]
<_mup_> Ensemble bootstrap and shutdown subcommands now operate on a single
<_mup_> environment. The shutdown command also requires a user confirmation
<_mup_> before destroying the environment.
<kim0> The instructions to run from ppa https://ensemble.ubuntu.com/docs/getting-started.html#running-from-ppa
<kim0> are missing a apt-get update, does this require filing a bug/branch, or is there a simpler way
<hazmat> kim0, ugh.. mea culpa.. a trivial for that sounds good
<kim0> hazmat: what's the workflow for a trivial fix
<hazmat> kim0, change the files, paste a diff, ask for an +1 on the trivial
<hazmat> kim0, after receiving such, commit the diff to trunk with a [trivial] tag in the commit msg
<hazmat> at least that's the workflow i've been following for such
<kim0> hazmat: http://paste.ubuntu.com/627384/
<hazmat> kim0, looks good, +1
<hazmat> ;-)
<niemeyer> hazmat: +1 on the workflow.. sometimes I cowboy extremely obvious things, but I've also found useful comments from others on things I initially found obvious
<kim0> merged
<kim0> I'd like to contact Iain Farrel from design team, to ask for an Ensemble mascot. Now is probably a good time for one :) any +1/-1s
<hazmat> kim0, sounds great
<jimbaker> hazmat, is there still a timing issue in the standardize-log-testing branch?
<hazmat> jimbaker, checking.. un momento
<kirkland> hey guys -- any feedback on my bash-completion branch?  :-)
<jimbaker> hazmat, it should be fixing a problem with HookContext that's causing this "bleeding"
<jimbaker> i believe the save_logging/reset_logging simply were masking this issue
<jimbaker> kirkland, i like the idea of completion. but i did wonder, what sort of testing is typically done around stuff like this?
<kirkland> jimbaker: ensemble<space><tab><tab>
<kirkland> jimbaker: the framework is there for each subcommand to have completion too
<kirkland> jimbaker: i tried grepping "ensemble status" for that, but it's too slow
<kirkland> jimbaker: i think i'd need to cache that output
<kirkland> jimbaker: from a user experience perspective
<niemeyer> kirkland: Sorry, it wasn't in my radar.. we generally use this view for reviews:
<niemeyer> kirkland: https://ensemble.ubuntu.com/kanban/dublin.html
<jimbaker> kirkland, yeah, that makes sense re ensemble status being too slow
<niemeyer> kirkland: But it only gets there if there's a bug linked, in progress, and attached to the milestone
<niemeyer> kirkland: I'll check it out today
<niemeyer> kirkland: I think completion on status wouldn't be ideal indeed
<niemeyer> kirkland: I'd rather not complete than wait
<hazmat> kirkland, are you saying that grepping ensemble status -h is too slow for completion?
<hazmat> it shouldn't be doing any work before it has parsed the cli
<kirkland> niemeyer: right, i just tried it lightly, and it was really bad performance
<jimbaker> my assumption was that kirkland was actually getting names from ensemble status...
<kirkland> niemeyer: but getting the initial actions is really fast
<kirkland> niemeyer: and it keeps me from having to go back to ensemble --help all the time to find the right commands
<kirkland> jimbaker: no, not doing that
<jimbaker> kirkland, ok
<kirkland> hazmat: no, not -h
<hazmat> kirkland, we could probably give status an additional option to make it faster for completion
<hazmat> its pulling in almost the whole subgraph atm, but really we just need one node
<hazmat> i take that back.. the connection setup would still dominate, we do need a local cache for completion
<_mup_> ensemble/bash-completion r257 committed by bcsaller@gmail.com
<_mup_> bash_completion support with some argument support
<hazmat> assuming, that is, we were completing actual variable/value names
<kirkland> hazmat: yeah, i was thinking something like that;  caching status to disk;  backgrounding an update of the cache when it's deemed expired
<kirkland> hazmat: anyway, it's not urgent, but would make for a damn cool interface :-)
<jimbaker> kirkland, your background process could simply listen to topology changes like any other zk client can
<jimbaker> in terms of watching a given node
<kirkland> jimbaker: cool;
<jimbaker> probably better to do that way than using ensemble status (except for spiking)
<kirkland> jimbaker: hazmat: niemeyer: in any case, we now have the framework for the bash-completion piece;  if and when we can get ensemble status instantly (or at least comb a cache of ensemble status), I'll extend the bash-completion for subcommands to leverage the status info
<niemeyer> kirkland: Sounds good
<niemeyer> kirkland: Completion helps for sure
<hazmat> kirkland, we don't have any client-side daemon ... it's not clear to me what would do the background update, perhaps just invoking the cli again in the background with a cache option.. although having a daemon might be nice with the ssh tunnel reuse
<kirkland> hazmat: a daemon would be one approach
<kirkland> hazmat: another would be in the bash-completion command itself
<kirkland> hazmat: in there, i'd test the age of the cache
<hazmat> kirkland, indeed.. yeah. we'd need to show stale data in that case afaics
<kirkland> hazmat: if it's not expired, then i'd use it;  and background an update
<hazmat> or get random waits
<kirkland> hazmat: if it is expired; i'd probably just return "null" immediately (ie, no complete) and background an update
<kirkland> hazmat: where an update is something like 'ensemble status > $HOME/.cache/ensemble-status &'
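The scheme kirkland outlines can be sketched as a freshness check on the cache file. Everything ensemble-specific is stubbed out so the sketch is self-contained, and the 5-minute window is an assumption:

```shell
# Completion cache policy: complete from the cache while it is fresh;
# once stale, return nothing and refresh in the background.
CACHE=$(mktemp)                      # stands in for ~/.cache/ensemble-status
echo "services: wordpress mysql" > "$CACHE"
if [ -n "$(find "$CACHE" -mmin -5)" ]; then
    COMPLETIONS=$(cat "$CACHE")      # fresh enough: use it
else
    COMPLETIONS=""                   # expired: no completions right now...
    # ensemble status > "$CACHE" &   # ...but refresh for next time
fi
echo "$COMPLETIONS"
```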
<SpamapS> FYI, I just updated the builder recipes for ensemble to make the bzr revno part of the upstream version.
<SpamapS> hazmat: a local daemon that does the ssh tunnel and listens to zookeeper updates for all status info would be doable, would it not?
<SpamapS> hazmat: we could even leverage dbus and have it spun up when needed.
<niemeyer> Lunch time!
<kirkland> SpamapS: do you mind trying out bzr+ssh://bazaar.launchpad.net/~kirkland/%2Bjunk/byobu-web/ ?
<jimbaker> wouldn't such a status daemon also be useful for an indicator?
<SpamapS> kirkland: bootstrapping.. :)
<SpamapS> kirkland: yesterday you were saying you felt like you were missing a concept.. did you figure it out?
<kirkland> SpamapS: sort of
<kirkland> SpamapS: i think it was around bumping the formula revision number
<kirkland> SpamapS: and pushing an upgrade
<kirkland> SpamapS: i spent 4 hours making changes locally
<kirkland> SpamapS: but not seeing those in my deployments
<kirkland> (okay, 4 is an exaggeration :-)
<SpamapS> DOH I hate that
<SpamapS> I still am pretty convinced that users will despise the revno too and just wonder why we don't use a hash of the zip file or something.
<kirkland> SpamapS: i feel like if my --repository=./ then obviously i want it upgraded every time
<kirkland> SpamapS: agreed;  if we have to use a revno, i want it bumped automatically for me
<kirkland> SpamapS: i don't want to think about that
<SpamapS> kirkland: I spent like, 30 seconds trying to write a script to do that automatically, but got annoyed with the yaml libraries available. ;)
<kirkland> SpamapS: how good or bad is it?
<hazmat> maybe a --force on upgrade, to bypass the version checking
<kirkland> hazmat: sure, i'd use that (probably all the time :-)
<hazmat> looks pretty straightforward to implement
<kirkland> SpamapS: the only tricky part is that you have to set a password for ubuntu for the formula to be useful
<kirkland> SpamapS: so i'm keen on finding a way of doing that, before blogging about it :-)
<hazmat> SpamapS, we could use an md5 checksum for verifying instead of numbers, but we're playing pretty fast and loose with version numbers at that point
<SpamapS> kirkland: right, that's where config settings come in.
<SpamapS> hazmat: we've talked about this before.. --force is a nice band-aid. Users will *still* hate the revno. ;)
<kirkland> SpamapS: right, unimplemented at this point, no?
<SpamapS> principle of least surprise: I changed my formula, it should deploy anew.
<SpamapS> At least warn me that you didn't deploy the same content as whats on disk locally.
<SpamapS> kirkland: --force is unimplemented, as is the config settings (not sure which you were asking about)
<kirkland> SpamapS: config
<kirkland> SpamapS: i'm looking for the best work around for setting an ubuntu password
<kirkland> SpamapS: to a configurable value
<kirkland> SpamapS: or randomly generating it
<kirkland> SpamapS: and getting that back to the user
<kirkland> SpamapS: perhaps as a relation?
<SpamapS> kirkland: randomly generate it, and log it with ensemble-log INFO level
<SpamapS> ensemble-log --log-level INFO password=f94k1X
<SpamapS> kirkland: I suggest pwgen for the generation btw.
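SpamapS's suggestion, sketched with a hedge: pwgen may not be installed, so /dev/urandom is the fallback, and echo stands in for ensemble-log, which only exists inside hook execution:

```shell
# Generate a random 12-character password and surface it via the
# formula log (ensemble-log is stubbed with echo in this sketch).
PASSWORD=$(pwgen -s 12 1 2>/dev/null ||
           tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 12)
echo "ensemble-log INFO password=$PASSWORD"
```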
<SpamapS> kirkland: ajaxterm runs its own HTTP daemon right?
<kirkland> SpamapS: yeah, on 8022
<kirkland> SpamapS: which doesn't need to be opened to the world
<kirkland> SpamapS: just to itself
<SpamapS> kirkland: wondering if you could do some kind of SSO thing
<kirkland> SpamapS: hmm
<kirkland> SpamapS: openid-ish you mean?
<kirkland> SpamapS: that would be so sexy ....
<SpamapS> Wordpress has an openid provider plugin.. >:)
<SpamapS> kirkland: I was going to suggest Launchpad.. but how do you know which groups to accept? ;)
<Daviey> anyone who has signed the CoC is a good cookie :)
<kirkland> SpamapS: i figured you'd just add your LP id as part of the config
<kirkland> SpamapS: which could be a group, maybe
<SpamapS> Daviey: hm that reminds me I should add that as a requirement for contributing to principia.
<Daviey> oh aye
<kim0> I'm collecting a list of weekly happenings in the Ensemble space, preparing for a blog post
<kim0> so far, I've collected these http://paste.ubuntu.com/627516/
<kim0> feel free to add any comments on any points, and add new points (perhaps recent development) that I'm not aware of
<SpamapS> kim0: In addition to point 1, principia-tools has a script, 'promulgate', that assigns a branch as the official branch for a formula.
<kim0> SpamapS: so a branch like lp:~kim0/+junk/drupal could now be marked as official branch for lp:principia/drupal right ?
<kim0> cool
<kim0> niemeyer: hazmat if anyone has any development related points I should add .. let me know .. thanks
<niemeyer> kim0: The debug-hooks fixes have landed
<kim0> got it
<niemeyer> kim0: Btw, "source package branches" is an internal implementation detail rather than something that would make sense to advertise as such
<niemeyer> kim0: I.e. principia has formulas, not source packages
<kim0> yeah, this part I found confusing to understand
<kim0> so I'll just say that formulas can now be referenced like lp:principia/mediawiki
<niemeyer> kim0: It's an internal Launchpad detail.. we're reusing existing infrastructure because it's similar to what we need
<niemeyer> kim0: The key point there is that principia is moving forward, and we now have means for tagging officially recommended branches
<kim0> awesome, I'll note that as well
<kim0> daker: o/
<daker> kim0: sorry i need to leave the office :/
<kim0> daker: oh still at the office .. okie ..catch you at home :)
<daker> :)
<SpamapS> kim0: hopefully nobody will link +junk branches though. :)
<kim0> haha :D
<kim0> yeah bad example 
<niemeyer> +1 :-)
<koolhead17> hi all
<koolhead17> hello kim0 niemeyer 
<niemeyer> Hey koolhead17 
<kim0> koolhead17: hey hey
<koolhead17> Daviey: hazmat and all :)
<kim0> koolhead17: how's the ec2 playing going
<koolhead17> kim0: having fun time :)
<koolhead17> thanks 2 you :)
<kim0> awesome
<koolhead17> i make sure to run ensemble shutdown :D
<kim0> hehe :)
<koolhead17> niemeyer: kim0 i have a question
<kim0> shoot
<koolhead17> the package which is not available via apt-get
<koolhead17> can i write formula for that?
<kim0> SpamapS: would kick you though :)
<hazmat> koolhead17, yes.. you can download and install from anywhere you want in the 'install' hook, however
<kim0> I think the discussion for that is ongoing
 * koolhead17 looks at SpamapS :)
<koolhead17> hazmat: yeah that`s what i wanted the permission for :P
<hazmat> such formulas probably won't be accepted for an official release of principia.. but they can be installed easily from the command line with a different namespace prefix
<kim0> my understanding is, ofc you can, but it won't be a blessed formula
<koolhead17> kim0:  blessed formula :D
<hazmat> ie ensemble deploy koolhead17:wordpress 
<koolhead17> hazmat: yes i saw and checked that via EC2
<jimbaker> bcsaller, hazmat - were you able to see if standardize-log-testing branch was helpful for looping problems?
<bcsaller> jimbaker: I haven't merged it, you found an issue with the hook context?
<jimbaker> bcsaller, yes, it was using processExited instead of processEnded
<jimbaker> i think the save_logging etc stuff was masking this
<jimbaker> bcsaller, so it may be helpful for your work, based on what hazmat was saying earlier today
<bcsaller> I'll take a look, thanks
<niemeyer> kim0: That's right.. it's fine to do it, it just won't be part of the core formulas
<kim0> although if the only concern is getting security updates, I'm not sure why we can't just request the formula author to add a suitable update mechanism
<kim0> like for drupal/drush "drush upc" would update drupal-core and all modules .. many modules are not even packaged
<SpamapS> hazmat: should we keep pushing txaws trunk into the ensemble PPA, or backport the oneiric version?
<hazmat> SpamapS, hmmm.. i think we're okay with trunk for the foreseeable future ... if we do make a fix to txaws, we want the ppa to capture that
<hazmat> and most of what's happened there is just bug fixing
<SpamapS> hazmat: at some point we need to make a stable PPA
<SpamapS> hazmat: once we get past beta1 of oneiric maybe..
<hazmat> SpamapS, definitely
<SpamapS> hazmat: So that people can get the 11.10 version of ensemble on natty/maverick/lucid
<niemeyer> kim0: In that case, it won't be an official formula.. we can't enforce how the user will handle security updates
<niemeyer> kim0: We can only offer the mechanisms
<kim0> but we're already enforcing, we're saying if you dont use apt-get you wont be accepted in official
<SpamapS> kim0: practically, drush upc is meant to do the same thing. But we don't want to take responsibility for that.
<kim0> the formula author takes responsibility
<SpamapS> Hence it not being "official"
<kim0> We should only verify that the update mechanism works
<SpamapS> Official means "we take responsibility for it working a certain way"
<SpamapS> We don't know their security update process, we don't know how long they'll support a version.. also what if they break compatibility with other software in a version that also fixes security? We don't do that.
<SpamapS> kim0: It will be available, but like the thread was saying, it won't be quite as automatic to get it.
<kim0> would users be able to query non official repos for all drupal formulas ?
<kim0> perhaps we should allow user-rating and commenting, a la android market
<kim0> if user formulas (non official) are not searchable/discoverable .. that pretty much kills them though
<SpamapS> No they'll show up in searches.. with a namespace that identifies them clearly as "not official"
<SpamapS> I like the idea of "contrib" .. "These are useful formulas but not part of Ubuntu"
<SpamapS> kim0: I actually think these formulas are the killer app of ensemble.. and making the distinction isn't going to kill them or harm them in any way. It just signals to users what risk is involved.
<niemeyer> kim0: We're not enforcing, in the same way we don't enforce what people put in PPAs
<SpamapS> True, we're still speculating on what we want 'official' to mean.
<niemeyer> kim0: There's a significant difference between "you can't create a package" and "your package is not part of main"
<niemeyer> kim0: Most people don't care that their packages are not part of main
<kim0> SpamapS: cool, as long as they show up in searches .. that's lovely :)
<kim0> yeah
<daker> back
<daker> niemeyer: license question
<niemeyer> daker: Sure
<daker> my formula is based on the wp one, what should i do ?
<SpamapS> daker: merge the latest copyright file from the wordpress formula, and then add your own copyright to it where appropriate.
<SpamapS> daker: http://bazaar.launchpad.net/~ensemble-composers/principia/oneiric/wordpress/trunk/view/head:/copyright
<daker> SpamapS: can you give an example on how my copyright should be ?
<SpamapS> daker: on the next line after Copyright: Copyright 2011, Canonical Ltd., All Rights Reserved.
<SpamapS> actually
<SpamapS> no, add a Files: section.. 
<SpamapS> Files: hooks/db-relation-changed
<SpamapS> Copyright: 2011, Canonical Ltd., All Rights Reserved
<SpamapS>   2011, YOUR NAME <your@email.com>
<SpamapS> License: GPL-3
<SpamapS> That should do it
<daker> ok
<daker> SpamapS: thanks
<SpamapS> daker: if you totally rewrote hooks/install , you can list it there too
<daker> SpamapS: is it good http://paste.ubuntu.com/627597/ ?
<SpamapS> daker: you do not need to create a Files section for anything you did not change. Just let the 'File: *' handle that
<SpamapS> daker: each Files: section needs a License: portion, but does not have to repeat the text.
<daker> headache :/
<daker> SpamapS: good http://paste.ubuntu.com/627606/ ?
<SpamapS> daker: as I said, you don't have to repeat the text, just the License: GPL-3 part
<SpamapS> daker: the format specification is the first link in that file, it might be helpful
<SpamapS> daker: http://paste.ubuntu.com/627613/
<SpamapS> daker: much simpler, see?
<daker> ah ok
<SpamapS> err, I repeated hooks/install oops ;)
<daker> SpamapS: one last time http://paste.ubuntu.com/627614/
<SpamapS> daker: you are missing the original Files: * section
<SpamapS> daker: which you shouldn't be altering
<daker> SpamapS: it will be copyrighted to ? canonical ?
<daker> or me ?
<SpamapS> daker: You have copyrights on anything you changed by more than a few lines.
<SpamapS> daker: so metadata.yaml you also probably deserve copyright on.
<SpamapS> daker: honestly you can just dump the wordpress copyright file in there, without your copyrights.. I'm more concerned with the license than the copyrights.
<daker> SpamapS: just one more time pls http://paste.ubuntu.com/627620/
<jimbaker> hazmat, i don't think this got through: i've been experimenting with client.connected for watch_expose_flag. this is of course the common pattern for watches
<jimbaker> hazmat, i was trying to determine why my provisioning agent tests would fail after approximately 200 iterations or so. i was under the impression that a single poke i was doing at teardown ensured watches would always run
<hazmat> jimbaker, a poke is just a roundtrip communication, the watch would have to have already fired for a poke to be sync point
<jimbaker> hazmat, right. so i think there's a very small chance that a new watch is getting setup in the  background by another watch as tests terminate
<jimbaker> hazmat, having the watch cb depend on some app state definitely is working, as you suggested
<jimbaker> hazmat, but maybe i can try one more tack here - have the test on app state before each watch setup too
<_mup_> ensemble/expose-provision-service-hierarchy r294 committed by jim.baker@canonical.com
<_mup_> Removed unused code, comments
<jimbaker> hazmat, no, that doesn't work
 * niemeyer breaks
<jimbaker> hazmat, maybe what's going on here is the following: test teardown occurs; as part of that, the zk tree is deleted; but this triggers a watch on exposed flag (because it has been exposed, and in a separate node than the topology)
<hazmat> jimbaker, the tests are supposed to close the client, and open  a new client to tear down the tree
<jimbaker> then the watch runs, but the client is already disconnected
<hazmat> closing the first client shuts down extant watches
<jimbaker> hazmat, ok, that theory doesn't work then :)
<hazmat> jimbaker, you might want to verify your tests are inheriting from the state test base which has this behavior
<jimbaker> hazmat, yes it is: class AgentTestBase(StateTestBase) ...
<jimbaker> hazmat, so i wonder if this argues for reinstating the test on client.connected in the watch... putting it there, i looped > 1000 times, at which point i control-c
<jimbaker> i'm controlling the callback on the agent (a new boolean agent._running), but the watch itself doesn't have access before it does exists_and_watch
<hazmat> jimbaker, that
<hazmat> er.. that's fine by me
<jimbaker> hazmat, yes, i know. i think i have spent easily a number of days looking at this question raised by niemeyer  on two lines of code 
<jimbaker> so that's all good, i think we have a good answer for why it's necessary, and the best pattern to write code
<jimbaker> hazmat, 1) have the watch be guarded by client.connected upon entry, as is our common practice; 2) have the callback be guarded on something at the app level
<jimbaker> hazmat, anyway, i'm going to push up that change to expose-watch-exposed-flag
<_mup_> ensemble/expose-watch-exposed-flag r249 committed by jim.baker@canonical.com
<_mup_> Guard entry into the watch with client.connected for watch_exposed_flag (again)
<hazmat> SpamapS, kirkland i took a look at the force upgrade.. it looked pretty easy initially but it ends up violating some assumptions we have that a formula id (namespace:name-revision) is unique in the deployment
<hazmat> jimbaker, the standardize-log-testing didn't resolve the issues on bcsaller's branch.. but it looks good to me
<jimbaker> hazmat, cool about my branch, just too bad it wasn't as powerful as i had hoped :)
<_mup_> ensemble/expose-provision-service-hierarchy r295 committed by jim.baker@canonical.com
<_mup_> Merged upstream expose-watch-exposed-flag
<jimbaker> hazmat, feel free to grab a review of standardize-log-testing if that works for you - want to get it into trunk and need that 2nd review
<hazmat> jimbaker, sounds good
<SpamapS> hazmat: so maybe it would be even easier to just use the hash? ;-)
<hazmat> SpamapS, indeed, it might at that
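The hash idea, sketched: identify a formula by a digest of its content, so any local edit yields a new identity automatically. The directory layout and hashing scheme here are purely illustrative, not Ensemble's actual format:

```shell
# Derive a content-based formula id: hash every file, then hash the
# sorted list of per-file hashes into a single digest.
FORMULA=$(mktemp -d)                      # illustrative formula tree
mkdir "$FORMULA/hooks"
echo "name: wordpress" > "$FORMULA/metadata.yaml"
echo "#!/bin/sh"       > "$FORMULA/hooks/install"
HASH=$(cd "$FORMULA" && find . -type f | sort |
       xargs md5sum | md5sum | cut -d' ' -f1)
echo "wordpress-$HASH"
```

Any change to any file changes the final digest, which is the "principle of least surprise" behavior SpamapS asks for.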
<SpamapS> hazmat: anyway, thanks for looking into it
<daker> SpamapS: can you give me your opinion http://paste.ubuntu.com/627620/ pls ?
<SpamapS> daker: re your last one.. read it logically (not just based on what I've asked you to do). You have removed Canonical's copyrights and added your own. That's not appropriate.
<SpamapS> daker: the point of the file is to document what the license and copyright status of each file is. The 'Files: *' means *all files not otherwise specified*
<daker> i am really confused :/
<SpamapS> You should ask a question then, I am happy to answer.
<daker> you said that i have rights on anything i have changed, and i didn't remove Canonical's copyrights
<SpamapS> Files: *
<SpamapS> Copyright: 2011, Adnane Belmadiaf <daker@ubuntu.com>
<SpamapS> License: GPL-3
<SpamapS> No mention of Canonical there
<SpamapS> but canonical retains all of its copyrights.
<SpamapS> Unless you completely replaced most of the file.
<SpamapS> At which point you would have an explicit entry that just lists you.
<daker> SpamapS: so the Files: * section is copyrighted to canonical ?
<_mup_> ensemble/expose-provision-service-hierarchy r296 committed by jim.baker@canonical.com
<_mup_> Doc string
<SpamapS> daker: yes, then you add a "Files" section for each file that has something other than that for its Copyright/License.
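Putting SpamapS's walkthrough together, a minimal copyright file of the kind under discussion might look like this (names and file paths illustrative):

```
Files: *
Copyright: 2011, Canonical Ltd.
License: GPL-3

Files: hooks/db-relation-changed
Copyright: 2011, Canonical Ltd.
  2011, YOUR NAME <your@email.com>
License: GPL-3
```

The catch-all `Files: *` section keeps the original copyright holder for everything not otherwise listed; per-file sections below it add or override only where files were actually changed.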
<hazmat> jimbaker, one comment on the log stuff merge, the sleep looks a little out of place, else its pretty nice (and mp approved)
<Daviey> Can i confirm that there is no intention to get ensemble in main for release?
<SpamapS> Daviey: not for 11.10, but I do intend to MIR some of its dependencies.
<_mup_> ensemble/standardize-log-testing r259 committed by jim.baker@canonical.com
<_mup_> Removed spurious sleep (shouldn't have been part of push)
<Daviey> SpamapS: Do you have a list of those?
<Daviey> What is the reasoning for MIR'ing them, if ensemble will remain in universe for this release?
<SpamapS> Daviey: to spread out the load on the MIR team and give us time to get ensemble ready.
<SpamapS> Daviey: essentially, we know we want ensemble in main for 12.04 .. but we also know it probably won't be ready.
<_mup_> ensemble/standardize-log-testing r260 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<Daviey> SpamapS: Agreed.  'Spreading the burden' doesn't make much sense IMO.  We already have to raise enough MIR's for our other commitments, which hasn't been comparable since karmic.
<jimbaker> hazmat, i fixed up that sleep per the commit message, sorry about that!
<jimbaker> i generally try to mark such debug statements with an XXX so they don't sneak in... but forgot this time
<SpamapS> Daviey: Hrm, I figure raise it now because we know we'll need it then.
<hazmat> jimbaker, sounds good
<jimbaker> seeing trunk fail again: http://paste.ubuntu.com/627672/ (just running ./test ensemble.control)
<niemeyer> jimbaker: Are you sure that's up to date?
<niemeyer> jimbaker: Ah
<niemeyer> jimbaker: find -name '*.pyc' -exec rm {} \;
#ubuntu-ensemble 2011-06-16
<jimbaker> niemeyer, that fixes one test, thanks for the tip
<jimbaker> still seeing a problem on ensemble.control.tests.test_upgrade_formula.ControlFormulaUpgradeTest.test_upgrade_formula_service_using_latest
<jimbaker> (was that fixed by the trivial earlier?)
<SpamapS> kirkland: I'm testing deploying all of principia w/o ifconfig
<jimbaker> looks like line 18 should have been fixed in that trivial earlier, http://paste.ubuntu.com/627679/
<jimbaker> hazmat, ^^^
<kirkland> SpamapS: cool
<kirkland> SpamapS: point me to the diff?
<SpamapS> kirkland: yeah when its ready. Have to fix some stuff in memcached
<SpamapS> kirkland: and this is bringing to light the fact that the munin formula needs to not just copy/paste everything. ;)
<SpamapS> well.. the munin formula needs to be a machine formula.. but thats another matter entirely. :-P
<hazmat> jimbaker, yeah.. i think that
<hazmat> is a fallout from a trivial fix i did earlier today
<hazmat> jimbaker, it should be a one liner to fix it with the merge if you're game
<jimbaker> hazmat, yeah, just didn't have this one additional change of adding the % id
<jimbaker> hazmat, absolutely i can make that change, i will work on it in next 20 min or so
<_mup_> ensemble/expose-provision-service-hierarchy r297 committed by jim.baker@canonical.com
<_mup_> Removed debugging
<SpamapS> kim0: hey I made a post tagged ubuntu-cloud (and cloud) and its not showing up on cloud.ubuntu.com
<jimbaker`> bcsaller, hazmat, niemeyer - i need a trivial on lp:~jimbaker/ensemble/trivial-test-upgrade-formula; the diff is here: http://paste.ubuntu.com/627698/
<jimbaker`> this fixes the broken test in trunk
<niemeyer> jimbaker`: hahaha
<niemeyer> jimbaker`: That assertion is great :-)
<niemeyer> jimbaker`: +1!
<jimbaker`> niemeyer, yes, it's kind of funny seeing that %r in the test, it explains a number of rather trivial things ;)
<bcsaller> looks good 
<_mup_> ensemble/trunk r257 committed by jim.baker@canonical.com
<_mup_> [trivial] Fixes broken test introduced by trivial in r254 [r=bcsaller,niemeyer]
<_mup_> ensemble/standardize-log-testing r261 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<_mup_> ensemble/trunk r258 committed by jim.baker@canonical.com
<_mup_> merge standardize-log-testing [r=niemeyer,hazmat][f=795233]
<_mup_> Removed usage and definition of save_logging, reset_logging, and
<_mup_> assertInDefaultLog from codebase in favor of standard log testing.
<SpamapS> kirkland: re working w/o IP.. all of the formulas had at least one commit, some two.. the latest formulas from principia have everything except munin removed from using ifconfig. Works *perfectly*
<SpamapS> bbl
<_mup_> ensemble/expose-provision-service-hierarchy r298 committed by jim.baker@canonical.com
<_mup_> Removed unnecessary import from test_provision
<_mup_> ensemble/set-transitions r241 committed by bcsaller@gmail.com
<_mup_> merge trunk
<_mup_> Bug #798115 was filed: Ensemble is too slow to startup <Ensemble:New> < https://launchpad.net/bugs/798115 >
<niemeyer> Mornings!
<kim0> morning :)
<niemeyer> kim0: Hey, how're things going there?
<kim0> hey .. going fine .. how about yourself
<niemeyer> Waking up from a late night
<kim0> hope it late fun .. :)
<kim0> it was*
<kim0> I just discovered why status takes 5 seconds .. it isn't ssh'ing around the globe .. ensemble itself needs time
<m_3> morning gang
<kim0> I filed a bug
<kim0> m_3: morning :)
<niemeyer> kim0: Time for what?
<kim0> start up
<kim0> check Bug #798115
<_mup_> Bug #798115: Ensemble is too slow to startup <Ensemble:New> < https://launchpad.net/bugs/798115 >
<niemeyer> kim0: This is not startup.. this is status working!?
<niemeyer> kim0: Ensemble is certainly doing more than establishing an ssh connection.. :-)
<kim0> hmm
<kim0> would reading the remote data require that much time
<niemeyer> kim0: Reaching it several times to communicate over zookeeper with a high-latency connection, yes
<kim0> a ha
<kim0> perhaps the protocol is chatty :)
<kim0> if there's any optimizations you guys can do .. that's be great
<kim0> that'd*
<niemeyer> kim0: Agreed :)
<m_3> I've been feeling that one too... we've been on mobile broadband the past few days.  Cable guy due at the new apartment this morning!
<m_3> es has even been timing out on me
<niemeyer> status is particularly sensitive, as it obtains information about everything
<kim0> my RTT is 100ms .. it must need 50 round trips :)
<kim0> and only the bootstrap node was up
<niemeyer> kim0: Assuming non-existing of the world, yes
<niemeyer> non-existence
<kim0> how so ?
<niemeyer> kim0: It takes 100ms to send an empty packet and do nothing with it
<kim0> heh yeah, all I'm saying is it's perhaps worth a good look into where that time is spent, and if there's some quick wins
<niemeyer> kim0: Making status fast isn't a priority right now.. if we don't make Ensemble more useful, no one will be interested in knowing the status.
<niemeyer> kim0: So let's say this is a problem we're keen on having, for the moment ;-)
<kim0> hehe :) I suppose status is not the only thing that's slow .. but yeah, it sure is low priority
<niemeyer> kim0: It's likely the slowest
<niemeyer> kim0: By a significant margin
<m_3> user perception thing... you expect bootstrap and deploy to be slow... and status to be fast
<kim0> yeah, my uneducated guess was about the opposite :)
<m_3> but it's the opposite
<niemeyer> kim0: After removing the constant factors, of course (ssh, etc)
<kim0> m_3: exactly
<m_3> I agree, it's not high priority right now, but good to mention
<niemeyer> m_3: Well.. it's _really_ perception, because they _are_ the slowest
<niemeyer> m_3: The difference is that the command line isn't hooked up
<m_3> yup
<niemeyer> Which means there isn't a guy bashing the enter key and screaming "FASTER! FASTER!" on the other side.
<kim0> niemeyer: would having an interactive ensemble tool help
<niemeyer> kim0: A bit.. not much
<niemeyer> kim0: Reducing the number of roundtrips is the real win
<kim0> yeah
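kim0's back-of-envelope above works out exactly: a ~100ms RTT times a hypothetical 50 sequential round trips is the observed ~5 seconds, from latency alone, before any server-side work. A trivial sketch (both numbers are the ones guessed in the conversation, not measured):

```python
# Back-of-envelope latency math for a chatty protocol over a
# high-latency link (numbers from the discussion above, not measured).
rtt_seconds = 0.100   # ~100ms round-trip time to the region
round_trips = 50      # assumed number of sequential zookeeper reads

total_seconds = rtt_seconds * round_trips
print(total_seconds)  # 5.0
```

This is why reducing round trips (batching or pipelining reads) wins far more than any single-request optimization.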
<m_3> it certainly seems like it'd be cool to have an 'ensh' interactive shell
<m_3> but I don't know the real utility of that other than blocking commands to give the user feedback and maybe sense of control
<kim0> maybe it could have a replica of the zk environment ? would that help a lot overcome high latency networks ?
<m_3> not important
 * kim0 never used zk, and is getting ready for random things to be thrown at him :)
<m_3> zk docs even call them zk "ensembles"
<kim0> hehe
<niemeyer> kim0: Brilliant stuff in the tutorial
<kim0> yeah more exposure to docs basically
<niemeyer> jimbaker`: ping
<jimbaker`> niemeyer, hi
<niemeyer> jimbaker`: Hey man
<niemeyer> jimbaker`: What's up there?
<jimbaker`> niemeyer, looks like it's another nice day here in colorado
<niemeyer> jimbaker`: Nice :-)
<niemeyer> jimbaker`: I'm reviewing the expose branches, and would just like to bring up an idea about function naming
<jimbaker`> niemeyer, sounds good
<jimbaker`> definitely would like to hear any ideas on such things, the provision code had to come up with a number of them, and they probably can be improved
<niemeyer> jimbaker`: If I tell you something like.. hmmmm.. "Can you please check the schedule?", what kind of assumption can you make?
<jimbaker`> we are probably dealing with issues around time, or more generally events
<niemeyer> jimbaker`: Right, but it's pretty hard to figure what I'm trying to figure, right?
<niemeyer> jimbaker`: Same thing as.. "Hey, can you please go down and check the street?"
<niemeyer> jimbaker`: This an "empty" request, if you see what I mean
<niemeyer> It's an
<jimbaker`> niemeyer, correct, the word "check" is one of those words that feels like a crutch word
<jimbaker`> extremely vague
<niemeyer> jimbaker`: Yeah..
<niemeyer> jimbaker`: It'd be much easier to say, "Is there traffic in the street?"
<jimbaker`> so ideally it would not be used. so the question is, what replaces it to be more precise
<niemeyer> jimbaker`: Or, "Ensure our slot is booked in the schedule"
<niemeyer> jimbaker`: This is detailing the intended _outcome_, rather than how one will do it
<niemeyer> jimbaker`: check_firewall_settings, watch_service_changes, ... they have that feeling
<niemeyer> jimbaker`: open_close_ports() is a much better name than check_firewall_settings, as an example
<jimbaker`> niemeyer, sounds like a great suggestion for that function name
<jimbaker`> watch_service_changes of course parallels the existing watch_machine_changes
<niemeyer> jimbaker`: They are both bad as well
<niemeyer> jimbaker`: For the same reasons
<jimbaker`> so there was some convention already existing, but as you mention, bad
<niemeyer> jimbaker`: and worse, they look like a request to watch..
<niemeyer> jimbaker`: Which isn't the case
<niemeyer> jimbaker`: Agreed.. consistency is better than nothing
<jimbaker`> niemeyer, agreed, going outside of the narrow scope of this particular file, the larger convention is watch means to actually *watch*
<niemeyer> jimbaker`: But we can as well change both, consistently :)
<niemeyer> jimbaker`: Right, exactly
<niemeyer> jimbaker`: and for the same reasons above, this is vague
<niemeyer> jimbaker`: Watch for what?  Will have to read the doc/code to know
<jimbaker`> niemeyer, sure, i can definitely make that change
<jimbaker`> in both names
<niemeyer> jimbaker`: E.g. watch_expose_flag is a good name for watch_service_changes
<jimbaker`> the first is to at least use our cb_ prefix to indicate callbacks
<niemeyer> jimbaker`: I'm ambivalent about it.. they are generally good hints when we can't figure something better that actually describes the intention
<niemeyer> jimbaker`: Otherwise, twisted is all about callbacks.. we'll go crazy :)
<niemeyer> jimbaker`: Note that we generally use cb_<watcher function name>, if I remember correctly
<jimbaker`> niemeyer, certainly twisted is always about the callback, fortunately inlineCallbacks can make it more linear
<jimbaker`> but that's another line of thought
<niemeyer> jimbaker`: Which means we have the same problem as above.. the function name has no hints about its intention
<niemeyer> jimbaker`: Perhaps a good way to put the distinction is that one is "this is why someone is calling me" and the other is "this is what I'm going to do"
<niemeyer> jimbaker`: The latter is generally more useful when reading the code
<jimbaker`> niemeyer, indeed
<jimbaker`> my 10 year old daughter just got into this in her robotics camp last week. i looked at the function names she was writing, and they were very explicit on what the function was going to do
<jimbaker`> (this was in C, she told me on the 2nd day she wanted to be a programmer. i told her that in the future, all professionals will be programming in some way ;) )
<niemeyer> jimbaker`: I'm not entirely sure.. I had that feeling in the early 90s, but nowadays it feels like it's getting harder to get people interested in the details of the problem
<niemeyer> Just too easy to be a user, I guess
<jimbaker`> i consider building a spreadsheet model to be programming, or similar tasks. it can be done visually or with code, or both
<jimbaker`> but again the aspect of functions that describe concrete functionality, and that we can effectively reason about them because we have good names, that was very clear in my daughter's code and ideally in any code we all write
<niemeyer> jimbaker`: Cool
<niemeyer> jimbaker`: Re. [3], can you expand a bit on why you think it's not doable?
<jimbaker`> niemeyer, this is the collapsing together of both dictionaries
<niemeyer> jimbaker`: Yeah
<jimbaker`> so certainly doable, the question was whether it would make things harder to read
<niemeyer> jimbaker`: Agreed.. I had the impression it'd make them easier
<niemeyer> jimbaker`: So I'm interested in your perspective
<jimbaker`> i need to track two distinct things here. whether or not a watched service is exposed. if it is exposed, we then start a watch on its service units
<jimbaker`> and then what watches have been established for each service unit
<jimbaker`> self._watched_services tracks the first; self._watched_service_units the second
<niemeyer> jimbaker`: Yes, as I understand it, you need to track: a) Whether a service is exposed or not; b) What units in this service are being watched
<niemeyer> jimbaker`: Is that correct?
<jimbaker`> niemeyer, not quite
<niemeyer> jimbaker`: Ok.. that may be the detail I'm missing then.
<jimbaker`> niemeyer, because i need to track two watches for a service, because that's the api i'm working with
<jimbaker`> 1) watching the service's exposed flag; 2) watching its service units
<niemeyer> jimbaker`: Isn't that a and b above?
<jimbaker`> niemeyer, sorry, it was a little ambiguous
<jimbaker`> niemeyer, for each service unit, we need to watch its ports
<jimbaker`> so that's the watch per service unit
<niemeyer> jimbaker`: Ok, so.. it feels like we're talking about the same thing
<niemeyer> jimbaker`: So, objectively..
<niemeyer> jimbaker`: self._watched_services tracks which services are watched.. and it's really a set rather than a dictionary
<niemeyer> jimbaker`: Its values aren't used for anything
<jimbaker`> so _watched_services manages watches at the granularity of one watch per service (watch_exposed_flag, watch_service_units); _watched_service_units tracks at the granularity of a service unit
<niemeyer> jimbaker`: self._watched_service_units is a defaultdict, which means the information of whether a key exists or not is not being used in any good way
<jimbaker`> niemeyer, actually _watched_services value is used to indicate whether the watch_service_units watch has been started
<niemeyer> jimbaker`: Make the latter a real dict, and use presence information in a good way
<niemeyer> jimbaker`: This way you don't need two dictionaries, nor the clean up you have in other functions
<jimbaker`> False - only watch_exposed_flag; True - also started watch_service_units
<niemeyer> jimbaker`: I understand.. can you please point out the place in the code that invalidates the suggested design?
<jimbaker`> niemeyer, i think what you are suggesting is something like the following:
<jimbaker`> the presence of keys in _watched_service_units indicates that it is exposed; the corresponding value may be None (or whatever value is useful) if there is no watch on its service units; when that watch is established, change to a set, which collects all the watches on the corresponding service units' ports
<jimbaker`> niemeyer, i can certainly implement such a design. it just seemed more complicated
<jimbaker`> niemeyer, if you have another design in mind, i don't see it
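The single-dict design jimbaker` just restated can be sketched as follows (not the actual Ensemble code; all names here are hypothetical). The point is that key presence carries the "is this service watched" bit, so there is no second dictionary to keep in sync:

```python
# Sketch of the single-dict bookkeeping described above.
# Invariants:
#   name not in watched      -> service not watched at all
#   watched[name] is None    -> exposed-flag watch only, no unit watch yet
#   watched[name] is a set   -> unit watch started; set holds watched units

watched = {}

def start_exposed_watch(service_name):
    # Key presence alone records that the service is being watched.
    watched.setdefault(service_name, None)

def start_unit_watch(service_name, unit_name):
    # Upgrade the None marker to a set once the service-units watch exists.
    if watched.get(service_name) is None:
        watched[service_name] = set()
    watched[service_name].add(unit_name)

def stop_watch(service_name):
    # One pop cleans up both levels -- no sideways dict to maintain.
    watched.pop(service_name, None)
```

The trade-off discussed below is exactly this: the value becomes three-state (absent / None / set), which is what makes the two-dict version feel simpler to some readers.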
<niemeyer> jimbaker`: Maybe it's more complicated, but I'm trying to understand why..
<niemeyer> jimbaker`: It feels like you have double bookkeeping, e.g.
<jimbaker`> niemeyer, alternatively i can use a multi-level dict or equivalently an object to represent this mapping
<niemeyer> +                self._watched_services[service_name] = False
<niemeyer> +                self._watched_service_units.pop(service_name, None)
<jimbaker`> niemeyer, agreed, that is an implication of the design
<niemeyer> jimbaker`:
<niemeyer> +            self._watched_services.pop(service_name, None)
<niemeyer> +            self._watched_service_units.pop(service_name, None)
<jimbaker`> on the other hand, there is no need to do any conditionals on what the value of _watched_service_units is
<niemeyer> jimbaker`: Yeah, and I'm trying to understand why the complication is necessary.
<niemeyer> jimbaker`: Sure, because you're always doing it in a different dictionary that you maintain sideways :-)
<jimbaker`> so you make that one tradeoff in a couple of places where the parent key (so to speak) is deleted and one's maintaining referential integrity (so to speak). but arguably resulting in simpler code
<niemeyer> jimbaker`: I don't see the tradeoff.. you're effectively discarding presence information in one and using only presence information in the other
<niemeyer> jimbaker`: Either way, never mind
<niemeyer> jimbaker`: I should have just tried out.. it'll be easier
<niemeyer> jimbaker`: You may well be right.. I'm just missing the "This is a better design, because ..." sentence.
<jimbaker`> niemeyer, no worries, just wanted you to know i thought through the implications of your question
<jimbaker`> and looked at the implementation. it seemed more complicated, at least at what i tried, vs the relative simplicity of what's being done now
<jimbaker`> but again at the cost of two dicts
<niemeyer> jimbaker`: If it's simpler, let's keep it.
<niemeyer> jimbaker`: I'm just missing the "This is simpler, because ..." explanation.. but it's just me failing to get it.
<jimbaker`> again, i think the lack of conditionals makes it simpler. instead we trade that for invariants (hence the use of code like self._watched_service_units.pop(service_name, None) - it may be there, it may not, it doesn't matter)
<jimbaker`> my concern was that there was enough logic in this code already, i didn't want to increase its level
<niemeyer> jimbaker`: Yeah, no conditionals is simpler.. why would we need conditionals? (rhetorical question)
<jimbaker`> niemeyer, ;)
<niemeyer> jimbaker`: This is rhetorical in the sense that this is the kind of thing that can make a long conversation about design short..
<niemeyer> jimbaker`: "I need to track the foobar of the boodoom." finishes such conversations in no time.
<jimbaker`> niemeyer, sure, that's absolutely right
<niemeyer> jimbaker`: "Code without conditionals is simpler." gives no hints.
<jimbaker`> also having dictionaries state this is what i track, and my invariant is maintained, that's nice too
<jimbaker`> naively, it would be nice if it were possible to avoid all such extra state; it is possible to introspect for a watch, but that's not really going to work as i understand it
<niemeyer> Man..
<niemeyer> This whole inlineCallback thing is tricky..
<niemeyer> While it brings back the nice feeling of straight code back, it also introduces concurrency issues which are very easy to ignore.
<niemeyer> jimbaker`: This is what I mean
<niemeyer> jimbaker`: http://paste.ubuntu.com/628037/
<niemeyer> jimbaker`: Untested..
<niemeyer> jimbaker`: But theoretically it shouldn't break the tests, if they are not deeply dependent on the implementation
<niemeyer> jimbaker`: There are also a couple of points inlined in the diff worth noting
<niemeyer> jimbaker`: I'll step out for lunch.. please let me know what you think
<niemeyer> jimbaker`: About the approach, not lunch ;-)
<jimbaker`> niemeyer, enjoy your lunch. i certainly have food opinions, but they are to express remotely
<jimbaker`> hard to express
<jimbaker`> niemeyer, i'll take a look at the diff thanks
<jimbaker`> niemeyer, still taking a look at your diff to see why it doesn't work
<hazmat> jimbaker`, i was looking at the zookeeper test setup, and was curious how the contextmanager and generator here works with the test setup... it looks like a nice way for us to do fixtures
<hazmat> jimbaker`, if you have a few minutes to discuss i'd like to do an audio chat
<jimbaker`> hazmat, i'm looking at that code again, one moment
<jimbaker`> hazmat, this is a basic example of a context manager. sure, we can talk now if you'd like. mumble? skype?
<niemeyer> jimbaker`: Thinking over lunch, I understand why you preferred the other approach.. the first dictionary is actually a three-state one
<jimbaker`> niemeyer, correct
<niemeyer> jimbaker`: So _watched_services[name] = False is actually not entirely correct
<niemeyer> jimbaker`: The service _is_ being watched
<jimbaker`> sure, it's not just being watched for its service units. but it does correspond to being exposed or not, hence the boolean
<niemeyer> jimbaker`: Yeah, booleans are lovely.. :-)
<niemeyer> jimbaker`: I'm happy with either approach, but it needs clarification.. I'll see if I can make tests pass with this and compare
<jimbaker`> niemeyer, yes, they say exactly what i mean them to be ;) i hope
<niemeyer> jimbaker`: _watched_services[foo] being False means the service is not watched.
<jimbaker`> niemeyer, yeah, obviously there's still one ref to watched_service_units in your diff, but there's more to it than that
<niemeyer> jimbaker`: No other way to interpret it
<niemeyer> jimbaker`: Yeah, leave that with me
<jimbaker`> niemeyer, so maybe it should mean - _watching_service_units or something like that and _watching_service_unit_ports, although this seems too wordy
<jimbaker`> but again a tweak on the names can make this clearer
<niemeyer> jimbaker`: If the single design doesn't work, we can come up with something like _watched_services[foo] = Exposed/Unexposed
<niemeyer> jimbaker`: Erm.. single dict design
<jimbaker`> niemeyer, that's also a good choice. maybe they can define nonzero too
<jimbaker`> ;)
<niemeyer> jimbaker`: nonzero?
<jimbaker`> niemeyer, i believe it's __nonzero__ to be precise, as called by bool
<jimbaker`> new objects that act just like booleans!
<niemeyer> jimbaker`: Heh
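The Exposed/Unexposed markers floated above would look roughly like this (a hypothetical sketch, not project code; Python 2 spelled the truth-value hook `__nonzero__`, Python 3 renamed it `__bool__`):

```python
# Hypothetical self-describing boolean markers, as discussed above.
class _Flag:
    def __init__(self, name, truth):
        self.name = name
        self.truth = truth

    def __bool__(self):          # __nonzero__ in Python 2, called by bool()
        return self.truth

    def __repr__(self):
        return self.name

Exposed = _Flag("Exposed", True)
Unexposed = _Flag("Unexposed", False)

# Reads like a boolean in conditionals, prints like documentation:
watched_services = {"wordpress": Exposed, "mysql": Unexposed}
print([name for name, flag in watched_services.items() if flag])
# ['wordpress']
```

Compared with a bare `True`/`False`, the value now says what it means when it shows up in a repr or a log line.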
<jimbaker`> biab, i'm going to get some coffee
<niemeyer> Enjoy
<negronjl> niemeyer:  where is the ensemble documentation?  I used to have it but, can't find it now.
<niemeyer> negronjl: Should be in https://ensemble.ubuntu.com/docs
<negronjl> niemeyer:  perfect!  thanks!
<niemeyer> negronjl: np!
<negronjl> niemeyer: does ensemble take care of the security groups in AWS?  ie:  if I have a service that requires port 8112 open, would it take care of opening that port and closing whatever is left ?
<niemeyer> negronjl: We're working on that _right now_, literally :-)
<niemeyer> negronjl: Both me and jimbaker` are hacking on it as we speak
<negronjl> niemeyer:  perfect.  thx
<niemeyer> negronjl: The idea will work like this:
<niemeyer> negronjl: The formula declares what ports it needs open
<niemeyer> negronjl: Via a call to, say, open-port 80/tcp 
<niemeyer> negronjl: That doesn't actually _open_ the port directly, though
<niemeyer> negronjl: The admin is in control of which services are exposed
<niemeyer> negronjl: So you can do something like
<niemeyer> negronjl: ensemble expose myblog
<niemeyer> negronjl: This command will make Ensemble check which ports the formula declared as open, and will open the firewall for them, specifically
<negronjl> niemeyer:  that's great 
<niemeyer> negronjl: So the ports that are punched through the firewall are those that both: a) Have been declared as open-port by the formula; and b) Have been exposed
<niemeyer> negronjl: What we have in trunk right now is "everything is open party"
<negronjl> niemeyer:  I noticed. hence the question :P
<niemeyer> negronjl: One jimbaker` finished the last few bits, we'll have the full feature
<negronjl> niemeyer:  perfect.
<niemeyer> negronjl: I'm just assisting jimbaker`
<niemeyer> negronjl: Once jimbaker` finishes the last few bits, we'll have the full feature
<niemeyer> (that was the real sentence :-)
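Pieced together from the commands niemeyer quotes above, the two-step flow looks roughly like this (a sketch, not a verbatim session; `myblog` is niemeyer's example service name):

```
# Inside a formula hook -- declares which ports the service needs,
# but does not open anything by itself:
open-port 80/tcp

# Separately, the admin decides what is reachable:
$ ensemble expose myblog

# A port is punched through the firewall only when BOTH hold:
#   a) the formula declared it via open-port, and
#   b) the admin exposed the service.
```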
<kim0> I wonder if "open port" means open to the general Internet .. is it possible to open to some other service only ? like mysql being accessible from mediawiki only
<jimbaker`> kim0, that's certainly a fair question - for now open-port doesn't mean that, it's the exposed setting that means open to the internet (and interprets opened ports accordingly)
<jimbaker`> kim0, hope that makes sense
 * kim0 scratches head
<kim0> so it's not possible today to open to a certain sg right ?
<jimbaker`> niemeyer just went over how these two pieces fit together with negronjl, fwiw
<jimbaker`> kim0, there is no current open design work to do what you ask
<kim0> yeah got it
<jimbaker`> kim0, but i can imagine that we could leverage open-port as you describe to get at different security zones along the lines of what SGs in general can do
<niemeyer> kim0: In a way, open-port actually means exactly that.. it tags which port _should_ be open to whoever is consuming that service
<niemeyer> kim0: As jimbaker` points out, we're not working on inter-service-unit handling of that now, though
<kim0> yeah got it .. thanks
<niemeyer> Yeah, what jimbaker` says
<jimbaker`> kim0, we should certainly capture this in a bug
<_mup_> txzookeeper/session-event-handling r44 committed by kapil.foss@gmail.com
<_mup_> managed zk cluster api
<negronjl> ok guys.  so I have hadoop-master set up in ensemble but, I have a question before I move forward with the slave nodes.  Any way of changing the instance type to m1.large ( or anything else for that matter )?
<hazmat> negronjl, not at the moment
<negronjl> hazmat:  thx.  no worries.  I'll deal with it.
<niemeyer> hazmat: Hmm
<niemeyer> hazmat: What about default-instance-type?
<hazmat> we've talked before about using a separate ebs volume per unit instead of an ebs instance, which would allow for some sort of expansion.. but that's really post lxc isolation 
<hazmat> negronjl, as  niemeyer points out you can switch the default instance type for the entire environment if you wish, but not per unit/machine atm
<negronjl> hazmat:  where would I change that ?
<hazmat> or service for that matter..
<hazmat> negronjl, in ~/.ensemble/environments.yaml
<negronjl> hazmat:  perfect!  thx.
<hazmat> negronjl, https://ensemble.ubuntu.com/docs/provider-configuration-ec2.html
<negronjl> hazmat: perfect! thx.
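The file hazmat points at would then contain something along these lines (a sketch; the surrounding keys are illustrative, see the linked provider docs for the real schema):

```yaml
# ~/.ensemble/environments.yaml
environments:
  sample:
    type: ec2
    default-instance-type: m1.large   # applies environment-wide, not per unit/service
```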
<niemeyer> hazmat++ for actually documenting it :)
<hazmat> niemeyer, it was getting hard to remember ;-)
<niemeyer> Yo compiz!  Gimme my cursor back!
<niemeyer> No game :(
<kim0> I'll give unity another chance by 11.10 :)
<hazmat> kim0, when it comes to compiz it's underlying unity and classic.. so it's hard to escape from its bugs..
<kim0> oh yeah .. I do run gnome without unity .. rock solid
<kim0> without compiz I mean
<niemeyer> jimbaker`: http://paste.ubuntu.com/628096/
<niemeyer> jimbaker`: All tests pass
<hazmat> kim0, how do you set that up.. if i login under classic, i still have compiz as the window manager afaics
<niemeyer> jimbaker`: Sorry, let me add a proper comment on the dict
<hazmat> i'm still averaging a reboot about every 6hrs.. or some sort of re-login after the xsession crashes
<jimbaker`> niemeyer, ok, i like that much better
<kim0> hmm .. I am indeed running metacity .. no idea how to configure it though 
<jimbaker`> niemeyer, it reads better than my version, so thanks
<kim0> hazmat: perhaps gconf-editor -> desktop -> gnome -> applications -> window_manager and set to metacity
<niemeyer> jimbaker`: No problem
<niemeyer> jimbaker`: Here is the version with the comment: http://paste.ubuntu.com/628102/
<hazmat> kim0, awesome thanks.. i'll try that out
<niemeyer> jimbaker`: I was glad to dive in as well.. the overall logic feels good, even though I'm still a bit concerned with concurrency issues
<jimbaker`> niemeyer, comment is also good, and i like how the dict is maintained
<jimbaker`> niemeyer, i understand your concern, but after being in that code for a while, i think the concurrency aspects are solid
<niemeyer> jimbaker`: I'm not entirely sure.. I'm concerned in general, not just with that one piece of code
<niemeyer> jimbaker`: We haven't been considering locks, etc, very often.  Twisted makes us lazy in that regard, but every yield in a function is putting control away from the function, and the world can change at that point.
<jimbaker`> niemeyer, yeah, that's definitely what i think whenever i have a yield
<niemeyer> jimbaker`: As an example, in that same diff, look at the original cb_check_service_units function
<niemeyer> jimbaker`: You were making assumptions about the state of the world within it
<niemeyer> jimbaker`: because you've _tested_ it
<niemeyer> jimbaker`: But every time you "yield", the world can change
<jimbaker`> niemeyer, the world may change, but not completely arbitrarily
<niemeyer> jimbaker`: Concrete example:
<niemeyer> -                if unit_name not in self._watched_service_units[service_name]:
<niemeyer> jimbaker`: What guarantees that the service wasn't unexposed on the yield within the for loop?
<niemeyer> jimbaker`: Not arbitrary, but hard to keep the world state in mind..
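The hazard niemeyer describes can be shown in a few self-contained lines, using asyncio here rather than Twisted's `inlineCallbacks` (the principle is identical: every `await`/`yield` hands control away, so a check made before it can be stale after it). The service name and state here are hypothetical:

```python
# Illustration of the yield hazard discussed above: state checked
# before a suspension point no longer holds after it.
import asyncio

async def demo():
    watched = {"wordpress"}   # hypothetical watched-services state
    results = []

    async def check_then_act():
        if "wordpress" in watched:     # the check...
            await asyncio.sleep(0)     # ...a yield: the world changes here
            results.append("wordpress" in watched)

    async def unexpose():
        # Runs while check_then_act is suspended at its await.
        watched.discard("wordpress")

    await asyncio.gather(check_then_act(), unexpose())
    return results

print(asyncio.run(demo()))  # [False]: the earlier membership test is stale
```

The fix is the one applied in the diff: re-validate (or restructure so re-validation is unnecessary) after every suspension point, rather than trusting pre-yield state.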
<negronjl> any way to ssh into the instances to get a better view of what's going on ?
<niemeyer> negronjl: ensemble ssh <machine number>
<niemeyer> negronjl: Or,
<jimbaker`> niemeyer, thanks that is in fact an invalid assumption
<niemeyer> negronjl: ensemble ssh $UNIT_NAME/$N
<negronjl> ahh...perfect!  thanks niemeyer
<niemeyer> jimbaker`: It's hard.. I don't blame you
<niemeyer> negronjl: np!
<niemeyer> negronjl: You may also be interested in "ensemble debug-hooks"
<niemeyer> negronjl: It's quite fun
<negronjl> niemeyer:  I'll make sure to check it out
<niemeyer> negronjl: A bit like "gdb for hooks" :-)
<negronjl> niemeyer:  perfect!  exactly what I'm looking for
<kim0> negronjl: check out https://ensemble.ubuntu.com/docs/write-formula.html#debugging-hooks and let me know how to improve it :)
<jimbaker`> niemeyer, in your new code you ignore this transition from the set to NotExposed (or deleted), because it's safe to setup such watches, they will simply terminate immediately
<jimbaker`> niemeyer, so keeps things simple
<jimbaker`> niemeyer, and correct
<negronjl> niemeyer:  hadoop-master and hadoop-slave are done.  Would you point me to how to submit this for you guys?
<negronjl> niemeyer:  I presume that I would submit this to principia?
<negronjl> niemeyer:  now that I have a better understanding, I plan on porting all of the other orchestra-modules to ensemble.
<kim0> negronjl: check out https://ensemble.ubuntu.com/Principia
<negronjl> kim0:  perfect!  thx
<niemeyer> negronjl: Woah, that's awesome!
<negronjl> thx niemeyer
<niemeyer> negronjl: Man, can't wait to deploy my first hadoop cluster with Ensemble
<niemeyer> hazmat: Have we missed the weekly meeting timing?
<niemeyer> hazmat: Or did I get it wrong?
<niemeyer> kim0: Have you been merging your branches?
<kim0> nope
<niemeyer> kim0: Ok, please let me know when you do the suggested FAQ tweaks then, and I'll comit
<niemeyer> commit
<kim0> niemeyer: that was the branch https://code.launchpad.net/~kim0/ensemble/updating-faq
<kim0> niemeyer: don't know if it's merged or not
<niemeyer> kim0: I know, I just reviewed it
<niemeyer> kim0: https://code.launchpad.net/~kim0/ensemble/updating-faq/+merge/64679
<kim0> ah ok .. will update the branch
<kim0> Is there any scientific explanation for a browser tab spinning/loading for 30 mins?! I mean TCP should either time out or retry, right
<niemeyer> kim0: Long polling is one of them
<niemeyer> kim0: I.e. its constantly retrying to wait for updates from the server
<niemeyer> it's
<kim0> like gmail .. I don't think it keeps spinning in that case
<niemeyer> kim0: There are different implementations possible
<niemeyer> kim0: http://tools.ietf.org/html/rfc6202
<kim0> thanks!
<kim0> although I'm actually inclined to think it's more of a bug than a website feature .. since refreshing the page, would finish loading in a couple of seconds
<kim0> anyway, nvm
<niemeyer> kim0: Possibly
<SpamapS> kim0: did you get my msg about my post not syndicating onto cloud.ubuntu.com ?
<kim0> yeah just replied
 * kim0 mostly afk 
<negronjl> niemeyer:  Filed bugs #798421 (hadoop-master) and #798422 ( hadoop-slave).  Feedback is well appreciated.
<_mup_> Bug #798421: new-formula (hadoop-master) <new-formula> <Principia Ensemble:New> < https://launchpad.net/bugs/798421 >
<_mup_> Bug #798422: new-formula (hadoop-slave) <new-formula> <Principia Ensemble:New> < https://launchpad.net/bugs/798422 >
<niemeyer> negronjl: Awesome, thanks a lot!
<negronjl> niemeyer:  np.  I'll be working on the rest of 'em shortly.
<hazmat> negronjl, this is the hdfs name node and job tracker re hadoop-master?
<negronjl> hazmat:  I don't quite get your question
<hazmat> negronjl, i'm just trying to understand what bits are managed by the hadoop-master formula
 * hazmat digs through the source
<negronjl> hazmat:  I have to go to a meeting now.  Do you mind if we talked about this in a while ( 30 minutes or so )
<hazmat> negronjl, sounds good
<SpamapS> interesting
<SpamapS> 3 interfaces, does that work?
<SpamapS> negronjl: have you deployed these yet?
<negronjl> SpamapS:  I have
<hazmat> SpamapS, any reason you thought they wouldn't work?
<SpamapS> The only confusing part is the 3 interfaces
<SpamapS> ensemble-log "Point your browser to http://${public-hostname}:50070"
<SpamapS> Also the variable public-hostname isn't ever set
<hazmat> hmm.. yeah.. three interfaces under one name is a bit strange
<hazmat> not sure that's valid
<SpamapS> Well there's really no reason for 3 interfaces
<SpamapS> negronjl: can you explain what you were trying to accomplish there?
<SpamapS> negronjl: (BTW this is *really* cool)
<negronjl> SpamapS:  let me get through my meeting and I'll work with you guys on this
<SpamapS> m_31: ping, we're discussing hadoop, you should be paying attention. :)
<SpamapS> negronjl: ahh yes! when you have time then, just ping us
<hazmat> negronjl, only the last interface will survive, it's overwriting a duplicate key otherwise
<hazmat> negronjl, ttyl
<SpamapS> ahh more abuse of the "what is my IP" paradigm. I feel quite personally responsible for that.
<hazmat> negronjl, and i agree with SpamapS this is very cool
<hazmat> SpamapS, almost as much as i ;-)
<SpamapS> I'm sure hostnames will suffice for the places where IP is being used.
<hazmat> SpamapS, i thought you'd switched out principia to getting things off ifconfig?
<SpamapS> The one place where I'm resolving them in the formula into IP's, is memcached because I don't want web requests blocking on DNS.
<hazmat> instead of the md server.. but i guess the dns name still needs the md server or external actor providing this info
<SpamapS> hazmat: just yesterday I switched almost everything to getting things from DNS
<hazmat> SpamapS, sure... but they get cached typically so the per request overhead should still be negligible
<SpamapS> I'm looking into whether or not to also include running a local caching name server as well.
<SpamapS> hazmat: they don't get cached by PHP very effectively. :-P
<SpamapS> hazmat: it does the equivalent of nscd .. cache it "forever"
<SpamapS> hazmat: but yeah, it may not be worth it to resolve in formula even in that case.
<SpamapS> I think I'll add a principia proof warning about querying the metadata service
<m_3> negronjl: just branched them... awesome!
<_mup_> ensemble/status-dot-output r259 committed by bcsaller@gmail.com
<_mup_> fix for Bug #792448, unsafe labels in dot graph
<_mup_> ensemble/status-dot-output r260 committed by bcsaller@gmail.com
<_mup_> example of previously bad input, used by tests
<negronjl> ahh...meeting is going long guys.  I've been reading SpamapS about the three interfaces and I think I can get it all done with just the one interface.  give me a few minutes to fix
<_mup_> ensemble/relation-get-eval r228 committed by bcsaller@gmail.com
<_mup_> remove shell variable prefix, its now implicit
<SpamapS> Ok I just added a check in principia's proof command that warns if you use the metadata service directly...
 * SpamapS now starts cleaning all of those up
<negronjl> SpamapS:  can you elaborate on using metadata service directly?
<negronjl> SpamapS: I'm new to this hence all the stupid questions
<SpamapS> negronjl: I feel a lot more stupid trying to grok hadoop than you do asking about my cryptic communication style. ;)
<SpamapS> negronjl: Basically that would mean the formula would be undeployable outside of EC2
<SpamapS> negronjl: in addition, that information is available in DNS. The one thing that isn't is the public hostname, but thats really something we need to make ensemble provide.
<negronjl> SpamapS: ahh...it all makes sense now.  I remember the very "nice" conversation on one of the channels about ipaddr
<SpamapS> negronjl: "nice" :)
<SpamapS> negronjl: the appropriate thing to do is send around hostnames, and if you *MUST* resolve it to an IP, resolve it where you are going to use it, not where you are sending it from.
<negronjl> SpamapS: I can always ifconfig ... awk ... echo ... sed ... etc. :)
<SpamapS> That way you can take into account whether or not you have IPv6 .. and any search domains
<SpamapS> negronjl: nooo thats the next evil hack I'm going to add a warning for. :)
<SpamapS> negronjl: hostname -f gives you the FQDN of your machine. Send that.
<negronjl> SpamapS: I knew that would get a kick out of ya :)
<negronjl> SpamapS:  Would it be better to use hostname instead of IP?  Also, where do I get the Public DNS from if not from the meta-data ?
<SpamapS> negronjl: yes, hostname -f should always be resolvable from any node that can reach you directly..
<SpamapS> negronjl: for the public hostname, you can get it from 'whatismyip.com' .. 
<SpamapS> negronjl: just kidding btw.. I think we need Ensemble to provide a machine info tool that works across any machine provider.
<negronjl> SpamapS:  so, for now, I'll use the hostname -f for internal and meta-data for external ( until I get something better )
<SpamapS> negronjl: when do you *actually* need the public hostname, programmatically?
<SpamapS> negronjl: the only space I see you attempting to use it is to print, into the debug log, the hostname people should use to access the server.
<SpamapS> negronjl: which you can get from 'ensemble status'
<negronjl> SpamapS:  I'll do that instead.
<negronjl> SpamapS:  taking it out
<SpamapS> negronjl: \o/
<SpamapS> Now if we only had an LXC machine provider...
<negronjl> SpamapS:  I pushed all of the changes we discussed here (one interface instead of three, hostname -f instead of meta-data, remove the public-dns bit )
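The "send hostnames, not IPs" advice above boils down to a one-liner in a relation-joined hook. A minimal sketch; the echo stands in for the actual `relation-set` call so it runs outside a hook environment:

```shell
#!/bin/sh
# Per the advice above: exchange the FQDN, not an IP address,
# and let the receiving side resolve it where it's actually used.
# In a real relation-joined hook the echo would instead be:
#   relation-set hostname="$fqdn"
fqdn=$(hostname -f 2>/dev/null || hostname)
echo "would send: hostname=$fqdn"
```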
<SpamapS> pulling
<jimbaker> SpamapS, want to discuss bug 766317 ?
<_mup_> Bug #766317: debug-log should show relation settings changes <Ensemble:In Progress by jimbaker> < https://launchpad.net/bugs/766317 >
<negronjl> SpamapS:  deploying ( and praying a bit )
<SpamapS> negronjl: principia proof reports this:  W: all formulas should provide at least one thing
<jimbaker> basically this addresses observability of formulas. obviously i can readily write a utility to grab this zk info and show how it changes
<SpamapS> negronjl: for hadoop-slave .. the reason is, if it doesn't provide anything, how can other services consume what it does?
<jimbaker> w/in the constraints of any applied security on zk, of course
<negronjl> Spamaps:  you don't consume anything out of the hadoop-slave nodes
<SpamapS> jimbaker: Right, I'd prefer that we not log all that data (even though we do now.. that's something I think we should change), and not log these credentials.
<negronjl> SpamapS:  ... your own wordpress formula (/usr/share/ensemble/examples/wordpress ) doesn't provide anything either :D
<jimbaker> SpamapS, debug-log is certainly not intended for actual logging
<SpamapS> negronjl: yes, thats a bug, it should provide 'website'
<jimbaker> of course as it is right now, it's just going through ZookeeperHandler, so it has the potential to leak to other handlers i suppose
<SpamapS> negronjl: interesting though.. these essentially just contact the master and "help" it..
<negronjl> SpamapS: I guess I can provide some dummy interface if I have to but, you are correct...they just help the master do its thing
<SpamapS> jimbaker: the log file that I'm concerned about is the formula log, which seems to be related to debug-log
<SpamapS> negronjl: well it may be that this is an exception worth making.
<SpamapS> negronjl: can the master do anything useful without the slaves?
<negronjl> SpamapS:  not really
<SpamapS> negronjl: so maybe flip them.. master requires: hadoop-slave
<jimbaker> SpamapS, correct as i recall debug-log is effectively collating what goes into agent logs like the formula log, through the standard log handler mechanism in python
<SpamapS> and provides .. whatever it is that other services consume from it.
<SpamapS> negronjl: the reason I wrote that "everything must provide one thing" is because conceptually (while maybe not pragmatically), it's true.
<negronjl> SpamapS: let me see if I understand it.  Let me play with it for a bit
<jimbaker> SpamapS, i don't have enough log-fu to know if it is possible to ensure that some things are never written to a specific handler
<SpamapS> negronjl: before you go too far down that rabbit hole..
<negronjl> SpamapS:  yeah?
<SpamapS> negronjl: This seems like just the beginning. How do other services utilize the master?
<jimbaker> SpamapS, but we could make it such that by default it is not written to such formula logs, that should be doable
<negronjl> SpamapS:  they upload files to it  (jar files and such) for hadoop to process
<SpamapS> negronjl: also, one other mistake your formulas have, which is not always obvious: sometimes your 'relation-get' won't return anything, because the other side won't have done its 'relation-set' just yet... you have to test that the values are actually set.
<jimbaker> SpamapS, anyone who could overwrite this presumably can sub in arbitrary code in anyway
<jimbaker> SpamapS, so maybe that will address your concern?
<negronjl> SpamapS:  should I trust that ensemble will re-run the script when the relation-set part gets done?
<SpamapS> jimbaker: I'd rather just never see values logged by ensemble. It's far simpler for formula authors to choose when they do that; if somebody needs more observability, they should use debug-hooks or just alter the formula.
<SpamapS> negronjl: yes
<jimbaker> SpamapS, well at some point there's going to be a utility written for this because it's rather useful in my experience
<SpamapS> negronjl: if you look, a lot of the  -changed formulas do relation-get, and if all the values aren't set, just exit 0
<negronjl> SpamapS:  ok.  I can do that
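The guard SpamapS describes can be sketched as a small function. In a real -changed hook the two values would come from `relation-get` and the early return would be an `exit 0` (Ensemble re-runs the hook once the remote side has done its relation-set); the names here are illustrative:

```shell
#!/bin/sh
# Guard pattern for a -changed hook: if the remote unit hasn't
# relation-set its values yet, bail out quietly and let Ensemble
# invoke the hook again later.
configure_if_ready() {
    host=$1   # in a real hook: host=$(relation-get host)
    port=$2   # in a real hook: port=$(relation-get port)
    if [ -z "$host" ] || [ -z "$port" ]; then
        return 0        # in a real hook: exit 0
    fi
    echo "configuring backend $host:$port"
}

configure_if_ready "" ""              # too early: prints nothing
configure_if_ready db0.internal 3306  # values present: configures
```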
<SpamapS> jimbaker: useful and secure are often at odds. :)
<jimbaker> given the ease of access to ZK, even if it's simply third party
<jimbaker> SpamapS, agreed on how security is always getting in the way ;)
<SpamapS> jimbaker: I don't mind that the data is in zookeeper at the time. But I'm considering instances where people may exchange something like tokens or encryption keys and then delete them, but they might be useful for decoding sniffed traffic later.
<SpamapS> jimbaker: http://dev2ops.org/storage/WallOfConfusion.png
<jimbaker> SpamapS, yeah, that's a good one for scenarios like this
<SpamapS> negronjl: I *love* that the heavy lifting is all done in debconf
<negronjl> SpamapS:  we have iamfuzz to thank for that :)
<SpamapS> jimbaker: Maybe if the agents didn't log this *by default* (they seem to log at DEBUG level right now) I'd be more inclined to support it.
<SpamapS> negronjl: so why aren't these in the Ubuntu archive?
<jimbaker> SpamapS, for stuff like that it's just a matter of choosing appropriate levels and handlers
<negronjl> SpamapS:  they currently depend on [sun|oracle]-java
<negronjl> SpamapS:  currently working with cloudera to fully support openjdk
<SpamapS> negronjl: multiverse would allow for that
<jimbaker> SpamapS, so we could simply have the policy that debug-log captures debug, but the default level is INFO or higher, something like that
<negronjl> SpamapS:  I think it's going into partner but, I'm not sure.
<SpamapS> jimbaker: Right. the problem is that i'm fairly certain I have no way of changing the debug log because of the way unit agents are started. ;)
<SpamapS> negronjl: partner has the added benefit of being turned on by default. :)
<negronjl> SpamapS:  true that :)
<SpamapS> negronjl: ok, so people "upload" stuff to these as jars. Is there a standard way to do that?
<jimbaker> SpamapS, i need to run, ttyl
<SpamapS> like, WebDAV, scp, ftp?
<negronjl> SpamapS:  not really...you have to create a directory, then change permissions, then change user, then (and only then) run your "job"
<negronjl> SpamapS:  normally I have done it using scp
<negronjl> SpamapS: and ssh
<SpamapS> negronjl: Ok, because thats what the master should end up "providing"
<SpamapS> looks like there's a website too
<SpamapS> negronjl: so provides website: interface: http .. and then just set the hostname / port.
<negronjl> SpamapS:  The master has two websites
<negronjl> SpamapS:  one on port 50030 and another one ( for a different purpose but just as important ) on port 50070
<negronjl> SpamapS:  so, so far your suggestion is for the slave to provide hadoop-master and for the master to provide website: interface http ?
<negronjl> SpamapS: If so, can I provide both pages (50030 and 50070)?
<SpamapS> negronjl: you don't have to call it 'website'
<SpamapS> negronjl: you can do 'website-foo' and 'website-bar'
<SpamapS> negronjl: what are the two ports' purposes?
<SpamapS> negronjl: the slave should provide hadoop-slave .. the master should require hadoop-slave, and provide those two websites.
<niemeyer> SpamapS: If the protocol is the same, both interfaces should be named the same way
<SpamapS> niemeyer: same interface, different relation name
<niemeyer> SpamapS: Right
<niemeyer> SpamapS: The interface should be "website", right?
<niemeyer> SpamapS: That's what we agreed yesterday, at least
<SpamapS> http://paste.ubuntu.com/628196/
<SpamapS> the interface is just http
<SpamapS> Or did I forget something?
<niemeyer> SpamapS: Yesterday we agreed to use 'website' as the interface
<SpamapS> Oh
<SpamapS> for what exactly?
<niemeyer> SpamapS: For an interface which had only "url" as a relation setting
<SpamapS> Oh, I wasn't part of that discussion. Interesting.
<m_31> do we want to separate out DFS-type services from mapreduce-type services? 
<m_31> or interpret "website" as "webservice"
<SpamapS> Makes a lot of sense tho. I like the idea of specifying the actual protocol though. Some url handlers don't handle FTP...
<niemeyer> SpamapS: You actually were part of it
<niemeyer> SpamapS: We just didn't understand each other
<niemeyer> SpamapS: I think "http" is too much detail about what the service provides
<niemeyer> SpamapS: Most clients that support "http" will also support "https"
<SpamapS> Yeah that I recall
<SpamapS> I didn't remember that we had settled on URL, but I do like it. 
<niemeyer> SpamapS: So "website" feels like a good name for a user-oriented view that can be both http or https in a "url" setting
<niemeyer> SpamapS: That reminds me, we need to put these in the wiki
<SpamapS> I'd almost say that interface should actually be called "url" .. the website name is just a standard convention for relation name I've been using for web apps.
<niemeyer> SpamapS: https://ensemble.ubuntu.com/Interface/<name>
<niemeyer> ?
<niemeyer> SpamapS: That may be too much
<niemeyer> SpamapS: "url" could be "mongodb://..."
<niemeyer> SpamapS: To make sense it needs some additional semantics to make auto-resolving work
<SpamapS> yeah website at least implies "web"
<niemeyer> SpamapS: "website" provides the semantic meaning, without binding to the specific protocol
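Put together, the convention being settled on here (relation names like website-foo/website-bar sharing a single "website" interface that carries a "url" setting) might look roughly like this in a formula's metadata.yaml. A sketch only; the field layout follows the examples shipped with Ensemble at the time, and the service/relation names are illustrative:

```yaml
name: hadoop-master
revision: 1
summary: Hadoop master (namenode and jobtracker)
requires:
  hadoop-slave:
    interface: hadoop-slave
provides:
  website-namenode:     # HDFS status UI, port 50070
    interface: website
  website-jobtracker:   # job tracker UI, port 50030
    interface: website
```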
<SpamapS> For the interface docs.. I've been thinking about it too. I was trying to think if there's a way we can express it with a testing framework that could actually verify if something that says it "provides" "website" does.
<niemeyer> SpamapS: That's pretty interesting.. I think we can do something about that
<SpamapS> niemeyer: one reason I've been doing http specifically is that haproxy and IPVS don't care about urls.. they care about host and port only.
<niemeyer> SpamapS: Well.. they do care about whether it's http or not, IIRC
<niemeyer> SpamapS: haproxy, at least
<SpamapS> but I suppose I can just parse that out relatively easily.
<niemeyer> SpamapS: Agreed
<SpamapS> url, and 'check_url' would be a good optional thing to be able to set.. so that load balancers know the specific url to hit for health checking.
<niemeyer> SpamapS: Definitely.. the interface page in the wiki could document optional settings as well 
<SpamapS> niemeyer: I'd like to have the interface docs in revision control.. not sure if the wiki's history is enough.
<SpamapS> so maybe .rst for the interfaces
<niemeyer> SpamapS: Hmm
<niemeyer> SpamapS: I thought about the wiki to more easily allow the community to contribute/debate
<SpamapS> Yeah.. I can see both sides.
<SpamapS> I think as long as we point to one as "the canonical source of documentation for that interface", it will work.
<SpamapS> Just feels like .rst would be more authoritative.
<SpamapS> at the expense of community members needing to jump through more hoops to document their interfaces.
<niemeyer> SpamapS: Sounds good to me
<niemeyer> SpamapS: We should go with whatever you feel most comfortable with
<niemeyer> SpamapS: and we can change, of course
<niemeyer> SpamapS: But this is an area that will need your attention for sure
#ubuntu-ensemble 2011-06-17
<jimbaker> given that it's easy enough to have the wiki page in rst format, shouldn't really matter in terms of keeping content
<SpamapS> niemeyer: one thing that is proving difficult is parsing out the url in shell scripts. Its a lot easier if the scheme/hostname/port/path are already split up in the relation settings...
<niemeyer> SpamapS: Hmmm.. that's a nice utility we can provide
<niemeyer> SpamapS: url -host $FOO
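Until such a utility exists, the splitting can be approximated with POSIX parameter expansion alone. A rough sketch (the example URL is made up):

```shell
#!/bin/sh
# Split a "url" relation setting into scheme/host/port/path using
# only shell parameter expansion, no external tools.
url="http://wiki0.internal:8080/mediawiki/index.php"

scheme=${url%%://*}      # everything before ://
rest=${url#*://}         # host:port/path
hostport=${rest%%/*}     # host:port
path=/${rest#*/}         # path with leading slash restored
host=${hostport%%:*}
port=${hostport#*:}
if [ "$port" = "$hostport" ]; then
    port=80              # no explicit port in the URL
fi

echo "$scheme $host $port $path"
```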
<devcamcar> hey all, i'm evaluating how ensemble can be used alongside openstack - is there a story around multi-tenancy for ensemble? 
<SpamapS> devcamcar: ensemble *should* work with an openstack implementation, but I don't think anybody has tested it in a while.
<SpamapS> devcamcar: as far as "multi tenancy" .. each tenant would have their own environment separated by access key ID's
<SpamapS> niemeyer: it seems a bit redundant to concatenate all of that every time into a URL, just to split it back out half the time into its parts anyway.
<SpamapS> niemeyer: so far the only two things that require a relation with something that would use the website interface are haproxy and squid reverse proxy.. both of which would need to split it up..
<devcamcar> SpamapS: i assumed as much, thanks!
<niemeyer> SpamapS: A url is by far the most common way to refer to a website..
<negronjl> niemeyer:  do you have an image-id so I can use m1.large instances ?
<negronjl> niemeyer:  actually does ensemble use a particular image-id or can it use a default ubuntu 11.04 image ?
<niemeyer> negronjl: We're using a custom base image mostly to speed booting up.. in the future it'll be just a plain Ubuntu image
<negronjl> niemeyer:  thx
<niemeyer> negronjl: You should be able to just tweak the default-instance-type to m1.large. Is that not working?
<negronjl> niemeyer:  no because m1.large is x86_64
<negronjl> niemeyer:  the image that you are using is i386
<niemeyer> negronjl: Hmm.. interesting
<niemeyer> negronjl: We certainly have one for x86_64 readily available
<negronjl> niemeyer:  if you have it, I can use it. no big deal if it is too much trouble.
<negronjl> niemeyer:  m1.large allows me to work faster ( currently working on tomcat )
<niemeyer> negronjl: Yeah, we'll have to generate a new one I think
<negronjl> niemeyer:  no worries
<niemeyer> negronjl: It's not a big deal, we have a tool to automate it
<negronjl> niemeyer:  if/when you get it, ping me and let me know please
<niemeyer> negronjl: Will do.. will try to have that available for you tomorrow
<negronjl> niemeyer: cool. thx
<_mup_> ensemble/relation-get-amp-exceptions r259 committed by bcsaller@gmail.com
<_mup_> fix for #792071
<_mup_> ensemble/relation-context-required r242 committed by bcsaller@gmail.com
<_mup_> Fix for #792071 with proper pre-req branch
<m_31> negronjl: ping
<negronjl> m_31: pong
<m_31> negronjl: hey man... which orchestra scripts are you porting over to ensemble next?
<negronjl> tomcat
<negronjl> I have tomcat working but, I haven't gotten to the clustering part yet.
<m_31> negronjl: cool... are you going by some hitlist?
<negronjl> I'll do that tomorrow or so
<negronjl> nah....just picking them at random.
<negronjl> low hanging fruit + interesting stuff
<negronjl> are you porting any of them ?
<m_31> cool.  I'm trying to get up to speed on formulas in general
<m_31> working on a rails one at the moment
<negronjl> cool.  these guys are pretty helpful.  I know I asked a million stupid questions today alone :)
<m_31> but I'll dump the hadoop one and try to build some demos on top of yours
<m_31> I was just starting on the relations for them, but what you did makes sense
<negronjl> m_31: didn't mean to step all over your work btw.  
<m_31> dude, no problem at all... it was cool to see your relations
<m_31> I'll coordinate with you before hitting any more orchestra ones
<negronjl> cool
<m_31> gonna hit the sack... talk to you tomorrow
<negronjl> m_31:  gnite
<kim0> I wonder if 'ensemble set' has landed yet
<_mup_> Bug #798652 was filed: Ensemble needs bash completion support <Ensemble:New> < https://launchpad.net/bugs/798652 >
 * niemeyer waves
<niemeyer> Such a quiet morning... must be Friday!
<kim0> hmm
<kim0> memcached_ips.append("'%s'" % getaddrinfo(settings['host'],int(settings['port']))[4][0:2].join(':'))
<kim0> breaks when getaddrinfo returns a list
<kim0> need to ping SpamapS 
<kim0> SpamapS: this one seems to work for me → memcached_ips.append("'%s'" % ':'.join(map(str,getaddrinfo(settings['host'],int(settings['port']))[0][4][0:2])))
<kim0> that's in mediawiki/hooks/cache-relation-changed
<negronjl> good morning all
<hazmat_> negronjl: g'morning
<negronjl> hi hazmat_
<m_3> negronjl: morning
<negronjl> morning m_3
<kim0> morning
<niemeyer> negronjl: Morning!
<negronjl> niemeyer:  gmorning
<SpamapS> kim0: Oh I have that change in my local branch and forgot to push it!
<SpamapS> kim0: getaddrinfo *always* returns a list
<kim0> ah ok then
<kim0> SpamapS: is there anything in the default haproxy config that would make it balance unfairly ? (i.e. one machine getting ~100 hits, while the other getting ~20) ?
<SpamapS> kim0: not that I know of, but I have seen that
<SpamapS> kim0: I'm, unfortunately, not at all versed in haproxy-fu
<kim0> yeah me neither
<kim0> I'm testing with: ab -n 1000 -c 100 http://localhost/mediawiki/index.php/Main_Page
<kim0> from the haproxy machine
<SpamapS> was thinking about doing an ipvs formula for that very reason. :-P
<SpamapS> kim0: be careful, ab may be using keepalives
<SpamapS> kim0: honestly, ab *sucks*
<kim0> hehe
<SpamapS> its nothing at all like real traffic
<kim0> I think it needs -k for keep alives
<negronjl> SpamapS:  I noticed that on the wordpress formula, the website-relation-joined is setting the hostname as hostname -s as opposed to hostname -f.  am I missing something ?
<SpamapS> negronjl: no, thats a bug
<negronjl> SpamapS:  ok.  I'll make sure not to do that :)
<SpamapS> negronjl: though one that doesn't affect anything because all machines have the same search domain (.internal)
<kim0> trying to get a nice linear boost in #/sec when adding machines .. real life is complex though ;)
<SpamapS> negronjl: I hardly ever look at the principia wordpress formula.. mediawiki is much better maintained. :)
<negronjl> SpamapS:  any reason why there isn't any apache formula ( just apache ) ?
<SpamapS> kim0: we need to enable the admin interface on some port so we can watch it
<SpamapS> negronjl: because apache is too generic
<negronjl> SpamapS: k
<SpamapS> negronjl: where would it get its files to serve?
<SpamapS> negronjl: I guess we can have one once we have shared filesystem formulas
<negronjl> SpamapS:  you can create the site and then have the user upload the files.
<SpamapS> negronjl: thats not very interesting though, is it?
<SpamapS> like.. thats a single line of cloud-config
<negronjl> SpamapS:  I agree with it being generic but, I'm afraid that if you want ensemble to be widely used, you'll have to have something generic like that unless the plan is to have the users generate all of their formulas on their own and just use ensemble to deploy stuff
<SpamapS> A formula where you also have a webdav interface, authentication of some kind, and rsyncing between nodes via peer relations.. that might be cool. :)
<SpamapS> negronjl: walk me through the user experience you're envisioning
<negronjl> SpamapS:  .... in orchestra I did a generic apache that creates a site for you and then it leaves it up to you to upload the files.  this way you can concentrate on creating your content and not battle apache with the details
<SpamapS> negronjl: does *anybody* create static sites?
<SpamapS> negronjl: I'd say that joomla/drupal/{insert cms here} is far more useful for the generic site case.
<hazmat_> i see a formula around that being more like deploy generic rails app or deploy wsgi app..
<SpamapS> negronjl: Actually a really awesome formula would be a mod_rewrite based box that just aggregates all the other apps into one site.
<negronjl> SpamapS:  so, what if I have my own app.  is the expectation that I will create my own formula that will deploy apache, etc. and all of my files ?
<SpamapS> hazmat_: that just turns into a template.. since to configure generic rails app is not at all the same.. some are db driven, some have queues..
<SpamapS> Though templates would be good
<niemeyer> hazmat_: I was wondering yesterday if we should start using plain Ubuntu images rather than our own customized ones
<niemeyer> I wonder what the timing impact of that would really be
<negronjl> +1 on regular images
<SpamapS> negronjl: well.. you are talking about a single line of bash.. apt-get install apache2 ;)
<negronjl> try to deploy a hadoop cluster or tomcat on m1.small and you'll see what I mean
<negronjl> SpamapS:  and all of the hooks for haproxy as well
<negronjl> SpamapS: and all of the mods that go into my app (php, python, etc.)
<niemeyer> negronjl: We can provide you with a m1.large anyway, and we'll definitely use plain base images in the future no matter what.  Just wondering if we should ignore the extra time it takes to do the extra tweaks before we integrate them.
<SpamapS> negronjl: I like the idea of a template for these.
<hazmat_> SpamapS: true, optional relations for queues would be doable, but you're right the heart of it is configuration based dependencies, which fall outside ensemble's scope atm. but a lot of the simple rails/django apps are just the core functionality (db server, and optional memcache).. django at least has a reasonably standard way of configuring this stuff
<negronjl> SpamapS:  I see your point about it being too generic.  I just think that we can't possibly know all of the apps out there so, something generic for the rest may not be a bad idea.
<SpamapS> hazmat_: +1 for framework formulas
<negronjl> SpamapS:  Is there something like erb that can be used with ensemble ( does it matter what I use )?
<m_3> seems like it'd be useful for a rails or django framework that pulls code from a user-specified repo
<SpamapS> negronjl: You can use *anything*
<kim0> I use sed :)
<negronjl> SpamapS:  I like that :D
 * SpamapS likes augtool when the lenses are available
<m_3> cat > file <<EOS
<SpamapS> negronjl: you can use *puppet* if you're so inclined. :)
 * negronjl is going to start using BASIC :D
<m_3> ha
<kim0> haha
 * negronjl is j/k btw
<SpamapS> yeah rails-app should actually be a doable formula.. since configuring the known components will be easy. And then people can add to it for their app if they want things different.
<m_3> SpamapS: almost done with it now
<niemeyer> negronjl: Phew! ;-)
<m_3> rails3/passenger/pulls the app from github
 * niemeyer was worried for a second we'd go back to Basic
 * SpamapS earned his first programming $$ writing "PickBasic"
<SpamapS> not PIC BASIC
<niemeyer> SpamapS: I started with Basic as well, in the mid-80s
<m_3> trs80
<m_3> ugh
<SpamapS> http://en.wikipedia.org/wiki/Pick_operating_system
<SpamapS> The OS that you never heard of, but that made DELL successful. :)
<SpamapS> It was the "ONS" .. as in.. "Original No-SQL"
<SpamapS> D3 actually had the distinction of being the first commercial database software available on Linux
<negronjl> trs80 rules!
<m_3> negronjl: with the big 8" floppy drives even
<negronjl> m_3:  I didn't have $$ for that so, I had the cassette tapes :)
<m_3> negronjl: actually seen a revival of one of those recently where they gutted it and put a modern computer inside
<m_3> yeah, they had them at school
<negronjl> m_3:  There is an emulator in the archives: xtrs
<m_3> ok, height of sheer laziness: http://paste.ubuntu.com/628504/
<m_3> negronjl: nice
<m_3> simple rails formula works now... does a basic install, pulls code during db-relation-joined with mysql, then spins up passenger
<m_3> I'll get it into launchpad
<m_3> I'm gonna dig through the docs and examples now to figure out the best way to externalize things like "application_name" "application_repo_url" etc
<SpamapS> bcsaller: did I see you start a branch for fixing the dot output?
<bcsaller> SpamapS: it works now
<SpamapS> m_3: config settings might be all thats needed.. 'ensemble set my-rails-app pullcmd="svn co svn://foo /srv/app && /srv/app/setup.rb"'
<m_3> SpamapS: perfect
<SpamapS> m_3: though thats where I think, instead, we should just encourage people to make that into a formula for their app
<bcsaller> SpamapS: it works on that branch rather, it was a minor issue. Just needs reviews 
<SpamapS> Actually, once stacks are available, that will be the right way to do it probably.
<SpamapS> bcsaller: well, let me see if I can perform a review by testing it. :)
<negronjl> How do I define dynamic options to a formula?  Is there a difference?
 * SpamapS wants purty graphs
<m_3> SpamapS: right, understand
<SpamapS> negronjl: thats still in dev IIRC
<negronjl> SpamapS:  k. thx
<SpamapS> negronjl: I was reading a little more about HDFS and hadoop and stuff. I wonder if we shouldn't go a little further and write Flume and Pig formulas so people get a single way to shove data in and query it.
<negronjl> SpamapS:  I would say hive first.  In my experience, it would be hadoop then hive then pig and have no idea what flume is.
<SpamapS> negronjl: with the map/reduce .. I'm not sure I understand how that scales out ... seems like the slave/master only scales the I/O .. not the compute
<negronjl> SpamapS:  but, I agree with the principle
<SpamapS> negronjl: flume is an easy way to get data into hadoop
 * SpamapS reads up on hive
<negronjl> SpamapS:  hive => SQL interface to hadoop
<SpamapS> I thought thats what PIG was ;)
<negronjl> SpamapS:  you write things like SELECT * from <TABLE> and hive translates that into hadoop
<negronjl> SpamapS:  pig => easier hive with less options and bells and whistles
<SpamapS> Ahh cool
<negronjl> SpamapS:  people that use hive look down on the people that use pig :)
<negronjl> SpamapS:  but, I agree to take the formulas a step further..
<negronjl> SpamapS:  Let me finish tomcat first.
<negronjl> SpamapS:  working on tomcat clustering
<SpamapS> So.. since all of this cool stuff is not in Ubuntu yet.. I'm getting more and more tempted to make principia make a separate mrconfig that includes all this PPA coolness..
<SpamapS> negronjl: tomcat, in the same sense as m_3's rails?
<negronjl> SpamapS:  for now, in the install, I install the PPA for hadoop.
<negronjl> SpamapS:  for tomcat, it's a bit different because tomcat gives you their admin page that allows you to point and click your way to deploy your WAR files
<negronjl> SpamapS: so, you can just install tomcat and allow it to cluster, put it behind haproxy for LB and you're done.
<m_3> hbase too?
<SpamapS> negronjl: I'm more wondering what tomcat's relations will be other than website. :)
<negronjl> m_3:  hbase too :)
<negronjl> SpamapS:  none
<negronjl> SpamapS: it also provides another thing:  tomcat-cluster ( that's where the clustering takes place ) as a peer
<SpamapS> negronjl: I think we should start exposing an ssh interface for code syncing.
<negronjl> SpamapS:  tomcat has its own thing on a diff port.
<SpamapS> tomcat clustering?
<SpamapS> like, it already will distribute your war file?
<negronjl> SpamapS: yup
<negronjl> SpamapS: hence the peer interface
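A peer relation like the tomcat-cluster one negronjl describes would be declared in the formula's metadata. This is a hedged sketch only — the field layout follows the later metadata.yaml convention, and the names and interfaces here are illustrative, not the actual tomcat formula:

```yaml
name: tomcat
summary: Tomcat application server with built-in clustering
provides:
  website:
    interface: http
peers:
  cluster:
    interface: tomcat-cluster
```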
<SpamapS> *hot*
 * SpamapS hugs negronjl 
<SpamapS> go go go
 * SpamapS brings a case of redbull
<SpamapS> GO
<negronjl> SpamapS: rofl.  working from Starbucks so, wired on coffee :D
<SpamapS> m_3: half those redbulls are for you. :)
 * SpamapS wishes he had time to write formulas all day.. 
<m_3> I like how the debug-log intersperses parallel traffic... has anybody written any log-scanning tools (to separate out units)?
<negronjl> SpamapS:  Hopefully by Dublin, I'll have enough karma coolness to ask you guys to sponsor my UbuntuContributingDeveloper application ( https://wiki.ubuntu.com/JuanNegron/UbuntuContributingDeveloper ) :)
<negronjl> shameless plug above :D
<SpamapS> m_3: the logs are available on the individual units in /var/lib/ensemble/units/$service-$unitid/formula.log
<m_3> SpamapS:  thanks... undercaffeinated(sp?) at the moment
<m_3> ah, thanks
<_mup_> ensemble/debug-log-relation-settings-changes r259 committed by jim.baker@canonical.com
<_mup_> Initial commit
<SpamapS> negronjl: no way I'm going to trash it so that all you can do is work on formulas.
<SpamapS> ;)
<negronjl> SpamapS:  perfect!  I can see the logs!!!!! :D
<SpamapS> m_3: but also the debug-log has -x and -i
<SpamapS> so like: ensemble debug-log -i 'demo-wiki*'
<m_3> this is really a great tool!
<m_3> horizontal scaling: for i in {1..10}; do eau rails; done
<negronjl> SpamapS: ( and anyone else who may know the answer ).  what's the way to see as much as possible of what is happening to the formula as it is being deployed ?
<negronjl> I want to know as much as possible  
<negronjl> about what's happening to the instance as the formula is being deployed
<jimbaker`> m_3, it might be interesting to analyze the debug log, possibly with alternative formatters, to get at things like swimlanes/sequence diagrams
<m_3> jimbaker`: right... all the info's there... just gotta be parsed
<hazmat_> m_3: you can specify individual units, machines, services, log channels via the filtering (include/exclude) options on the logger
<m_3> hazmat_: yep thanks... need to spend time digging through docs of options
<SpamapS> negronjl: debug-hooks is probably the best way then
<jimbaker`> m_3, right, and bug 766317, which i'm working on (albeit it may be controversial), will further enhance this
<_mup_> Bug #766317: debug-log should show relation settings changes <Ensemble:In Progress by jimbaker> < https://launchpad.net/bugs/766317 >
<SpamapS> negronjl: you can run the hook with strace and/or gdb at that point. :)
<m_3> SpamapS: yeah, totally wanna dig through debug-hooks next... 
<negronjl> SpamapS:  cool. thx
<SpamapS> it's awesome, though I'm bummed that it uses tmux now.. as I'm starting to love byobu's alt-pgup/pgdn
<m_3> lots of cleanup to do first in formula writing (idempotency checking, logging, using debconf as much as possible, etc)
<SpamapS> I do think debug-hooks needs to tell me where the hook is that I'm supposed to be running..
<m_3> SpamapS: never did dig through and figure out the session problems with screen -vs- tmux
<kirkland> SpamapS: niemeyer and i exchanged a few emails;  i think i gave him a solution (using run-one) for the race condition in screen
<kirkland> SpamapS: it solved it for me, at least
<m_3> kirkland: I do love me some screen
<kirkland> m_3: have you tried byobu?
<m_3> kirkland: nope
<kirkland> m_3: give 'er a shot
<m_3> kirkland: actually, I think it installs together now right?
<kirkland> m_3: screen on steroids
<kirkland> m_3: yeah, just type 'byobu' instead of 'screen'
<kirkland> m_3: it's screen under the covers
<niemeyer> kirkland: It's a solution, but it's one that makes me quite unhappy :(
<kirkland> m_3: with a bunch of configuration enhancements, status scripts, and keybindings
<kirkland> niemeyer: and why is that?
<niemeyer> kirkland: As you have noticed, these scripts are already far from pleasant overall
<negronjl> I'm looking for something similar to erb ( the ruby template thing ).  Is there anything that you guys currently use/recommend for ensemble formulas ?
<niemeyer> kirkland: Involving additional locks for something that screen should do internally is what's sad
<m_3> negronjl: I think the idea is that things are flexible, so use any of them
<SpamapS> negronjl: I like augeas if the format is a common one (like ini files) .. otherwise cheetah works.
<m_3> negronjl: we've got a tradeoff between creating examples that show that a user has options
<kirkland> niemeyer: would you prefer that I patched screen?
<m_3> negronjl: and not confusing them with too much noise
<SpamapS> negronjl: honestly though, if you like erb, use erb
<SpamapS> I'd actually say use whatever the most clear and simple thing is to read
<SpamapS> that may be where cheetah actually fails a bit. :-P
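As a concrete illustration of the "use whatever is clearest" advice, here is a minimal sketch using only Python's stdlib `string.Template`, so nothing needs installing during the install hook. The template text, keys, and the `templates/` path mentioned are all hypothetical, not part of any real formula:

```python
from string import Template

# Hypothetical template text; in a formula this might be read from a
# path like $formula/templates/server.conf.tmpl
template_text = "port = ${port}\ncluster_peers = ${peers}\n"

def render(template_text, **context):
    """Fill a template with hook-supplied values (e.g. from relation-get)."""
    return Template(template_text).substitute(context)

config = render(template_text, port=8080, peers="10.0.0.1,10.0.0.2")
print(config)
```

`string.Template` only does `${name}` substitution — no loops or conditionals — which is exactly why it reads clearly; anything fancier is a sign the logic belongs in the hook, not the template.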
<negronjl> SpamapS:  in my formulas, can I add another directory either besides hooks or under hooks for my template files ?
<niemeyer> kirkland: Yes, I'd certainly wish that screen implemented this logic by itself in a reliable way
<SpamapS> negronjl: yes the entire root of the formula is copied
<negronjl> SpamapS:  :D .. tomcat clustering here I come :)
<m_3> SpamapS: negronjl: lib!
<SpamapS> negronjl: you can send binaries, though it is, or at least will be, forbidden until we see a good reason for it. :)
<negronjl> SpamapS:  no binaries...just template files ( text files )
<niemeyer> kirkland: I don't know if you have seen the changeset that introduced the tmux/screen replacement
<kirkland> niemeyer: it was a large one, as i recall
<SpamapS> how about  $formula/hooks/share/templates
<m_3> SpamapS: it can be confusing to users if we start using templating tools that have to be installed during the "install" hooks
<koolhead17> hi all
<niemeyer> kirkland: There were several small issues we had to fix there
<m_3> koolhead17: hi
<SpamapS> m_3: I believe we've discussed having required packages be in the metadata at some point.
<koolhead17> hi m_3
<m_3> SpamapS: ok, that'd fix it
<kirkland> niemeyer: okay;  here's another approach
<kirkland> niemeyer: i have begun work on a byobu profile for tmux
<kirkland> niemeyer: which would duplicate byobu's keybindings and status notifications for tmux
<koolhead17> SpamapS: thanks :)
<kirkland> niemeyer: at your request, i took a look at tmux and I have to say there are some things to really like about it
<kirkland> niemeyer: not the least of which is the vibrant, active development community (as compared to screen)
<koolhead17> niemeyer: hey
<SpamapS> kim0: fixed the mediawiki formula btw
<kim0> SpamapS: great thanks
 * kim0 starts weekend .. @everyone enjoy
<koolhead17> kim0: :P
<SpamapS> kim0: cheers! thanks for the intense week!
<kirkland> niemeyer: if the current byobu/screen/run-one locking workaround is unacceptable to you, then I can generate a byobu profile for tmux
<m_3> SpamapS: we might need more than principia... maybe just examples or contrib?
<niemeyer> kirkland: Oh, sweet
<niemeyer> kirkland: Yeah, that would definitely be nice
<kirkland> niemeyer: the main thing I hated seeing was your embedding of tmux configuration into ensemble itself (ie, adding some helpful keybindings, etc.)
<m_3> kim0: later... enjoy
<niemeyer> kirkland: Heh :)
<kirkland> niemeyer: when ***so*** much of that effort has already been directed elsewhere
<SpamapS> m_3: the wordpress and mysql examples in ensemble itself are pretty straight forward..
<niemeyer> kirkland: Sorry, I don't see what you mean?
<SpamapS> m_3: as far as contrib .. I'm hoping that can just be a "component" of principia.. one that is still easy to find
<m_3> SpamapS: I mean there are some formulas to create (like mahout) that might not belong in the "Principia"
<niemeyer> kirkland: What you hated specifically, and what effort has been directed where?
<kirkland> niemeyer: your placing a user-friendliness layer on top of tmux
<m_3> SpamapS: gotcha... it'd be useful for users if we drew a line... _these_ are sort of blessed formulas that will carefully be maintained
<niemeyer> kirkland: You're upset I made it user friendly?
<kirkland> niemeyer: when there's a somewhat common one elsewhere
<m_3> SpamapS:  and _those_ are just crap examples we're excited about
<kirkland> niemeyer: i can try to port that over to tmux
<kirkland> niemeyer: http://paste.ubuntu.com/628519/
<kirkland> niemeyer: that's what i have so far
<kirkland> niemeyer: meager start, but it's committed to byobu's head and in a micro release already
<SpamapS> m_3: The line I want to draw is whether or not the software that they deploy is from Ubuntu. The formulas, to be in principia, will all be as well maintained regardless of which component. At least, thats how I think it should work.
<kirkland> niemeyer: i can hack on it more this weekend
<niemeyer> kirkland: Sorry.. I'm completely missing your point.  You're upset because I copy & pasted my tmux.conf rather than using the one you made moments ago?
<kirkland> niemeyer: of course not
<SpamapS> m_3: mahout looks completely interesting as a formula. :)
<m_3> SpamapS: ok, sounds good.  I was just thinking of all sorts of random stuff.. not all formulas are the same sort of base level service (like mysql)
<SpamapS> m_3: sure. I like the idea of creating a formula around a framework.
<m_3> SpamapS: I'm thinking we'll eventually have a web interface to manipulate services?  we'd have search, and categorizations by provides and requires
<m_3> SpamapS: like building blocks that get wired together
<SpamapS> m_3: there's talk of making that available as part of landscape (as in, for $$)
<SpamapS> though nobody would be precluded from doing that themselves
<m_3> SpamapS: hmmm.. fun stuff over Irish beer
<kirkland> SpamapS: reviewing txzookeeper
<kirkland> SpamapS: ./setup.py:    license="LGPL",
<kirkland> SpamapS: while debian/copyright says MIT for *, and BSD for one file
<kirkland> niemeyer: perhaps you can help
<kirkland> niemeyer: i'm reviewing txzookeeper for the archive
<kirkland> niemeyer: https://launchpad.net/txzookeeper says     GNU LGPL v3 
<kirkland> niemeyer: setup.py says license="LGPL"
<kirkland> niemeyer: debian/copyright says License: MIT
<kirkland> niemeyer: looks like a bug in debian/copyright, as far as I can tell
<niemeyer> kirkland: Isn't the debian/copyright talking about the license for things within debian/*?
<niemeyer> kirkland: txzookeeper is LGPL indeed
<kirkland> niemeyer: there should be at least 2 stanzas
<kirkland> niemeyer: one for the packaging (in debian/*)
<kirkland> niemeyer: but the first one should be for the code that is packaged
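The two-stanza layout kirkland describes looks roughly like this in the machine-readable (DEP-5) copyright format. This is a sketch only — the copyright holders are placeholders, and the `*` license follows niemeyer's statement that txzookeeper is LGPL, not the file as actually uploaded:

```
Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: txzookeeper

Files: *
Copyright: (upstream authors)
License: LGPL-3

Files: debian/*
Copyright: (packager)
License: LGPL-3
```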
<kirkland> niemeyer: i've rejected it for now, sending a note to SpamapS
<kirkland> niemeyer: as soon as he fixes that, and re-uploads I'll accept it
<niemeyer> kirkland: Cool, thanks
<kirkland> niemeyer: fwiw, i just accepted ensemble into oneiric;  congratulations
<kirkland> niemeyer: i think it'll fail to install until we get txzookeeper too, though
<kirkland> niemeyer: hopefully we'll sort all of it out today
<niemeyer> kirkland: Woooo
<SpamapS> kirkland: yeah setup.py was probably cargo-culted in
<m_3> awesome... congrats team
<SpamapS> the only bit with an actual license text was the upstream debian/copyright
<SpamapS> kirkland: thanks, I think we need upstream (read: hazmat and niemeyer) to clarify licensing. :)
<kirkland> SpamapS: niemeyer: true;  on both projects, upstream MUST add a LICENSE file with the complete text of the license
<SpamapS> I think an upstream debian/copyright is authoritative, but it should be made clear in the files.
<kirkland> SpamapS: i might have rejected ensemble on that basis, but I know it's trivial and niemeyer will fix it asap ;-)
 * niemeyer hides
<SpamapS> hazmat_: think you could ram some license clarification into txzookeeper ?
<hazmat_> SpamapS: sure what do you need... i think we had discussed gpl/lgpl.. but possibly apache if it went upstream.. niemeyer ?
<hazmat_> hmm but it has mit license in the copyright deb file
<SpamapS> hazmat_: well the point is, what is the license "today" ?
<hazmat_> SpamapS: lgpl at the moment
<niemeyer> hazmat_: Yes, it's LGPL, and we can move to whatever the ZooKeeper folks wish if we get it upstream
<hazmat_> we've got an okay to relicense it to apache when it heads upstream
 * negronjl is out to lunch
<kirkland> SpamapS: push another upload and i'll approve
<SpamapS> kirkland: w/o any evidence in the upstream bits?
<SpamapS> So, can we just add a COPYING file to the bzr tree with the LGPL, and reference it in at *least* one .py file. Preferably all that are non-trivial.
<SpamapS> I can open a bug and do the work myself if you guys prefer.
<niemeyer> SpamapS: Sounds good to me either way
<niemeyer> SpamapS: I can push a change, commit, or even add you to the project if you prefer that
 * negronjl is back
 * niemeyer waves
<negronjl> niemeyer:  are the formulas and scripts ( install, etc.) run as root ?
<niemeyer> negronjl: They are
<hazmat_> niemeyer: so the overall plan for session events and connection errors, is to allow user defined callbacks (one for each category) to be defined on the connection. session events are by default ignored, connection errors by default errback on the deferred, if connection level callbacks are specified they'll take precedence.. the notion being we can define a connection error handler at the agent level, which can take care 
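The dispatch policy hazmat_ outlines — session events ignored by default, connection errors raised at the call point unless a connection-level callback takes precedence — can be sketched minimally. The class and method names here are hypothetical, not the actual txzookeeper API:

```python
# Sketch of the proposed event/error dispatch policy; not real txzookeeper code.
class ConnectionEvents:
    def __init__(self, session_callback=None, error_callback=None):
        self.session_callback = session_callback
        self.error_callback = error_callback

    def on_session_event(self, event):
        # Session events are ignored unless the user registered a callback.
        if self.session_callback is not None:
            self.session_callback(event)

    def on_connection_error(self, error):
        # By default the error surfaces at the API call point; a registered
        # callback (e.g. "restart the agent") takes precedence.
        if self.error_callback is not None:
            return self.error_callback(error)
        raise error

seen = []
conn = ConnectionEvents(session_callback=seen.append)
conn.on_session_event("connecting")
print(seen)
```

The win of the callback-takes-precedence shape is that an agent can install one policy (restart) without every zk call site growing its own error handling.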
<niemeyer> hazmat_: In which way might we take care of the connection error?
<hazmat_> niemeyer: restarting the agent
<niemeyer> hazmat_: I see.. so aborting it all
<jimbaker`> hazmat, sounds like a reasonable policy - the provisioning agent certainly can be restarted and resynced w/ zk, this should be true of all of our agents
<hazmat_> niemeyer: yes, we could be in any arbitrary point in the code, dealing with any arbitrary state change
<niemeyer> hazmat_: Yes, exactly.. that's my concern
<hazmat_> niemeyer: till we have reconciliation and on disk state, we'll get duplicate hook execution for the joins, and potentially miss some changes.
<niemeyer> hazmat_: You mean it's a bomb? :)
<hazmat_> niemeyer: but i don't see how that's avoidable, its a networked system, and subject to disconnect at any time for any period.
<niemeyer> hazmat_: Error handling in the face of disconnections is a normal problem
<hazmat_> niemeyer: alternatives?
<niemeyer> hazmat_: Error handling?
<hazmat_> niemeyer: wrap connection error handling around every zk interaction?
<niemeyer> hazmat_: Yes, handle errors they can happen, basically
<niemeyer> s/they/when they/
<niemeyer> hazmat_: Eventually we should have a "disconnected" hook
<hazmat_> niemeyer: these aren't normal api errors, they're arbitrary connection errors, defining a central place for that logic seems like a win to me, if we want to recover without the spurious hook execution, we need consistent on disk state reflecting what the hooks have been made aware of, to reconcile to the current zk state after the connection is resumed..
<niemeyer> hazmat_: My concern is that by "central logic" you mean "crashing"
<niemeyer> hazmat_: The problem is that arbitrary connection errors are actually normal errors
<hazmat_> niemeyer: it doesn't have to be, but anything more intelligent needs resumable components (on disk reconciliation to zk imo), but we can't assume the unit agent process is always alive or connected at all times, and right now we have volatile memory state that needs to be captured for any process restarts. alternatively though.. it could just reattempt to connect ad nauseam... but then we have to determine if our session 
<hazmat_> niemeyer: they're not the same imo.. they affect more than just the current api call, they can affect a slew of out of band/path things
<niemeyer> hazmat_: and that's normal!
<niemeyer> hazmat_: The fact we haven't been handling them as such is why it's hard now
 * hazmat_ ponders
<hazmat_> niemeyer: the problem of watchers and ephemeral nodes re-establishment, isn't a local concern, how would local error handling be able to compensate for a session expiry
<niemeyer> hazmat_: It's not about compensating.. it's about handling
<niemeyer> hazmat_: If error, then log problem, wait for reconnection, try again, whatever
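niemeyer's "log, wait for reconnection, try again" per-call handling amounts to a small retry wrapper. A minimal sketch, with Python's builtin `ConnectionError` standing in for a ZooKeeper connection exception and all names hypothetical:

```python
import time

def with_retry(operation, attempts=3, delay=0.01):
    """Run operation; on a connection error, wait briefly and retry."""
    last = None
    for _ in range(attempts):
        try:
            return operation()
        except ConnectionError as err:
            last = err          # log the problem...
            time.sleep(delay)   # ...wait for the reconnection, then retry
    raise last                  # give up after the final attempt

calls = {"n": 0}
def flaky():
    # Simulate a call that fails twice before the connection recovers.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("disconnected")
    return "ok"

result = with_retry(flaky)
print(result)  # ok
```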
<hazmat_> niemeyer: ok.. how would it handle a session expiration
<hazmat_> niemeyer: it can do that locally, but the global application state also needs restoration
<niemeyer> hazmat_: Assuming that the best thing to do in every single path of the application is to crash in case a disconnection happens isn't a good approach IMO
<hazmat_> all of the extant watchers and ephemeral nodes
<niemeyer> hazmat_: Yes, all of that has to be considered..
<niemeyer> hazmat_: Or, we change the model completely
<hazmat_> niemeyer: such as?
<niemeyer> hazmat_: Such as a pull model..
<niemeyer> hazmat_: Get the state and diff
<hazmat_> niemeyer: bingo.. i think that's where we need to go, i think the global recovery can evolve from crash to effecting state diff reconciliation
<niemeyer> hazmat_: That's not the same thing
<niemeyer> hazmat_: This feels like doing things the reversed way
<hazmat_> the individual components do the state diff reconcilliation
<niemeyer> hazmat_: We have a bunch of things running we have no idea about, then we stop everything that was running because we don't know how to handle errors, and then try everything again
<hazmat_> be it service config, or relation watching, etc
<niemeyer> hazmat_: The forward approach is: we always know the next step
<hazmat_> we stop and reconcile because none of those components are functional if we don't
<niemeyer> hazmat_: Because they ignore errors!
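The pull model niemeyer sketches — get the state and diff it against what was last seen — can be illustrated minimally for relation membership. The data shapes and unit names here are hypothetical:

```python
# Sketch of pull-model reconciliation: compare the last-seen member set
# against the freshly fetched one and derive joins/departs to replay.
def diff_members(previous, current):
    prev, cur = set(previous), set(current)
    joined = sorted(cur - prev)    # units to fire joined hooks for
    departed = sorted(prev - cur)  # units to fire departed hooks for
    return joined, departed

joined, departed = diff_members({"wiki/0", "wiki/1"}, {"wiki/1", "wiki/2"})
print(joined, departed)  # ['wiki/2'] ['wiki/0']
```

Persisting `previous` on disk is what lets a restarted agent reconcile without replaying joins it already delivered — the point hazmat_ makes about duplicate hook execution.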
<kirkland> SpamapS: i'm about to knock off for a bit;  did you get around to fix that txzookeeper copyright issue?
<kirkland> SpamapS: ensemble is *almost* installable from the archive ;-)
<hazmat_> niemeyer: hmm.. so alternatively wrapping communications/interactions into components/protocols and rebroadcasting connection failures (ala pub/sub) to all of them for their individual error handling after the connection has been re-established
<hazmat_> niemeyer: i think that's still compatible with a first step of a global connection error handler.
<niemeyer> hazmat_: Yes,  broadcasting errors in case they happen.. I don't see why we need other abstraction layers, though
<niemeyer> hazmat_: It feels incompatible..
<niemeyer> hazmat_: If we crash on errors, when do we start fixing things so that they handle errors?  They won't receive the error because it's crashing
<hazmat_> a global error handler can implement whatever logic we want, be it rebroadcasting a local exception to all components, etc
<hazmat_> niemeyer: a crash/restart error handler would just be a first step, its an incremental solution
<niemeyer> hazmat_: It's not incremental.. it's stopping the application completely.. we can't implement incremental error handling if the first step is to crash the application entirely.
<hazmat_> the first step is to reconnect the application
<hazmat_> the application is effectively dead without that
<niemeyer> hazmat_: Ok, you're right.  Let's move forward with the full reconnection then.
<hazmat_> niemeyer: i definitely think we can evolve it to better things in the future
<niemeyer> hazmat_: Let's keep it as simple as possible.. no additional abstraction layers
<hazmat_> niemeyer: sounds good to me
<niemeyer> hazmat_: Let's have something that at least is able to recover itself automatically
<hazmat_> niemeyer: recover itself?
<hazmat_> niemeyer: could you be more specific?
<niemeyer> hazmat_: Yeah, restart, reconnect, re-setup
<hazmat_> ah.. yeah
<niemeyer> hazmat_: and let's work towards making that a rare event as much as possible
<niemeyer> hazmat_: Increased timeouts, then resilient zookeeper nodes, etc
<hazmat_> niemeyer: yeah.. the migration is pretty transparent and fast for failovers
<hazmat_> i've been running cluster tests the last few days
<niemeyer> hazmat_: Ensemble 2.0 can then take everything we learned and be better at that.
<niemeyer> hazmat_: Oh, sweet
<_mup_> ensemble/debug-log-relation-settings-changes r260 committed by jim.baker@canonical.com
<_mup_> Used namedtuple
<niemeyer> Will get dinner
#ubuntu-ensemble 2011-06-18
<SpamapS> kirkland: thanks, no, I will have a new upload on Monday.
<kirkland> SpamapS: okey
<SpamapS> niemeyer: For the copyright thing, I'm almost done with the branch to add copyright/license to all relevant files.
<_mup_> ensemble/debug-log-relation-settings-changes r261 committed by jim.baker@canonical.com
<_mup_> Fixed broken test_spawn_cli_set_can_delete, wasn't testing what it claimed to be testing
<_mup_> ensemble/debug-log-relation-settings-changes r262 committed by jim.baker@canonical.com
<_mup_> Fix docstring
<_mup_> ensemble/debug-log-relation-settings-changes r263 committed by jim.baker@canonical.com
<_mup_> Verify that YAMLState.write returns a list of all changes
<_mup_> ensemble/debug-log-relation-settings-changes r264 committed by jim.baker@canonical.com
<_mup_> Provide guaranteed ordering on change items for YAMLState
<_mup_> ensemble/debug-log-relation-settings-changes r265 committed by jim.baker@canonical.com
<_mup_> Follow Python naming convention, AnItem, not AnEntry for dict items
 * hazmat_ yawns
<_mup_> txzookeeper/swap-sync-errors-to-failures r28 committed by kapil.foss@gmail.com
<_mup_> merge trunk
<_mup_> txzookeeper/session-event-handling r45 committed by kapil.foss@gmail.com
<_mup_> - Allow for the ZK cluster to be reset and destroyed.
<_mup_> - Session tests using a ZK cluster as a test layer/test resource, cluster state
<_mup_>   is reset between tests.
<_mup_> - Zookeeper Client session test for server rotation across multi-node cluster.
<_mup_> - ClientEvent repr now includes pretty name for connection state.
<_mup_> - Client server rotation and session/watch migration tests.
<_mup_> - Session event handling, sent to user defined callback, else ignored by default.
<_mup_> - Connection loss handling, sent to user defined callback, else raised at the
<_mup_>   API call point. The callback result if any is returned as the error to be
<_mup_>   returned to the API.
