[09:19] <koolhead17> hi all
[09:56] <rog> koolhead17: hi
[09:56] <koolhead17> hi rog
[13:16] <niemeyer> Good morning charmers and jujuers
[13:17] <koolhead17> hi niemeyer
[13:29] <rog> hey gustavo!
[13:32] <niemeyer> koolhead17, rog: Yo
[13:33] <rog> niemeyer: the gozk server-starting code is working, BTW. i'm just making the test code use it.
[13:43] <niemeyer> rog: Woohay!
[13:46] <rog> things are always easier when you're adapting some other code. for the difficult bits you do the same and blame the other code; for the rest you can simplify according to taste.
[13:46] <niemeyer> rog: :-)
[13:49] <rog> niemeyer: BTW in gocheck, is it ok if C.Log is called concurrently?
[13:51] <niemeyer> rog: It is
[13:51] <niemeyer> rog: I actually intend to support parallel tests as well
[13:52] <niemeyer> rog: Well.. hmm.. I'm not entirely sure about that right now
[13:52] <niemeyer> rog: It can be called concurrently across multiple tests, but I don't recall if it's atomic, which is likely your actual question
[13:53] <rog> i'd just come to the opposite conclusion, yeah
[13:53] <rog> i wanted to use it for the logging output of the server.
[13:53] <rog> rather than sending to the rather obscure stdout.txt
[13:54] <niemeyer> rog: I've used it that way often.. there's already a trivial logger wrapper in the suite
[13:54] <niemeyer> rog: Hmm.. maybe not in that one
[13:54] <niemeyer> rog: Hold on
[13:56] <niemeyer> rog: http://paste.ubuntu.com/695648/
[13:56] <niemeyer> rog: I use this in a few suites already
[13:56] <niemeyer> rog: E.g. mgo.SetLogger((*cLogger)(c))
[13:57] <rog> erm, that doesn't help the atomicity requirement
[13:57] <rog> AFAICS
[13:57] <niemeyer> rog: Sure.. this should be done within gocheck
[13:58] <niemeyer> rog: But that's a red herring right now
[13:58] <niemeyer> rog: Nothing will explode if lines are logged concurrently
[13:58] <rog> it might, because bytes.Buffer won't like it
[13:58] <niemeyer> rog: Heh
[13:59] <niemeyer> rog: Just move on.. I'll address this in gocheck later
[13:59] <rog> ok, i'll put a lock outside
[13:59] <niemeyer> rog: Man.. ignore the problem until you have it.. :)
[14:00] <niemeyer> rog: gocheck can easily be fixed
[14:00]  * rog prefers to fix concurrency issues before they heisenbug
[14:01] <rog> but actually we should be ok with GOMAXPROCS=1
[14:01] <niemeyer> rog: I'm telling you I'll fix the problem in gocheck. You don't have a problem.
[14:02] <rog> ok, cool
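The "lock outside" workaround rog considers can be sketched as follows. gozk and gocheck are Go projects; this is only a Python illustration of the pattern (`SafeLogBuffer` is a made-up name), showing how a single mutex makes a shared log buffer safe for concurrent writers:

```python
import threading

class SafeLogBuffer:
    """Append-only log buffer guarded by a lock, so concurrent writers
    (e.g. a test server's log output) can't corrupt the shared storage."""
    def __init__(self):
        self._lock = threading.Lock()
        self._lines = []

    def log(self, line):
        with self._lock:          # serialize all appends
            self._lines.append(line)

    def contents(self):
        with self._lock:          # snapshot under the same lock
            return list(self._lines)

# Hammer the buffer from several threads; every line must survive.
buf = SafeLogBuffer()
threads = [threading.Thread(target=lambda: [buf.log("x") for _ in range(1000)])
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert len(buf.contents()) == 8000
```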
[14:22] <niemeyer> hazmat: Just +1d machine-with-type, with a couple of trivial suggestions.. let me know what you think please
[14:25] <niemeyer> jimbaker: ping
[14:28]  * rog has some failing tests now, hurray!
[14:34] <niemeyer> rog: Hah :-)
[14:34] <niemeyer> rog: Good stuff
[14:44] <rog> niemeyer: the test in TestRetryChangeFailsSetting is failing because RetryChange is returning a nil error. why should RetryChange be failing here?
[14:44] <rog> (not yet familiar with zk retry semantics i'm afraid)
[14:44] <niemeyer> rog: Have you changed it?
[14:45] <niemeyer> rog: The documentation for the function is pretty good, btw
[14:50] <rog> no, i haven't changed it AFAIK. and i just noticed the read-only thing going on above. hmm.
[14:51] <rog> (that's one of only two tests that are failing now)
[14:52] <niemeyer> rog: Yeah, but I suspect this is related to your changes in the error handling
[14:52] <niemeyer> rog: Have you introduced those already?
[14:52] <niemeyer> rog: How is the test failing?
[14:55] <rog> i think you're right. RetryChange is succeeding inappropriately
[14:57] <rog> the only other test that fails is TestExists, where Exists of "/zookeeper" is returning 0 children, not 1 as expected.
[14:57] <jimbaker> niemeyer, hi
[14:57] <avoine> hello
[14:59] <niemeyer> avoine, jimbaker: Heys!
[14:59] <niemeyer> rog: That could mean a distinction in the zk versions.. perhaps
[15:00] <niemeyer> rog: /zookeeper is a default node, so it's not a great assertion
[15:00] <niemeyer> jimbaker: Hey!
[15:00] <niemeyer> jimbaker: How're things going there?
[15:00] <jimbaker> correct about /zookeeper, it's a very modest ping
[15:01] <jimbaker> niemeyer, my daughter just turned 11, which is very cool
[15:01] <rog> ok
[15:02] <jimbaker> rog, if you are looking for a node that says something about juju, /topology or /initialized is probably what you want
[15:03] <niemeyer> jimbaker: That's very cool indeed
[15:03] <niemeyer> jimbaker: No, ECONTEXT :)
[15:03] <niemeyer> jimbaker: He's working on gozk
[15:03] <jimbaker> ah, one level down
[15:07] <rog> ah, i did change RetryChange quite a bit, must have got my code transformation wrong...
[15:09] <niemeyer> rog: Hmm..
[15:10] <niemeyer> rog: I hope you're planning to provide several independent merge proposals..
[15:10] <rog> niemeyer: gotcha!
[15:11] <rog> niemeyer: i'm hoping that it's fairly obvious
[15:11] <niemeyer> rog: Yeah, but several obvious things make up for a gigantic branch and several bugs
[15:11] <rog> yeah, i should probably separate out the server stuff
[15:12] <rog> but i couldn't test the other stuff until i'd put it in...
[15:12] <niemeyer> rog: I wasn't expecting the things we talked about to break RetryChange, for instance
[15:12] <rog> the error stuff changed things quite a bit
[15:12] <rog> for instance, error comparison became simpler
[15:12] <niemeyer> rog: Still, it sounds relatively trivial..
[15:12] <rog> which allowed RetryChange to become simpler
[15:13] <niemeyer> rog: Cool.. simpler is great
[15:14] <rog> 6 lines shorter
[15:14] <rog> (and easier to understand, i'd contend, but YMMV :-])
[15:15] <rog> anyway, i've found the bug
[15:15] <rog> (i think!)
[15:17] <niemeyer> and broken!
[15:17]  * niemeyer hides
[15:18] <niemeyer> jimbaker: So, what have you been working on?
[15:18] <rog> return nil -> return err
[15:18] <rog> that was the bug
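The bug class is easy to sketch. The real fix was in gozk's Go code; this hypothetical Python version of a RetryChange-style helper (names are illustrative) shows how the equivalent of `return nil` on the exhausted-retries path would report success even though every attempt failed:

```python
class Conflict(Exception):
    """Stand-in for a ZooKeeper version-conflict error."""

def retry_change(apply_change, max_tries=3):
    err = None
    for _ in range(max_tries):
        try:
            return apply_change()
        except Conflict as e:
            err = e      # concurrent modification: retry
        # any other exception propagates immediately
    # Exhausted retries: the buggy version did the equivalent of
    # 'return nil' here; the fix is to report the last error.
    raise err

def always_conflicts():
    raise Conflict("node changed underneath us")

# A change that never succeeds must surface its error to the caller.
try:
    retry_change(always_conflicts)
    failed = False
except Conflict:
    failed = True
assert failed
```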
[15:18] <niemeyer> rog: Was joking :)
[15:18] <rog> anyway, the only failing test now is the number of nodes under /zookeeper
[15:18] <rog> :-)
[15:18] <rog> have you got an alternative suggestion for a test?
[15:19] <niemeyer> rog: I'm pretty certain this is a bad expectation from the test
[15:19] <niemeyer> rog: Let's set up a node of our own
[15:19] <rog> yeah
[15:19] <rog> i was gonna suggest that
[15:19] <niemeyer> rog: Create /node and /node/subnode
[15:19] <niemeyer> rog: and check that instead
[15:19] <jimbaker> niemeyer, i've been looking at the lxc work and trying to get that to work for me so i can review some of the branches in queue. i've also been finalizing env-origin. i really need to get the unittests fix in as well
[15:20] <niemeyer> jimbaker: Ok, great.. we don't have a lot of time as you know, so please leave the lxc stuff for hazmat and bcsaller and focus on the things on your plate at the moment
[15:20] <jimbaker> niemeyer, ok
[15:20] <rog> niemeyer: to be honest, NumChildren is checked elsewhere. we could just remove that assert from the TestExists test and all would be ok.
[15:21] <niemeyer> rog: Works too
[15:24] <rog> passes all tests. woo. time for lunch.
[15:25]  * hazmat wakes up and grabs a coffee and finishes review on fwereade series branch
[15:25] <niemeyer> hazmat: Morning!
[15:25] <hazmat> jimbaker, if you could fix the help output for set config to be a one-liner when listing commands that would be helpful
[15:26] <hazmat> its currently the only one that spews on juju --help.. if you switch it to the other help value it only shows on juju set --help
[15:26] <jimbaker> hazmat, ok
[15:27] <hazmat> jimbaker, i'm curious what issues you had with the lxc stuff though, did you get a traceback?
[15:27] <fwereade> cheers hazmat :)
[15:27] <hazmat> the tests for it should run if you have libvirt-bin and lxc running
[15:27] <hazmat> s/running/installed
[15:29] <jimbaker> hazmat, sure, one moment
[15:30] <niemeyer> Lunch here.. biab
[15:36] <jimbaker> hazmat, here's the result from running one test - http://pastebin.ubuntu.com/695695/
[15:37] <jimbaker> hazmat, originally i was trying this against bcsaller's branch for using apt-cacher-ng, but i then decided to step back and verify against trunk. i am running oneiric, beta 2
[15:38] <jimbaker> to my eye, this sort of spew is basically a simple setup problem
[15:38] <fwereade> hazmat: re [3], good point
[15:39] <fwereade> hazmat: I guess it's not too much work to add a new series codename to the list every 6 months :)
[15:39] <fwereade> hazmat: I remember that from the first review now, sorry I missed it before
[15:39] <jimbaker> hazmat, also are there any docs on working w/ lxc?
[15:45] <fwereade> hazmat: it does seem to me that the right point to validate the series is at environment load time, though, given that it's an environment property
[15:47] <fwereade> hazmat: ...and I've suddenly changed my mind, mainly because I like using made-up series names in tests
[15:49] <fwereade> hazmat: I think it's a matter of ensuring we wrap the errors nicely, to make it clear what the problem is, rather than restricting the acceptable strings and having to change them ourselves (even if it is only every 6 months)
[15:51] <rog> niemeyer: i've sent the branch to you for review... i think. still getting to grips with the bzr/lp stuff.
[15:51] <bcsaller> jimbaker: it looks like its trying to install the distro package of juju. I don't think its in the distro yet, right? That should be set to ppa for now I expect
[15:53] <jimbaker> bcsaller, what do we need to do to config then for this? i'm just running the test against trunk
[15:54] <bcsaller> jimbaker: juju/lib/lxc/tests/test_lxc.py   s/distro/ppa should only come up one place
[15:56] <hazmat> fwereade, that sounds good, but its not clear how for example orchestra could validate that value
[15:57] <hazmat> fwereade, for lxc the error from incorrect value is coming from a script that's also setting up the container, and is pretty hard to distinguish apriori from other installation errors
[15:57] <fwereade> hazmat: well, I think it comes down to whether or not cobbler has images with the appropriate codename available
[15:57] <fwereade> hazmat: and that's something we can check
[15:57] <fwereade> hazmat: (er, in theory, we don't do anything like that yet: it's oneiric or nothing)
[15:58] <hazmat> fwereade, i guess the question is can we unambiguously report such an error, or do we need to provide a list
[15:59] <jimbaker>  bcsaller, does this mean that lxc testing currently has a dependency on juju being installed as a ppa? ie what is tested in the container is something outside of the branch and thus similar to what we see when executing on say ec2?
[15:59] <fwereade> hazmat: I think we can, if nothing else because the supported-series list is a property of the cobbler setup
[16:00] <bcsaller> jimbaker: there is an origin policy like what is being worked on elsewhere. distro, ppa and branch, where does the container get its version of juju? it defaults to distro on trunk but the package isn't in the distro (though it should be after today?)
[16:01] <bcsaller> that was the only failure there though
[16:01] <fwereade> hazmat: us having a list including (say) puissant doesn't help if the administrator hasn't made images available
[16:01] <bcsaller> honestly the lxc-library branch does a bit more in the container and is more worth checking out than whats in trunk wrt lxc
[16:01] <fwereade> hazmat: so I actually think we have to go by what we can get from the environment
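fwereade's suggestion, validating against what the environment actually offers and wrapping the failure clearly rather than hard-coding a series list, might look like this sketch (`ProviderError`, `validate_series`, and the series names are illustrative, not juju's real API):

```python
class ProviderError(Exception):
    """Raised when the environment can't satisfy a request."""

def validate_series(series, available_series):
    # Check against what the provider actually offers, and wrap the
    # failure with a message that names both sides of the mismatch.
    if series not in available_series:
        raise ProviderError(
            "no images available for series %r (environment offers: %s)"
            % (series, ", ".join(sorted(available_series))))

# A made-up series name fails with a self-explanatory error.
try:
    validate_series("puissant", {"lucid", "oneiric"})
    msg = None
except ProviderError as e:
    msg = str(e)
assert msg is not None and "puissant" in msg
```

This keeps made-up series names usable in tests, since nothing is rejected until a real provider is asked for images.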
[16:02] <jimbaker> bcsaller, ok, i just wanted to follow up on what i was seeing. it now makes perfect sense and is even related to what i'm trying to finish up in terms of juju-origin
[16:02] <jimbaker> (env-origin branch)
[16:03] <bcsaller> jimbaker: yeah, the lxc stuff supports that in a very similar way to whats happening elsewhere. Honestly I hope we can just have cloudinit run the juju-create script as root in the container
[16:03] <jimbaker> bcsaller, seems like the right place to do it
[16:04] <bcsaller> jimbaker: its written in such a way that it sources its config from /etc/juju/juju.conf which can be written by any of the providers
[16:04] <bcsaller> and then we'd have common image building code
[16:15] <hazmat> fwereade, good point, sounds good
[16:16] <fwereade> hazmat: somewhat academic until we can select series in orchestra at all, ofc :)
[16:40] <jimbaker> hazmat, bcsaller - take a look at this lp:~jimbaker/juju/set-help-trivial - i tried to preserve the original text + some copy editing. but i think it could still stand some improvements. for example, we probably want concrete examples (or none at all)
[16:41] <jimbaker> hazmat, bcsaller, in particular, it distinguishes between the longer help message for doing juju set --help, vs the summary help with juju --help
[16:43] <bcsaller> jimbaker: agree that we need a better pattern for this
[16:43] <jimbaker> also the spacing is currently inconsistent. probably best to use textwrap to control, but manually. this is only necessary if we want a list of examples, otherwise we don't need RawDescriptionHelpFormatter, which is all-or-nothing for the subparser
[16:44] <jimbaker> bcsaller, agreed on that!
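The split jimbaker describes maps onto argparse's `help=` (the one-liner shown in the top-level `juju --help` listing) versus `description=` (the longer text shown by `juju set --help`). A minimal sketch, with made-up text and option names rather than juju's actual CLI:

```python
import argparse

parser = argparse.ArgumentParser(prog="juju")
sub = parser.add_subparsers(dest="command")
set_parser = sub.add_parser(
    "set",
    help="set service options",  # one-liner for 'juju --help'
    description="Set one or more configuration options for a service, "
                "given as name=value pairs.",  # full text for 'juju set --help'
)
set_parser.add_argument("pairs", nargs="+", metavar="name=value")

# Top-level help shows only the summary; subcommand help shows the description.
assert "set service options" in " ".join(parser.format_help().split())
assert "name=value pairs" in " ".join(set_parser.format_help().split())
```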
[16:53] <SpamapS> Gooooood morning juju team
[16:53] <SpamapS> Done yet? ;)
[16:56] <SpamapS> hazmat: is bug 829880 still open? I thought we'd worked around it somehow.
[16:56] <_mup_> Bug #829880: object store doesn't like key with '/'  <juju:Triaged by hazmat> <OpenStack Compute (nova):Confirmed> <ensemble (Ubuntu):New> < https://launchpad.net/bugs/829880 >
[17:12] <hazmat> SpamapS, that should be closed
[17:13] <hazmat> SpamapS, its not a bug against ensemble anymore, its still valid for openstack afaik
[17:13] <SpamapS> hazmat: right so Fix Released or Invalid ?
[17:14] <hazmat> SpamapS, just marked Fix Released against juju, for the ensemble package...
[17:15] <koolhead17> ./
[17:19] <niemeyer> rog: ping
[17:21] <niemeyer> rog: Review for gozk delivered
[17:21] <niemeyer> biab
[17:22] <koolhead17> SpamapS: hello
[17:24] <rog> niemeyer: where do i find your review?
[17:24] <jimbaker> hazmat, niemeyer - i would like to verify that we want the user to be able to specify an arbitrary PPA? just wanted to understand comments re the logic being preferred in the trunk version of cloudinit
[17:26] <rog> niemeyer: ah, slow mail fetch
[17:26] <rog> thanks
[17:34] <SpamapS> koolhead17: hey
[17:35] <SpamapS> koolhead17: I noticed you applied for charmers.. but you haven't submitted any charms.
[17:36] <koolhead17> SpamapS: i did one dotproject
[17:37] <koolhead17> but i don`t think its of much use as its not getting done from the oneiric repo
[17:37] <SpamapS> koolhead17: You need to submit a 'new-charm' bug with it so we can review it. :)
[17:38] <koolhead17> SpamapS: yes sir!! :D
[17:38] <koolhead17> SpamapS: https://bugs.launchpad.net/debian/+source/dbconfig-common/+bug/807038
[17:39] <_mup_> Bug #807038: dbconfig-common fails to preseed phpmyadmin on natty/lucid <dbconfig-common> <phpmyadmin> <dbconfig-common (Ubuntu):New> <dbconfig-common (Debian):New> < https://launchpad.net/bugs/807038 >
[17:39] <koolhead17> i want some one to shpw little love to this :D
[17:39] <koolhead17> *show
[17:40] <SpamapS> koolhead17: yeah, dbconfig-common is a little bit weird.. :-/
[17:40] <koolhead17> SpamapS: that is one reason am skipping  our repository :D
[17:41] <koolhead17> i want some bugs which does not require dbconfig-common
[17:41] <koolhead17> *charms in wishlist
[17:46] <SpamapS> koolhead17: huh? you can just write your own config file, you don't have to use dbconfig-common's values
[17:47] <koolhead17> SpamapS: ooh!! i was dumb all this while then. will dig up on it later once am home. thanks for that. :)
[17:48] <SpamapS> koolhead17: your dotproject charm looks cool tho
[17:49]  * koolhead17 pats himself for few sec :D
[17:49] <niemeyer> jimbaker: That's not critical right now
[17:49] <SpamapS> koolhead17: https://launchpad.net/charm/+filebug .. create a 'new-charm' bug and I'll take a look. I usually don't grant membership in charmers until somebody has contributed something significant.. that charm would count. :)
[17:50] <koolhead17> SpamapS: so can i call myself a charmer now!! :D
[17:54] <koolhead17> SpamapS: thanks. i will add few more in the list !! :D
[17:54] <SpamapS> koolhead17: you can call yourself sally if you want. We'll call you a charmer when you have a charm in the collection. :)
[17:55] <SpamapS> koolhead17: and btw, thanks for the participation and enthusiasm
[17:55]  * koolhead17 bows to SpamapS 
[17:55] <koolhead17> ok catch you guys from home
[17:55] <jimbaker> niemeyer, you mean, we only need to support the standard juju ppa at this time, correct?
[17:58] <niemeyer> jimbaker: Right.. that's the problem we have to solve right now. distro/ppa/branch
[17:58] <jimbaker> niemeyer, sounds good
[17:59] <niemeyer> rog: The numbering of reviews is a convention we follow for a while so that we can exchange ideas about points
[17:59] <niemeyer> rog: "Ah, what about [3] on bar"
[18:00] <rog> ah ok.
[18:00] <rog> i do miss Rietveld. i was toying with the idea of using the bzr plugin to that.
[18:01] <rog> then i thought, no, get used to it!
[18:02] <niemeyer> rog: Yeah, .. well, kind of..
[18:02] <niemeyer> rog: There's a third option: helping to convince the LP folks to improve it :)
[18:03] <rog> niemeyer: from what you were saying, it sounds like they've got something in the works
[18:03] <niemeyer> rog: Yeah, there's significant UI changes coming
[18:03] <niemeyer> rog: Not sure about reviews, specifically
[18:03] <niemeyer> rog: But once these changes land, we'll be in a better position to request some improvements on the area
[18:04] <rog> the reviews are really important. i think the Go community would not be nearly so involved without the codereview site
[18:04] <niemeyer> rog: LP is developed within LP as well, so it's not like they'll resist the idea of making it better :)
[18:04] <niemeyer> rog: Agreed
[18:10] <rog> niemeyer: they might not be into the whole code review thing though
[18:10] <SpamapS> is Rietveld a review tool?
[18:10] <SpamapS> never heard of it
[18:11] <niemeyer> rog: Every branch in Canonical going into Launchpad, Landscape, U1, juju, etc.. is reviewed before merging, and it's been like that since the beginning of times :)
[18:12] <niemeyer> rog: Well before Rietveld even existed
[18:12] <SpamapS> btw, whats the word on the 'ensemble:formula' in charms? if its not there, will anything break?
[18:13] <niemeyer> SpamapS: No
[18:13] <rog> SpamapS: see codereview.appspot.com
[18:13] <niemeyer> SpamapS: juju removes the need for headers
[18:13] <SpamapS> niemeyer: cool.. going to do a mass update on charm once juju is uploaded.
[18:14] <SpamapS> niemeyer: what about $ENSEMBLE_REMOTE_UNIT ?
[18:14] <niemeyer> SpamapS: JUJU_*
[18:14] <niemeyer> SpamapS: Please see the email I sent to the list.. it details all the compatible and incompatible changes
[18:14] <niemeyer> SpamapS: and it's a nice guide for the mass update
[18:15] <SpamapS> Oh I must have missed the full details
[18:15] <SpamapS> Been a few long days since then :p
[18:16] <rog> niemeyer: just addressed the code review points
[18:16] <rog> (and replied)
[18:17] <rog> (i'm presuming that replying to the sender address also sends an email to you)
[18:17] <niemeyer> rog: Yeah
[18:17] <niemeyer> rog: Thanks, will check it out
[18:17] <rog> i'm done for the day. see ya monday.
[18:24] <_mup_> juju/machine-with-type r366 committed by kapil.thangavelu@canonical.com
[18:24] <_mup_> address review comments, use hyphens for settings key values
[18:24] <niemeyer> rog: Have a good weekend!
[18:24] <rog> niemeyer: will do. mmm curry.
[18:27] <_mup_> juju/trunk r358 committed by kapil.thangavelu@canonical.com
[18:27] <_mup_> merge machine-with-type [r=niemeyer][f=854230]
[18:27] <_mup_> An environment now stores the machine provider-type as a settings property.
[18:38] <hazmat> ugh.. i broke a test on trunk
[18:40] <SpamapS> thats the developer equivalent of a cheerleader dropping the spirit stick
[18:43] <niemeyer> LOL
[18:48] <hazmat> niemeyer, we still need the lxc-library-clone to get reviewed and merged for the local dev stuff to work fully
[18:48] <hazmat> bcsaller, any updates on it?
[18:48] <niemeyer> hazmat: Cool, that's at the top of my review pipeline right now actually
[18:49] <bcsaller> hazmat: I think its working where it is, wasn't sure from the conversation here if you were ok with the changes to __init__ and clone yet though
[18:51] <hazmat> bcsaller, oh i didn't see your followup.. having  a look now
[19:13] <AlanBell> hi all
[19:14] <koolhead17> hello AlanBell
[19:15] <AlanBell> so if I set up an application like the wordpress example using juju over several instances, say one database server and 3 web servers doing the PHP stuff, the example ends up with a plain unthemed wordpress. What happens when I want to install a theme?
[19:15] <AlanBell> do I have to do that on all the web servers manually and manage the sync of that myself?
[19:15] <kim0> AlanBell: nope
[19:15] <kim0> AlanBell: you should relate to a shared storage service (such as NFS)
[19:15] <kim0> which will handle that for you
[19:16] <kim0> assuming the wordpress formula is written to use that
[19:16] <kim0> there are other shared storage services .. ceph, gluster ...etc which I hope someone will charm soon :)
[19:17] <AlanBell> ok
[19:17] <AlanBell> and wordpress is about the least appropriate example I can think of to run on juju, but it serves as a technical example!
[19:18] <kim0> yeah! it's actually not that bad :)
[19:18] <kim0> when you get slashdotted .. you'll want to scale it up :)
[19:18] <koolhead17> kim0: we had drupal one somewhere
[19:18] <koolhead17> as well
[19:18] <AlanBell> meh, been slashdotted with wordpress running on a smallish vps
[19:19] <AlanBell> install supercache, slashdot away at will
[19:19] <kim0> woot .. slashdot must be losing in popularity :)
[19:19] <AlanBell> I have also been knocked over by slashdot in the past :)
[19:19] <AlanBell> but it only lasts a day anyway
[19:22] <AlanBell> mediawiki is a slightly more realistic example, but things don't start out needing a multi-server architecture
[19:23] <AlanBell> you would have a single server mediawiki that has grown over time and now has thousands of users and the server is beginning to creak a bit
[19:23] <AlanBell> then you want to split out the database and add multiple web servers etc
[19:24] <kim0> AlanBell: yeah .. the idea is about the management model of services and how they relate to each other
[19:24] <kim0> how they all collaborate towards an end state
[19:24] <kim0> that's the beauty
[19:24] <kim0> having one service tied to being one ec2 node, is only a current limitation
[19:25] <kim0> once we have LXC working (super soon) .. my understanding is that we'll be able to run multiple services on one box
[19:25] <AlanBell> so you could do a scaleable architecture starting on a single node?
[19:25] <kim0> which could even be your laptop
[19:25] <kim0> yep
[19:25] <AlanBell> that would make sense
[19:26] <AlanBell> otherwise include a migration story
[19:26] <kim0> hazmat: any idea when the local dev (lxc) story is landing please
[19:29] <hazmat> kim0, as soon as possible, but no sooner ;-) .. more seriously we've got a few branches in flight that should land hopefully today.. that will setup the basics, but there's some additional work to make things work across all the ensemble commands (ssh, debug-hooks) which won't land till next week realistically
[19:30] <kim0> hazmat: you said the e word
[19:30] <hazmat> mostly it relates to the lxc containers (service units) will have distinct ip addresses that need to be reported back to the end user
[19:30] <hazmat> kim0, i like the e word ;-)
[19:30] <kim0> haha :) thanks a lot
[19:30]  * hazmat wonders if he can script an auto replace for that into his irc client
[19:31] <kim0> juju
[19:31] <kim0> :)
[19:31] <koolhead17> kim0: charms :P
[19:31] <koolhead17> hazmat: hey
[19:32] <hazmat> koolhead17, hi
[19:38] <koolhead17> hazmat: how have you been?
[19:57] <niemeyer> hazmat, bcsaller: Review delivered for https://code.launchpad.net/~bcsaller/juju/lxc-library-clone/+merge/75833
[19:57] <hazmat> koolhead17, busy
[19:57] <hazmat> koolhead17, looking forward to going home this weekend, its been a long travel the last few weeks
[19:58] <bcsaller> niemeyer: thanks
[19:58] <niemeyer> bcsaller: Please let me know if you'd like to talk online about any of the points there
[19:58] <_mup_> juju/config-juju-origin r357 committed by jim.baker@canonical.com
[19:58] <_mup_> Initial commit
[19:58] <niemeyer> bcsaller: Will be happy to help getting this in faster in any way possible
[19:59] <koolhead17> hazmat: awesome.  :D
[20:32] <_mup_> juju/local-unit-deploy r387 committed by kapil.thangavelu@canonical.com
[20:32] <_mup_> address review comments, switch to explicit passing of unit namespace to the unit container deployment
[20:33] <niemeyer> hazmat: ping
[20:33] <hazmat> niemeyer, pong.. almost done getting local-unit-deploy ready for another look
[20:34] <cole> what's up jujuers
[20:34] <cole> question!
[20:35] <niemeyer> hazmat: Just answering your points on the merge proposal instead, for reference
[20:35] <hazmat> niemeyer, cool, thanks
[20:36] <cole> what's the current thinking re dynamic scaling with juju?
[20:37] <niemeyer> hazmat: Please see if it makes sense: https://code.launchpad.net/~niemeyer/juju/go-formula-bundle/+merge/75247
[20:37] <niemeyer> cole: juju add-unit FTW
[20:37] <niemeyer> cole: The whole system architecture was designed with that kind of use case in mind
[20:38] <niemeyer> cole: So it's pretty natural to do it, and we'll be adding more features as we go
[20:39] <cole> niemeyer: cool, one thing we are thinking of is extending m-collective to be able to talk to ensemble.
[20:39] <niemeyer> cole: Nice!
[20:39] <cole> but if there are plans to introduce that functionality…we won't bother.
[20:40] <cole> niemeyer: if that does sound interesting to you I'd be happy to set something up with Teyo.
[20:40] <hazmat> niemeyer, re the comment, so its not clear to me why we need a method to read the whole bundle into memory, if we just expose the file path.. the consumer of the bundle can do whatever is appropriate to its usage
[20:41] <niemeyer> cole: It depends on what's the functionality you're really looking for
[20:41] <niemeyer> hazmat: What file path?  If I have a bundle in memory I have no file path
[20:42] <hazmat> niemeyer, why would you have a bundle only in memory?
[20:42] <niemeyer> hazmat: Also, what you say is not entirely accurate
[20:42] <cole> niemeyer: basically let m-collective gather stats on a host and determine threshold and then (since m-collective has no knowledge of relationships) ask ensemble to go scale appropriately.
[20:42] <niemeyer> hazmat: This is not a method to read the whole bundle into memory
[20:42] <niemeyer> hazmat: This is a method that takes a sequence of bytes that _are_ in memory, and interprets them as a bundle
[20:42] <SpamapS> cole: I thought mcollective was just a message bus + some agents
[20:43] <niemeyer> hazmat: Because, e.g., it was read from a database
[20:43] <cole> SpamapS: it has the concept of gathering load / performance data.
[20:43] <niemeyer> cole: Yeah, I'd say don't bother
[20:43] <hazmat> cole, we'll need to expose an api endpoint on juju to make that sort of use case easier
[20:43] <niemeyer> cole: We'll eventually have that internally
[20:44] <cole> niemeyer:that's the answer I was hoping for!
[20:44] <niemeyer> cole: Unless you're desperate, of course
[20:44] <niemeyer> :)
[20:45] <cole> niemeyer: auto-scaling is a HUGE customer requirement and our default answer is always juju without telling them the auto part of scaling doesn't exist yet ;)
[20:45] <SpamapS> the auto part is impossible to define for any one workload
[20:45] <niemeyer> cole: Well, you're not saying something entirely off actually, even with the current state
[20:46] <SpamapS> What you need is an intelligent rules engine.
[20:46] <niemeyer> cole: The way that the system works right now actually does quite a lot of "auto" things that other systems can't do
[20:46] <SpamapS> good point..
[20:46] <niemeyer> cole: E.g. "juju add-unit appserver"
[20:46] <niemeyer> cole: That's really all it takes..
[20:46] <SpamapS> its the responsiveness thats missing, but the setting up and configuring things, that it already does.
[20:48] <SpamapS> cole: right now you could simply use regular graphing/monitoring and when appserver load is too high add-unit .. and when its too low, remove-unit ..
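SpamapS's monitoring-driven approach could be as simple as this sketch: an external loop samples a load metric and shells out to `juju add-unit` / `juju remove-unit`. The thresholds, the 0..1 load metric, and the `runner` hook are all illustrative assumptions, not part of juju:

```python
import subprocess

def scale_service(service, load, high=0.8, low=0.2, runner=subprocess.check_call):
    """Decide whether to scale a service from one load sample.

    juju handles the actual setup and teardown; this loop only
    decides *when* to invoke it.
    """
    if load > high:
        runner(["juju", "add-unit", service])
        return "added"
    if load < low:
        runner(["juju", "remove-unit", service])
        return "removed"
    return "steady"

# Exercise with a fake runner so no command actually executes.
calls = []
assert scale_service("appserver", 0.95, runner=calls.append) == "added"
assert scale_service("appserver", 0.50, runner=calls.append) == "steady"
assert calls == [["juju", "add-unit", "appserver"]]
```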
[20:49] <hazmat> niemeyer, the local-unit-deployment is ready for another look
[20:49] <niemeyer> hazmat: WOot
[20:52] <hazmat> niemeyer, back to formula-bundle-dir.. ah.. i understand the ReadAt method now.. that is kind of cool.. re the ReadBundleBytes, ic re the use case
[20:53] <niemeyer> hazmat: Yeah, that's the orthogonality between types and methods that feels so nicely done there
[20:54] <niemeyer> hazmat: Thanks for the review.. the bits stuff was great catch there
[20:54] <niemeyer> hazmat: Will have a look at this over the weekend
[20:54] <hazmat> niemeyer, cool
[20:59] <_mup_> juju/machine-with-type-merge r378 committed by kapil.thangavelu@canonical.com
[20:59] <_mup_> merge machine with type
[21:00] <_mup_> juju/provider-determines-placement r390 committed by kapil.thangavelu@canonical.com
[21:00] <_mup_> merge machine-with-type and resolve conflict
[21:01] <_mup_> juju/unit-with-private-ip r391 committed by kapil.thangavelu@canonical.com
[21:01] <_mup_> merge pipeline and resolve conflicts
[21:03] <_mup_> juju/local-unit-deploy r389 committed by kapil.thangavelu@canonical.com
[21:03] <_mup_> update initialization test to note that settings are now present
[21:09] <niemeyer> hazmat: So, units are under the namespace.. but what about everything else?
[21:09] <jimbaker> hazmat, re juju-origin, i have a rewritten version (in lp:~jimbaker/juju/config-juju-origin). however, one question i have is whether we should have the default origin be _DISTRO yet (using the names in juju.providers.common.cloudinit). right now, that wouldn't work
[21:09] <niemeyer> hazmat: e.g. zookeeper, log files, ...
[21:10] <niemeyer> jimbaker: Yes, it should be distro.. we have to release this code into Ubuntu.
[21:10] <hazmat> niemeyer, each env has its own zookeeper running on the host on a random port, the user can configure the storage for log, pids, via a config option data-dir in the environments.yaml
[21:11] <niemeyer> hazmat: Yeah, what I don't get is why are we namespacing a particular aspect, rather than having a root that contains everything
[21:11] <hazmat> jimbaker, yeah.. the distro installation should be sorted out shortly as soon as we have a s/ensemble/juju package downstream
[21:11] <jimbaker> niemeyer, ok, as is, we have an impasse until the release. it means between the point of merging this branch, assuming it is good, and the release, environments.yaml would need to point to the ppa with juju-origin to retain the current behavior
[21:12] <jimbaker> and have anything work at all
[21:12] <hazmat> niemeyer, effectively the data-dir is that root, but lxc itself wants a flat storage of containers into /var/lib/lxc
[21:12] <hazmat> hence the namespace value
[21:12] <hazmat> s/value/prefix
[21:12] <niemeyer> hazmat: I see.. but it looks like we're using a JUJU_HOME like:
[21:12] <niemeyer> juju_directory="/var/lib/juju"
[21:13] <hazmat> niemeyer, that's in the container for the unit agent.. for the machine agent juju home is the data dir configured in environments.yaml
[21:13] <niemeyer> Or maybe not.. /me looks
[21:14] <niemeyer> hazmat: I see.. hmmm
[21:15] <niemeyer> hazmat: It just feels like we're adding yet another concept there, which doesn't map to something concrete (a unit namespace)
[21:15] <jimbaker> niemeyer, should i submit this as a diff merge proposal and pull the one for env-origin? it's taking a different approach, much closer to trunk, given the simpler usage of ppa (and no deb repository), and the removal of juju-version
[21:15] <niemeyer> hazmat: Ok, question:
[21:16] <niemeyer> hazmat: Would it work if we had JUJU_CONTAINER=<the container name> instead, explicitly?
[21:16] <niemeyer> hazmat: Or is there any other reason to have the UNIT_NS information?
[21:16] <niemeyer> jimbaker: Hmm
[21:18] <hazmat> niemeyer, we're passing namespace to the machine agent which is creating multiple containers
[21:19] <aram> moin.
[21:25] <niemeyer> hazmat: Ok
[21:25] <niemeyer> hazmat: Still feel a bit strange about it.. feels like if two users try to use it there are clashing resources
[21:26] <hazmat> niemeyer, the namespace includes the user name.. where's the clash?
[21:27] <niemeyer> hazmat: Maybe it's just my impression.. let me see
[21:27] <niemeyer> +    def get_upstart_unit_job(self, machine_id, zookeeper_hosts):
[21:27] <niemeyer> +        """Return a string containing the  upstart job to start the unit agent.
[21:27] <niemeyer> +        """
[21:27] <niemeyer> +        environ = self.get_environment(machine_id, zookeeper_hosts)
[21:27] <niemeyer> +        # Keep qualified locations within the container for colo support
[21:27] <niemeyer> +        environ["JUJU_HOME"] = "/var/lib/juju"
[21:28] <niemeyer> Sorry.. longer than expected
[21:28] <niemeyer> hazmat: /var/lib/juju?
[21:28] <hazmat> niemeyer, that's for the unit agent in the container
[21:29] <niemeyer> hazmat: Will this path be used in any way?
[21:29] <hazmat> niemeyer, it will be used in the same manner its used for non local dev
[21:29] <niemeyer> hazmat: Right, but if there are two different environments for two different users, will they conflict given that the setting is the same, or is it ok?
[21:29] <hazmat> ie.. the unit agent will store pid, log files there
[21:30] <hazmat> niemeyer, those paths are inside the unit container
[21:30] <niemeyer> hazmat: Ah, inside, ok
[21:31] <hazmat> actually i switched the unit agent in the container to log to /var/log/juju  so its mostly just workflow state that's in juju home..
[21:31] <niemeyer> hazmat: Cool
[21:32] <niemeyer> hazmat: Somewhat unrelated,
[21:32] <niemeyer> hazmat: Shouldn't this depend on a setting:
[21:32] <niemeyer> +            {"JUJU_ORIGIN": "ppa",
[21:32] <niemeyer> +             "JUJU_SOURCE": "ppa:juju/ppa"},
[21:32] <niemeyer> hazmat: It's also wrong, btw.. ppa:juju/pkgs
[21:32] <hazmat> niemeyer, aha, i was wondering why that failed.. there's a number of fixes for the origin work in bcsaller lxc-library-clone work
[21:33] <niemeyer> hazmat: Cool
[21:33] <hazmat> i ended up just merging his work when i want to test the whole thing functionally at the end of the pipeline
[21:34] <hazmat> also fwiw i'm going to be attending my son's track meet for the next 2.5 hrs.. i'm taking a hotspot with me.. but the connectivity might be spotty
[21:36] <niemeyer> hazmat: Alright! Let's move forward with this then!
[21:36] <hazmat> niemeyer, sweet!
[21:37] <niemeyer> hazmat: I feel like stuff is going over my head due to the heavy system interaction
[21:37] <niemeyer> hazmat: But what really matters at this point is that this is working, and it's a base we can improve on
[21:37] <hazmat> niemeyer, its pretty cool when you see it working
[21:37] <hazmat> bcsaller, any chance you'll be able to address the lxc-clone-lib review items today?
[21:38]  * hazmat steps out for a few
[21:39] <niemeyer> I'll get some coffee too
[21:39] <bcsaller> hazmat: fully expect to
[21:55] <niemeyer> jimbaker: ping
[21:55] <jimbaker> niemeyer, hi
[21:55] <niemeyer> jimbaker: So
[21:56] <niemeyer> jimbaker: What we can try to do is to detect the default behavior, in case the option is missing
[21:56] <jimbaker> niemeyer, not certain what you mean
[21:57] <niemeyer> jimbaker: A good way to do it is checking if there's an installed package that comes from the ppa
[21:58] <niemeyer> jimbaker: Check the output of "apt-cache policy juju"
[21:59] <niemeyer> jimbaker: Use the following heuristics:
[22:00] <jimbaker> niemeyer, hmmm... so you mean, implement the following workaround: as part of the cloud init, run a script that tests for a necessary dependency (so checking output above), and if not there, adds the repository
[22:00] <niemeyer> 1) Grab the installed version from the "Installed:" line
[22:01] <niemeyer> 2) If this is "(none)" return "branch"
[22:01] <niemeyer> 3) Otherwise, store the version in a variable, and keep parsing
[22:02] <niemeyer> 4) Find the version table, and find the installed version
[22:03] <niemeyer> 5) For each line in the given version, if it contains "ppa.launchpad.net/juju/pkgs", return "ppa"
[22:03] <niemeyer> 6) If you reach the end of the installed version, return "distro"
[22:04] <niemeyer> jimbaker: name the function get_default_origin()
[22:04] <niemeyer> jimbaker: Rather than "branch", it should actually return a proper value for the origin setting
[22:04] <niemeyer> jimbaker: "lp:juju", I suppose?
[22:06] <jimbaker> niemeyer, lp:juju seems reasonable as a workaround, this would correspond to our old policy pre ppa
[22:06] <niemeyer> jimbaker: Cool.. I suppose that's a valid setting for "juju-origin: lp:juju" once your branch goes in, right?
[22:06] <jimbaker> niemeyer, yes, that's a valid juju-origin setting
[22:07] <niemeyer> jimbaker: Cool
[22:07] <niemeyer> jimbaker: In case of errors or unexpected input, always fall back to "distro"..
[22:07] <niemeyer> jimbaker: We can easily fix the ppa or trunk, but can't fix the distro
[22:07] <niemeyer> (that quickly)
[22:07] <jimbaker> ok, this seems like a reasonable workaround and helps the user run what they expect w/o having to set anything
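The six-step heuristic niemeyer describes above could be sketched roughly as follows. This is a hypothetical illustration, not the merged implementation: the function name `get_default_origin` comes from the discussion, the `apt-cache policy` output layout (the `***` marker on the installed version, indented source lines) is the usual apt format, and the `"lp:juju"` fallback for an unpackaged install follows the agreement a few lines up.

```python
def get_default_origin(policy_output):
    """Guess the default juju origin from `apt-cache policy juju` output.

    Heuristic from the discussion above:
      1. Read the installed version from the "Installed:" line.
      2. "(none)" means no package is installed -> run from the branch.
      3. Otherwise scan the version table for the installed version
         (apt-cache marks it with "***").
      4. If any of its source lines mention the juju PPA, return "ppa".
      5. Reaching the end without a PPA match means "distro".
    On errors or unexpected input, always fall back to "distro".
    """
    try:
        installed = None
        for line in policy_output.splitlines():
            stripped = line.strip()
            if stripped.startswith("Installed:"):
                installed = stripped.split(":", 1)[1].strip()
                break
        if installed is None:
            return "distro"  # unexpected input: safest default
        if installed == "(none)":
            return "lp:juju"  # not packaged: use the branch
        in_installed = False
        for line in policy_output.splitlines():
            if line.lstrip().startswith("***"):
                # "***" marks the installed version's entry in the table
                in_installed = True
                continue
            if in_installed:
                if "ppa.launchpad.net/juju/pkgs" in line:
                    return "ppa"
                if not line.startswith("        "):
                    # less-indented line: left the installed version's sources
                    in_installed = False
        return "distro"
    except Exception:
        return "distro"
```

The blanket `except` mirrors the "in case of errors, always fall back to distro" rule: a wrong guess of `distro` is recoverable, a wrong guess of `ppa` or branch may not be.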
[22:08] <niemeyer> jimbaker: Super
[22:08] <niemeyer> SpamapS, m_3: Any comments on the heuristics?
[22:09] <niemeyer> jimbaker: Also, please log a line on "bootstrap" pointing out the origin being used, so the user is aware of it
[22:09] <SpamapS> niemeyer: readnig
[22:09] <SpamapS> reading even
[22:09] <jimbaker> niemeyer, sounds good
[22:12] <SpamapS> niemeyer: +1 looks good, there's a python library for that apt information.. but execing apt-cache policy is probably simpler for the immediate needs.
[22:12] <niemeyer> SpamapS: and more portable as well
[22:12] <niemeyer> SpamapS: Cool, thanks
[22:13] <niemeyer> Alright folks
[22:13] <niemeyer> I'm going to step out for some time.. will be working a bit over the weekend to push things forward
[22:13] <SpamapS> niemeyer: one final bit, you may want to verify that the juju thats running is actually from the package..
[22:13] <niemeyer> SpamapS: Ah, good point
[22:14] <SpamapS> dpkg -S isn't fast, but for one boot-time check it might be worth not picking the wrong behavior when something else has pulled in juju for some other reason.
[22:15] <niemeyer> jimbaker: At the forefront of the function, check if juju.__file__ comes from /usr.. if it doesn't, we can use the branch as well
[22:16] <jimbaker> niemeyer, makes sense
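The `juju.__file__` guard niemeyer asks for might look like the helper below. This is a sketch under stated assumptions: the helper name `is_packaged_module` is invented for illustration, and the caller would pass `juju.__file__`; the real branch may have structured it differently.

```python
import os

def is_packaged_module(module_file):
    """Return True if a module's __file__ lives under /usr, i.e. it was
    installed from a distro or PPA package rather than imported from a
    development branch checkout.

    Intended use (per the discussion): run this check at the top of
    get_default_origin() with juju.__file__; if it returns False, skip
    the apt-cache heuristic and use the branch as the origin.
    """
    return os.path.abspath(module_file).startswith("/usr/")
```

This avoids `dpkg -S` entirely: instead of asking dpkg which package owns the file, it only checks that the running code is under the packaged install prefix, which is cheaper for a boot-time check.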
[22:16] <niemeyer> Alright, I'm heading out.. have a good weekend folks!
[22:17] <niemeyer> or see you tomorrow :)
[22:21] <_mup_> juju/unittests r6 committed by jim.baker@canonical.com
[22:21] <_mup_> Allow churn to run standalone
[22:28] <zirpu> is juju only going to support AWS, or will it support other cloud providers in the future? rackspace, etc.
[22:30] <_mup_> juju/unittests r7 committed by jim.baker@canonical.com
[22:30] <_mup_> Better error handling
[22:32] <zirpu> arg.  charm-tools is busted.  /usr/bin/charm -> /build/buildd/charm-tools-0.2+bzr68~natty1/debian/charm-tools/usr/share/charm-tools/charm
[22:33] <zirpu> as is principia
[22:37] <SpamapS> zirpu: yeah I'm working on that right now actually
[22:38] <zirpu> ah. good. i won't bother trying to figure out where to open a bug then. :)
[22:38] <zirpu> didn't seem to be anything for charm-tools w/in juju. separate package?
[22:38] <SpamapS> it will work in about 10 minutes
[22:38] <SpamapS> charm-tools is a stop-gap until juju has native repo support
[22:39] <zirpu> ah. ok. thanks.
[22:40] <SpamapS> zirpu: and for your other question, aws is just the beginning. :)
[22:40] <zirpu> excellent!
[22:41] <SpamapS> zirpu: bare metal is already available via orchestra (documentation almost done ;)
[22:41] <SpamapS> zirpu: and I'm sure somebody will do another cloud provider if they need one. :)
[22:42] <SpamapS> zirpu: a new charm-tools is building now.. should be available in the ppa in the next 10 minutes or so
[22:43] <zirpu> cool. thanks.
[22:46] <zirpu> does orchestra use virtualbox/VMware/etc? or just for bare metal provisioning?
[22:56] <SpamapS> zirpu: orchestra is cobbler + ubuntuisms, and cobbler has some libvirt integration
[22:58] <SpamapS> zirpu: but really.. if you want to control VM's w/ it.. just use openstack
[23:07] <SpamapS> zirpu: in theory it will work w/ Eucalyptus too.. tho nobody's been testing that lately :p
[23:07] <SpamapS> hazmat: hey, is this the test you broke: juju.providers.ec2.tests.test_provider.ProviderTestCase.test_config_environment_extraction
[23:09] <zirpu> heh
[23:10] <SpamapS> zirpu: just curious, what were you wanting to do with juju?
[23:12] <zirpu> i'm just looking into ways of deploying to the various providers.  juju seems fairly clean so far.  i don't want to write my own set of scripts.
[23:12] <zirpu> i'll probably use openstack for the vbox server i have at home for testing.
[23:13] <SpamapS> zirpu: we want you to write your own set of scripts.. that are called by juju ;)
[23:13] <SpamapS> zirpu: the dev team is landing LXC container integration right now actually.. so you can do local dev work w/o VM's .. just little LXC containers.
[23:13] <zirpu> right. charms and environments right?
[23:13] <zirpu> ooh. lxc would be better.
[23:14] <SpamapS> yeah, charms are the way a service is defined. environments are runtime collections of services, relations, machines and charms
[23:14] <SpamapS> zirpu: eventually all the services will run inside LXC containers so its easier to re-use an available AWS machine w/o terminating/provisioning a new one.
[23:15] <zirpu> lxc on aws? :-) turtles! all the way down!
[23:15] <SpamapS> LXC works great on AWS
[23:15] <SpamapS> its how we've been testing openstack .. :)
[23:15] <SpamapS> or at least, one way
[23:16] <SpamapS> zirpu: btw, charm-tools works now
[23:16] <zirpu> cool. i'll upgrade.
[23:16] <zirpu> about to head out to dinner. will work on it later though.
[23:17] <zirpu> i want to build a replica set of mongodb's to test failover and such. that's my current test app.
[23:18] <SpamapS> zirpu: lp:charm/oneiric/mongodb .. :)
[23:20] <zirpu> yep, already looked at that.