=== elmo_ is now known as elmo
[01:19] <_mup_> Bug #939164 was filed: add support to lxc provider backend in go port < https://launchpad.net/bugs/939164 >
=== andrewsmedina_ is now known as andrewsmedina
=== bradm_ is now known as bradm
[05:35] <_mup_> juju/enhanced-relation-support r8 committed by jim.baker@canonical.com
[05:35] <_mup_> Use cases and impl details
=== grapz is now known as grapz_afk
=== grapz_afk is now known as grapz
[10:32] hi all
[10:32] hazmat: I got juju mostly working on osx, thanks to the brew formula you sent me
[10:32] I have a problem though
[10:32] » juju deploy --repository . local:cf-redis
[10:32] 2012-02-23 11:31:58,200 INFO Connecting to environment...
[10:32] 2012-02-23 11:32:02,619 INFO Connected to environment.
[10:32] [Errno 21] Is a directory
[10:32] 2012-02-23 11:32:02,822 ERROR [Errno 21] Is a directory
[10:33] what i did is : bzr branch lp:charms/cf-redis
[10:33] am I supposed to tell juju to deploy a file ?
=== grapz is now known as grapz_afk
=== grapz_afk is now known as grapz
=== grapz is now known as grapz_afk
=== grapz_afk is now known as grapz
[12:36] Is it possible to stop an EC2 juju environment and then restart it later? If so, is there a particular procedure for doing it correctly?
[12:47] It looks like it might work, but the fact that restarted machines get new IPs is keeping the zookeeper machine from being able to connect to the others.
=== grapz is now known as grapz_afk
[12:48] I wonder if it would work if I update the IPs in ZK.
=== grapz_afk is now known as grapz
=== grapz is now known as grapz_afk
=== grapz_afk is now known as grapz
[13:26] xerxas, a repository is a directory containing subdirs for individual ubuntu series; the charms for a particular series go in the appropriate subdir
[13:26] xerxas, juju wants a particular repository structure.. ie.. try this.. mkdir -p myrepo/oneiric && cd myrepo/oneiric && bzr branch lp:charms && cd ..
[13:26] hazmat, that's ludicrous timing
[13:26] whoops cd ../..
[13:27] fwereade_, indeed it is.. serendipity.. good morning/afternoon
[13:27] hazmat: I think that's pretty much what I have ...
[13:27] hazmat, good afternoon/morning :)
[13:27] but I'll retry
[13:28] xerxas, try it with juju -v deploy as well.. it should give the full traceback on errors then
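A minimal sketch of the repository layout hazmat describes above, using the cf-redis charm from xerxas's paste. Directory names are just examples, and the lp:charms/cf-redis branch URL is taken from xerxas's original command; the point is that --repository points at the repository root, which contains a per-series subdirectory holding the charm.

    # Illustrative only -- build a local charm repository and deploy from it.
    mkdir -p myrepo/oneiric
    cd myrepo/oneiric
    bzr branch lp:charms/cf-redis     # charm ends up at myrepo/oneiric/cf-redis
    cd ../..
    # Deploy against the repo root (not the charm branch itself); -v gives full tracebacks.
    juju -v deploy --repository myrepo local:cf-redis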
[13:40] Is it possible to stop an EC2 juju environment and then restart it later? If so, is there a particular procedure for doing it correctly?
[13:52] benji, its not supported..
[13:52] darn
[13:52] benji, with the restart support fwereade_ added, its conceivable..
[13:52] benji, but the ip address changes are a bit more problematic
[13:53] hmm
[13:53] I don't think it's too far off. The main problem would appear to be that the machine's IP addresses change.
[13:53] the machine IDs are consistent so refreshing the IPs seems plausible
[13:53] benji, yeah.. we could tell the agents to go fetch zk servers directly from the s3 block
[13:54] benji, right, but outside of the provisioning agent, none of the agents is supposed to have provider credential access
[13:54] so machine id by itself isn't helpful..
[13:55] we'd have to modify the provisioning agent to check/update those addresses in the s3 bucket at startup, which isn't so bad
[13:56] benji, and the actual shutdown/resume would be done out of band
[13:56] hazmat: #juju-dev is now logged
[13:56] unfortunately, we have a charm that takes a *long* time to set up (1+ hours), and it has to run on an m1.large instance but we don't really want to have to run it 24-7 when we're only in development
[13:57] have you looked at using LXC while developing?
[13:58] we've been using LXC too, but the charm uses LXC itself, and since LXC isn't turtles-all-the-way-down yet, we couldn't keep using it
[13:58] ah
=== Furao_ is now known as Furao
[14:14] jcastro, thank you
[14:20] m_3, hi. Please consider this a gentle nag to complete our charm review. :-) Thank you.
=== Furao_ is now known as Furao
[14:30] ok guys, I've got approval to launch this bad boy: https://juju.ubuntu.com/CharmContest
[14:30] writing up the blog post now.
[14:31] so if you have buddies that want to win some Amazon gift cards ...
[14:44] <_mup_> Bug #939552 was filed: Juju should support MAAS as a provider < https://launchpad.net/bugs/939552 >
[15:09] Hi, I'm trying to run juju with lxc on precise. I've got to the stage where it's running the lxc-start command and starting the container, but the charm (any) remains pending
[15:10] after connecting with lxc-console I can see that the agent isn't running
[15:11] trying to start it manually, or looking in the /var/log/juju logs, I see that the command arguments passed to the python module are invalid... -> http://paste.ubuntu.com/854086/
[15:12] TREllis_, that looks like a version disconnect.. newer version on host, older version on the container.. there were some recent changes to how agents get started
[15:13] TREllis_, are you using the ppa on the host or precise packages?
[15:14] hazmat: ah-ha.
[15:15] TREllis_, you can specify what the container/launched machine uses for a version with the juju-origin setting in environments.yaml.. a value of 'ppa' should do it
[15:15] hazmat: I'm using precise+ppa on host, 0.5+bzr464-1juju2~precise1 which is the ppa version afaik. I see the archive has a slightly older one in precise. That'll be the issue. I guess that can be fixed by juju-origin: ppa in environments.yaml?
[15:15] ah yeah that :)
[15:15] it should be autodetecting the value for juju-origin
[15:15] the correct value that is
[15:16] hazmat: ah ok, right, I had it set manually to distro. My bad
=== TREllis_ is now known as TREllis
[15:17] hazmat: thanks!
[15:25] jcastro, is the blog post published? If so, url?
[15:25] http://cloud.ubuntu.com/2012/02/juju-charm-contest-help-bring-free-software-into-the-cloud/
[15:25] https://juju.ubuntu.com/CharmContest
[15:25] complete rules there
[15:27] hazmat: yep "state: started" that was it - thanks!
[15:28] thank you :)
[15:31] gary_poster: thanks, will do
[15:32] hazmat: IMO, this is somewhat madness.. determining where to get juju from based on where juju was installed from. I think its time to start looking at having bootstrap freeze and distribute the version of juju on the client in file storage.
[15:32] thank you
[15:33] jcastro: whoohoo!
[15:33] heya m_3, no pressure on that summit charm, hahah
[15:33] jcastro: unhuh... :)
[15:35] SpamapS, i don't think its madness, but distributing juju via provider storage has a lot of merit as well for cross distro release environment upgrades
[15:36] http://news.ycombinator.com/item?id=3625153
[15:36] shooting for my first submission to HN!
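For reference, a sketch of what the juju-origin setting TREllis and hazmat discuss above looks like in environments.yaml. The environment name "lxc-test" is made up and the other keys the local provider requires are deliberately omitted; this is illustrative, not a complete or recommended configuration.

    # Print an illustrative environments.yaml fragment (nothing is written to disk).
    cat <<'EOF'
    environments:
      lxc-test:
        type: local
        juju-origin: ppa   # launched machines/containers install juju from the PPA, matching the host
    EOF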
[15:38] hazmat: well at least if we do use apt to try and determine where juju was installed from, we should pin the version
[15:38] SpamapS, and if that version isn't avail anymore?
[15:38] hazmat: you get a clear failure
[15:38] instead of this "oops something weird went wrong"
[15:39] ERROR: your client version is not available in your specified juju-origin (ppa)
[15:39] SpamapS, you don't though.. you just get a started machine without a running agent.. this applies to the provisioning agent launching machines for new service units as well
[15:39] hazmat: no I want the client to ERROR
[15:39] i guess that doesn't apply if its already in storage
[15:40] Upvoted!
[15:40] but for apt cases it does
[15:40] hazmat: right... so much simpler if we make juju responsible for deploying itself
[15:43] hazmat: I acknowledge that this is *not* a tiny or super simple change... but I think its going to be necessary.
[15:44] SpamapS, that's not making 12.04..
[15:45] oh snap... I just saw an ad for season 2 of game of thrones and got goose bumps
[15:45] <--- nerd
[15:46] hazmat: 12.04 is *supposed* to be done.
[15:46] hazmat: feature freeze was a week ago. The release team can tell us to go to hell at this point. ;)
[17:09] Project oneiric-ec2-charm-bitlbee build #1: FAILURE in 1.3 sec: http://ec2-67-202-62-13.compute-1.amazonaws.com:8080/job/oneiric-ec2-charm-bitlbee/1/
[17:10] Project oneiric-local-charm-bitlbee build #1: FAILURE in 1.8 sec: http://ec2-67-202-62-13.compute-1.amazonaws.com:8080/job/oneiric-local-charm-bitlbee/1/
[17:23] robbiew: I love how no one noticed that mmims shaved the mountain man face.
[17:23] lol...eh...I've seen him both ways....now if he shaved HIS HEAD, I'd notice
[17:24] m_3: love the charmbot!
[17:26] jamespage: ha! I was just looking at the time to see if I could catch you
[17:26] m_3, not for long...
[17:26] I'm getting build-publisher login barfs... let's go PM
[17:30] BTW, there'll be some noise in the channel for a bit while testing tests
[17:30] Project oneiric-ec2-charm-bitlbee build #2: STILL FAILING in 0.88 sec: http://ec2-67-202-62-13.compute-1.amazonaws.com:8080/job/oneiric-ec2-charm-bitlbee/2/
[17:36] Please tell me what is the charm contest?
[17:36] jiltdil: https://juju.ubuntu.com/CharmContest
[17:37] m_3, Thanks
[17:39] sure thing
[17:40] ok
[17:50] Project oneiric-ec2-charm-bitlbee build #4: STILL FAILING in 0.36 sec: https://jenkins.qa.ubuntu.com/job/oneiric-ec2-charm-bitlbee/4/
[17:54] charmbot: YOU ARE SO AWESOME!
[17:54] SpamapS you may not issue bot commands in this chat!
[17:55] tho.. a bit cold
[17:56] SpamapS: :)
[17:56] SpamapS: we can enable kick-offs through the bot too
[17:57] "...we are being robbed of an art form as ancient as computers themselves."
[17:57] http://tatiyants.com/devops-is-ruining-my-craft/
[17:57] a must read for anyone
[18:00] nice
[18:06] gosh we have a bot here now in channel?
[18:07] koolhead17: yup, another one... mup's for bugs, charmbot's for tests
[18:07] * koolhead17 says hi to charmbot
[18:07] m_3: cool!!
[18:08] the links are still not live, but they should be tomorrow europe-time
[18:45] hazmat: frickin love snapshot restore!!
[18:46] m_3, nice!
[18:47] m_3, i was figuring we could make some minor additions to the loader to have it work against status output.. that would also be a nice tool
[18:48] m_3, the only major limitation i can think of on the snapshot restore, is if you start having relations between services created after the snapshot and those before, then it breaks isolation
[18:49] hazmat: and sort of an impl of stacks :)
[18:49] right
[18:49] m_3, i don't know what those are ;-)
[18:49] gotta snapshot early in the process
[18:50] m_3, yeah.. it probably makes sense to have a reset option just to go back to pristine..
[18:50] but that really does smell like a set of staged commits... that can be rolled back even
[18:50] which is actually the more common usage for testing
[18:50] hazmat: true
[18:50] you want MPs or you wanna make it owned by a group instead of just you?
[18:51] (path issues)
[18:51] loaders trying to read plans from a different place than the planner put them
[18:52] I can split it out into separate commands... but...
[19:28] jamespage, those charms are looking pretty good
[19:28] * hazmat pokes out at the hbase and zk charms
[19:29] jamespage, just curious if you were going to split out the hbase into hdfs nodes as well (which is already extant i thought).. or does it make more sense to just run those inline for locality
[19:29] m_3, mps
[19:30] m_3, or a group is fine..
[19:30] m_3, any suggestions on group?
[19:30] maybe juju-jitsu
[19:31] hazmat: I was pondering the same thing myself (splitting the hdfs nodes)
[19:31] that way they could also be shared with mapreduce.
[19:32] hazmat: rolling restarts of the hbase-slave nodes is not great ATM - I'm working around the issue with a staggered sleep to stop them losing the ROOT and META regions.
[19:33] jamespage, hmm.. yeah.. we need a good recipe for doing that with peer relations to coordinate a rolling op
[19:40] hazmat: average cluster size is 140 datanodes... goes up to thousands... peer relations would just explode
[19:40] hazmat: juju-jitsu's fine for charmrunner
[19:43] m_3: I think that's where it's important to break a cluster up into multiple service instances
[19:43] datacluster1, datacluster2 etc...
[19:43] right
[19:43] they can all still talk to one master service
[19:43] exactly... that should scale nicely
[19:43] even to thousands of nodes
[19:43] but it allows you to manage them as discrete regions in a deployment
[19:44] and roll
[19:45] not sure what the price would be as far as data integrity...
[19:45] first thought is to shard across these datasets,
[19:45] but then that wouldn't really support a rolling upgrade
[19:45] we need mirroring too
[20:03] m_3, explode is a strong word.. they'd work fine, but they'd be chatty, but if you're talking about lock step distributed coordination.. it is what it is.
[20:03] hazmat: :)
[20:04] m_3, it might be a little nicer with an explicit leader election which would be a lot less chatty.. but even then we'd need to communicate that and allow for the next leader, which would again be via peer rel data
[20:06] hazmat: would love to see that realized with a single leader
[20:06] or ideally cascading groups of local leaders
[20:07] but it gets pretty big pretty quick if pure peers have to talk to everybody else
[20:10] hazmat: the good news is that there's a nice global seqn number that's ordered and easily divided
[20:11] * m_3 flashbacks to PE classes... duck duck goose
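A rough sketch of the peer-relation coordination idea hazmat raises above: a relation-changed hook that treats the lowest-numbered peer as the leader of a rolling operation. The hook tools used (relation-list, relation-set, juju-log) and the JUJU_UNIT_NAME variable match juju's hook environment of the time, but the "cluster" peer relation name and the "rolling-token" settings key are made up for illustration; this is a sketch, not a tested recipe.

    #!/bin/sh
    # hooks/cluster-relation-changed -- illustrative leader election over a peer relation.
    set -e

    me=$JUJU_UNIT_NAME
    # relation-list prints the other peer units; add ourselves, then pick the
    # lowest unit number as the transient leader for this rolling operation.
    leader=$( (relation-list; echo "$me") | sort -t/ -k2 -n | head -n1 )

    if [ "$leader" = "$me" ]; then
        juju-log "elected leader for the rolling restart"
        # restart this unit's service here, then publish progress on the peer
        # relation so the next peer can take its turn.
        relation-set rolling-token="$me"
    else
        juju-log "waiting for leader $leader to finish its restart"
    fi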
[20:34] m_3, that works for a more static leader based on the presence/availability of a node, but for a transient leader, effectively a lock around a rolling operation, feels like it wants for a different primitive.
[20:40] Yippie, build fixed!
[20:40] Project oneiric-local-charm-bitlbee build #2: FIXED in 42 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-bitlbee/2/
[20:47] whoohoo!
[20:50] Project oneiric-local-charm-byobu-classroom build #1: FAILURE in 6.5 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-byobu-classroom/1/
[20:50] Project oneiric-local-charm-cassandra build #1: FAILURE in 6.3 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-cassandra/1/
[20:50] Project oneiric-local-charm-ceph build #1: FAILURE in 6.3 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-ceph/1/
[20:50] Project oneiric-local-charm-cf-mongodb build #1: FAILURE in 6.2 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-cf-mongodb/1/
[20:50] Project oneiric-local-charm-cf-mysql build #1: FAILURE in 6.2 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-cf-mysql/1/
[20:50] Project oneiric-local-charm-cf-redis build #1: FAILURE in 6.2 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-cf-redis/1/
[20:50] Project oneiric-local-charm-cloudfoundry-server build #1: FAILURE in 6.7 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-cloudfoundry-server/1/
[20:50] Project oneiric-local-charm-cloudfoundry-server-dea build #1: FAILURE in 6.4 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-cloudfoundry-server-dea/1/
[20:50] Project oneiric-local-charm-couchbase build #1: FAILURE in 6.4 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-couchbase/1/
[20:50] Project oneiric-local-charm-couchdb build #1: FAILURE in 6.1 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-couchdb/1/
[20:51] Project oneiric-local-charm-etherpad-lite build #1: FAILURE in 6.4 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-etherpad-lite/1/
[20:51] Project oneiric-local-charm-ganglia build #1: FAILURE in 6.5 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-ganglia/1/
[20:51] Project oneiric-local-charm-hadoop-mapreduce build #1: FAILURE in 6.5 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-hadoop-mapreduce/1/
[20:51] Project oneiric-local-charm-hadoop-master build #1: FAILURE in 6.7 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-hadoop-master/1/
[20:51] Project oneiric-local-charm-hadoop-slave build #1: FAILURE in 6.2 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-hadoop-slave/1/
[20:51] Project oneiric-local-charm-haproxy build #1: FAILURE in 6.2 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-haproxy/1/
[20:51] Project oneiric-local-charm-jenkins build #1: FAILURE in 6.2 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-jenkins/1/
[20:51] Project oneiric-local-charm-jenkins-slave build #1: FAILURE in 6.2 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-jenkins-slave/1/
[20:51] Project oneiric-local-charm-limesurvey build #1: FAILURE in 6.2 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-limesurvey/1/
[20:52] Project oneiric-local-charm-mediawiki build #1: FAILURE in 6.7 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-mediawiki/1/
[20:52] Project oneiric-local-charm-membase build #1: FAILURE in 6.4 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-membase/1/
[20:52] Project oneiric-local-charm-memcached build #1: FAILURE in 6.4 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-memcached/1/
[20:52] hmmm... these'll clear through in a sec... sorry gang
[20:52] Project oneiric-local-charm-minecraft build #1: FAILURE in 6.2 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-minecraft/1/
[20:52] Project oneiric-local-charm-mongodb build #1: FAILURE in 6.2 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-mongodb/1/
[20:52] Project oneiric-local-charm-munin build #1: FAILURE in 6.3 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-munin/1/
[20:54] I'll move them out to ##charmbot-test while debugging
[21:07] bcsaller, can you take a look at https://code.launchpad.net/~hazmat/juju/env-from-env/+merge/93321 ?
[21:07] it's really trivial, and will help m_3 help out on his charm testing
[21:07] looking
[21:10] approved
[21:10] bcsaller, thanks!
[21:20] m_3, yeah.. not sure that bot should ever be round these parts
[21:20] its rather distracting ;-)
[21:28] hazmat: yup... any suggestions for where it should live?
[21:29] we need ways to notify charm authors when stuff is breaking
[21:31] m_3, i would think email for the charm author, and either a secondary mailing list or irc channel
[21:32] m_3, it may not always be valid or available, but the bzr commits do contain some identity information, its somewhat freeform, but 98% it appears to be a valid email, minus some of yours ;-)
[21:33] m_3, better would be an explicit email/maintainer field for a charm
[21:58] I actually do think the bot should be here... but only notify us of changes in state after the tests are stable.
[21:58] hazmat: I like having it here. This is really where users and charmers go to discuss charms. Raises awareness.
[22:01] SpamapS, stable means not spamming the channel?
[22:02] hazmat: yeah I think that is more a product of the tests still being very new.
[22:02] once they're passing reliably over and over and over, having them here is a great way to raise awareness of their existence.
[22:09] gotcha... well I'll leave them in ##charmbot-test until stable... then move them back
[22:23] SpamapS: we need to talk slides for our webinar tomorrow
[22:23] let me rephrase, we should talk tomorrow about our slides, before the webinar rehearsal on tuesday
[22:24] jcastro: thank you, heart attach averted
[22:24] attack even
[22:27] hmm
[22:34] jcastro: around
[22:35] yeah for a few minutes
[22:39] jcastro: http://atuljha.com/blog/2012/02/24/newbies-guide-to-joining-cloud-ecosystem/
[22:39] what do u think of this :P
[22:49] all: I'd appreciate a final review/+1 for the tests spec here: https://codereview.appspot.com/5624044/diff/7001/source/charm-tests.rst
[23:15] Hmm
[23:16] hazmat: was looking at your 'env-from-env' branch, and thinking, what about JUJU_REPOSITORY to allow setting the repository in the env too? That would basically eliminate the need for a wrapper to run the tests (assuming we can also override the default charm namespace in a similar fashion)
[23:36] SpamapS yeah... that seems pretty reasonable
[23:37] SpamapS, i think we had discussed with niemeyer_ possibly sticking that in environments.yaml
[23:37] SpamapS, i don't know that would remove the need for the wrapper
[23:37] SpamapS, since the wrapper is still obviating the need for specifying the explicit namespace on the cli.. ie local:charm
[23:41] hazmat: I also want to add a JUJU_CHARM_NAMESPACE too :)
[23:45] SpamapS, fail ;-)
[23:45] hazmat: that way the tests are really just running with a cultivated environment, rather than some special wrapper
[23:46] SpamapS, yeah.. that one is a little more odd, i'd really have to defer to the tastemaster
[23:46] lol
[23:46] SpamapS, but wouldn't you have the wrapper even with that for the extra commands
[23:47] hazmat: removed the extra commands for now
[23:47] which i still think should be a different command to avoid confusion.
[23:47] SpamapS, cool
[23:47] SpamapS, what about using environment fixtures for loading
[23:48] i guess its equiv to the cli.. but it would give a more defined data structure for us to manipulate the cultured environment
[23:48] I think both should be possible, but doing it in a static file is not helpful to automated testing... just increases the crap we have to cleanup/edit on tests.
[23:49] SpamapS, yeah.. its a little too manicured for ad hoc test scenario construction
[23:50] hazmat: eventually repository definitely should be in ~/.juju somewhere .. so people can just share that dir or environments.yaml and get the same result
[23:51] SpamapS, that seems a bit like arbitrary structure
[23:52] hazmat: ESHORTHAND .. what?
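A rough sketch of the environment-variable-driven test run SpamapS is proposing above. JUJU_ENV is what the env-from-env branch under review adds; JUJU_REPOSITORY and JUJU_CHARM_NAMESPACE are only proposals at this point in the conversation, so treat them as hypothetical rather than supported settings.

    # Hypothetical: run a charm test against a chosen environment without a wrapper script.
    export JUJU_ENV=ec2-test                 # environment name from environments.yaml (env-from-env branch)
    export JUJU_REPOSITORY=$HOME/charms      # proposed: default value for --repository
    export JUJU_CHARM_NAMESPACE=local        # proposed: default charm namespace
    juju bootstrap
    juju deploy mysql                        # would resolve to local:mysql out of $JUJU_REPOSITORY
    # ... exercise the charm here ...
    juju destroy-environment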