/srv/irclogs.ubuntu.com/2012/02/23/#juju.txt

=== elmo_ is now known as elmo
_mup_Bug #939164 was filed: add support to lxc provider backend in go port <juju:New> < https://launchpad.net/bugs/939164 >01:19
=== andrewsmedina_ is now known as andrewsmedina
=== bradm_ is now known as bradm
_mup_juju/enhanced-relation-support r8 committed by jim.baker@canonical.com05:35
_mup_Use cases and impl details05:35
=== grapz is now known as grapz_afk
=== grapz_afk is now known as grapz
xerxashi all10:32
xerxashazmat: I got juju mostly working on osx, thanks to the brew formula you sent me10:32
xerxasI have a problem though10:32
xerxas» juju deploy --repository . local:cf-redis10:32
xerxas2012-02-23 11:31:58,200 INFO Connecting to environment...10:32
xerxas2012-02-23 11:32:02,619 INFO Connected to environment.10:32
xerxas[Errno 21] Is a directory10:32
xerxas2012-02-23 11:32:02,822 ERROR [Errno 21] Is a directory10:32
xerxaswhat i did is : bzr branch lp:charms/cf-redis10:33
xerxasam I supposed to tell juju to deploy a file?10:33
=== grapz is now known as grapz_afk
=== grapz_afk is now known as grapz
=== grapz is now known as grapz_afk
=== grapz_afk is now known as grapz
benjiIs it possible to stop an EC2 juju environment and then restart it later?  If so, is there a particular procedure for doing it correctly?12:36
benjiIt looks like it might work, but the fact that restarted machines get new IPs is keeping the zookeeper machine from being able to connect to the others.12:47
=== grapz is now known as grapz_afk
benjiI wonder if it would work if I update the IPs in ZK.12:48
=== grapz_afk is now known as grapz
=== grapz is now known as grapz_afk
=== grapz_afk is now known as grapz
fwereade_xerxas, a repository is a directory containing subdirs for individual ubuntu series; the charms for a particular series go in the appropriate subdir13:26
hazmatxerxas, juju wants a particular repository structure.. ie.. try this.. mkdir  -p myrepo/oneiric && cd myrepo/oneiric && bzr branch lp:charms  && cd ..13:26
fwereade_hazmat, that's ludicrous timing13:26
hazmatwhoops cd../..13:26
hazmatfwereade_, indeed it is.. serendipity.. good morning/afternoon13:27
xerxashazmat:  I think that's pretty much what I have ...13:27
fwereade_hazmat, good afternoon/morning :)13:27
xerxasbut I'll retry13:27
hazmatxerxas, try it with juju -v deploy  as well.. it should give the full traceback on errors then13:28
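
A minimal sketch of the repository layout fwereade_ and hazmat describe above; the charm name and paths are illustrative, and exact deploy syntax may vary by juju version:

    mkdir -p ~/myrepo/oneiric
    cd ~/myrepo/oneiric
    bzr branch lp:charms/cf-redis cf-redis    # charm source lives at <repo>/<series>/<charm>
    cd ~
    juju deploy --repository ~/myrepo local:oneiric/cf-redis
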
benjiIs it possible to stop an EC2 juju environment and then restart it later?  If so, is there a particular procedure for doing it correctly?13:40
hazmatbenji, its not supported..13:52
benjidarn13:52
hazmatbenji, with the restart support fwereade_ added, its conceivable..13:52
hazmatbenji, but the ip address changes are a bit more problematic13:52
hazmathmm13:53
benjiI don't think it's too far off.  The main problem would appear to be that the machine's IP addresses change.13:53
benjithe machine IDs are consistent so refreshing the IPs seems plausible13:53
hazmatbenji, yeah.. we could tell the agents to go fetch zk servers directly from the s3 bucket13:53
hazmatbenji, right, but outside of the provisioning agent, none of the agents are supposed to have provider credential access13:54
hazmatso machine id by itself isn't helpful..13:54
hazmatwe'd have to modify the provisioning agent to check/update those addresses in the s3 bucket at startup, which isn't so bad13:55
hazmatbenji, and the actual shutdown/resume would be done out of band13:56
jcastrohazmat: #juju-dev is now logged13:56
benjiunfortunately, we have a charm that takes a *long* time to set up (1+ hours), and it has to run on an m1.large instance but we don't really want to have to run it 24-7 when we're only in development13:56
jcastrohave you looked at using LXC while developing?13:57
benjiwe've been using LXC too, but the charm uses LXC itself, and since LXC isn't turtles-all-the-way-down yet, we couldn't keep using it13:58
jcastroah13:58
=== Furao_ is now known as Furao
hazmatjcastro, thank you14:14
gary_posterm_3, hi.  Please consider this a gentle nag to complete our charm review. :-)  Thank you.14:20
=== Furao_ is now known as Furao
jcastrook guys, I've got approval to launch this bad boy: https://juju.ubuntu.com/CharmContest14:30
jcastrowriting up the blog post now.14:30
jcastroso if you have buddies that want to win some Amazon gift cards ...14:31
_mup_Bug #939552 was filed: Juju should support MAAS as a provider <juju:New> < https://launchpad.net/bugs/939552 >14:44
TREllis_Hi, I'm trying to run juju with lxc on precise. I've got to the stage where it's running the lxc-start command and starting the container, but the charm (any) remains pending15:09
TREllis_after connecting with lxc-console I can see that the agent isn't running15:10
TREllis_trying to start it manually, or looking in the /var/log/juju logs, I see that the command arguments passed to the python module are invalid... -> http://paste.ubuntu.com/854086/15:11
hazmatTREllis_, that looks like a version disconnect.. newer version on host, older version on the container.. there were some recent changes to how agents get started15:12
hazmatTREllis_, are you using the ppa on the host or precise packages?15:13
TREllis_hazmat: ah-ha.15:14
hazmatTREllis_, you can specify what the container/launched machine uses for a version with the juju-origin setting in environments.yaml..  a value of  'ppa' should do it15:15
TREllis_hazmat: I'm using precise+ppa on host, 0.5+bzr464-1juju2~precise1 which is the ppa version afaik. I see the archive has a slightly older one in precise. That'll be the issue. I guess that can be fixed by juju-origin: ppa in environments.yaml?15:15
TREllis_ah yeah that :)15:15
hazmatit should be autodetecting the value for juju-origin15:15
hazmatthe correct value that is15:15
TREllis_hazmat: ah ok, right, I had it set manually to distro. My bad15:16
=== TREllis_ is now known as TREllis
TREllishazmat: thanks!15:17
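
The juju-origin setting hazmat and TREllis are discussing lives in environments.yaml; a sketch (the environment name and other keys are illustrative):

    environments:
      sample:
        type: local              # provider type is illustrative
        juju-origin: ppa         # or "distro"; autodetected when omitted
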
cliff-hmjcastro, is the blog post published? If so, url?15:25
jcastrohttp://cloud.ubuntu.com/2012/02/juju-charm-contest-help-bring-free-software-into-the-cloud/15:25
jcastrohttps://juju.ubuntu.com/CharmContest15:25
jcastrocomplete rules there15:25
TREllishazmat: yep "state: started" that was it - thanks!15:27
cliff-hmthank you :)15:28
m_3gary_poster: thanks, will do15:31
SpamapShazmat: IMO, this is somewhat madness.. determining where to get juju from by looking at where juju was installed from. I think its time to start looking at having bootstrap freeze and distribute the version of juju on the client in file storage.15:32
gary_posterthank you15:32
m_3jcastro: whoohoo!15:33
jcastroheya m_3, no pressure on that summit charm, hahah15:33
m_3jcastro: unhuh... :)15:33
hazmatSpamapS, i don't think its madness, but distributing juju via provider storage has a lot of merit as well for cross distro release environment upgrades15:35
jcastrohttp://news.ycombinator.com/item?id=362515315:36
jcastroshooting for my first submission to HN!15:36
SpamapShazmat: well at least if we do use apt to try and determine where juju was installed from, we should pin the version15:38
hazmatSpamapS, and if that version isn't avail anymore?15:38
SpamapShazmat: you get a clear failure15:38
SpamapSinstead of this "oops something weird went wrong"15:38
SpamapSERROR: your client version is not available in your specified juju-origin (ppa)15:39
hazmatSpamapS, you don't though.. you just get a started machine without a running agent.. this applies to the provisioning agent launching machines for new service units as well15:39
SpamapShazmat: no I want the client to ERROR15:39
hazmati guess that doesn't apply if its already in storage15:39
marcoceppiUpvoted!15:40
hazmatbut for apt cases it does15:40
SpamapShazmat: right... so much simpler if we make juju responsible for deploying itself15:40
SpamapShazmat: I acknowledge that this is *not* a tiny or super simple change... but I think its going to be necessary.15:43
hazmatSpamapS, that's not making 12.04..15:44
SpamapSoh snap... I just saw an ad for season 2 of game of thrones and got goose bumps15:45
SpamapS<--- nerd15:45
SpamapShazmat: 12.04 is *supposed* to be done.15:46
SpamapShazmat: feature freeze was a week ago. The release team can tell us to go to hell at this point. ;)15:46
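
A rough sketch of the pinning SpamapS suggests, using the PPA version string mentioned earlier in the channel; the file name and priority are illustrative, and this is not how juju actually handled the problem at the time:

    # pin a launched machine to the same juju version as the client
    cat <<'EOF' | sudo tee /etc/apt/preferences.d/juju-pin
    Package: juju
    Pin: version 0.5+bzr464-1juju2~precise1
    Pin-Priority: 1001
    EOF
    sudo apt-get update && sudo apt-get install juju
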
charmbotProject oneiric-ec2-charm-bitlbee build #1: FAILURE in 1.3 sec: http://ec2-67-202-62-13.compute-1.amazonaws.com:8080/job/oneiric-ec2-charm-bitlbee/1/17:09
charmbotProject oneiric-local-charm-bitlbee build #1: FAILURE in 1.8 sec: http://ec2-67-202-62-13.compute-1.amazonaws.com:8080/job/oneiric-local-charm-bitlbee/1/17:10
jcastrorobbiew: I love how no one noticed that mmims shaved the mountain man face.17:23
robbiewlol...eh...I've seen him both ways....now if he shaved HIS HEAD, I'd notice17:23
jamespagem_3: love the charmbot!17:24
m_3jamespage: ha!  I was just looking at the time to see if I could catch you17:26
jamespagem_3, not for long...17:26
m_3I'm getting build-publisher login barfs... let's go PM17:26
m_3BTW, there'll be some noise in the channel for a bit while testing tests17:30
charmbotProject oneiric-ec2-charm-bitlbee build #2: STILL FAILING in 0.88 sec: http://ec2-67-202-62-13.compute-1.amazonaws.com:8080/job/oneiric-ec2-charm-bitlbee/2/17:30
jiltdilPlease tell me  what is charm contest?17:36
m_3jiltdil: https://juju.ubuntu.com/CharmContest17:36
jiltdilm_3,  Thanks17:37
m_3sure thing17:39
jiltdilok17:40
charmbotProject oneiric-ec2-charm-bitlbee build #4: STILL FAILING in 0.36 sec: https://jenkins.qa.ubuntu.com/job/oneiric-ec2-charm-bitlbee/4/17:50
SpamapScharmbot: YOU ARE SO AWESOME!17:54
charmbotSpamapS you may not issue bot commands in this chat!17:54
SpamapStho.. a bit cold17:55
m_3SpamapS: :)17:56
m_3SpamapS: we can enable kick-offs through the bot too17:56
SpamapS"...we are being robbed of an art form as ancient as computers themselves."17:57
SpamapShttp://tatiyants.com/devops-is-ruining-my-craft/17:57
SpamapSa must read for anyone17:57
m_3nice18:00
koolhead17gosh we have a bot here now in channel?18:06
m_3koolhead17: yup, another one... mup's for bugs, charmbot's for tests18:07
* koolhead17 says hi to charmbot18:07
koolhead17m_3: cool!!18:07
m_3the links are still not live, but they should be tomorrow europe-time18:08
m_3hazmat: frickin love snapshot restore!!18:45
hazmatm_3, nice!18:46
hazmatm_3, i was figuring we could make some minor additions to the loader to have it work against status output, which would also be a nice tool18:47
hazmatm_3, the only major limitation i can think of on the snapshot restore, is if you start having relations between services created after the snapshot and those before, then it breaks isolation18:48
m_3hazmat: and sort of an impl of stacks :)18:49
m_3right18:49
hazmatm_3, i don't know what those are ;-)18:49
m_3gotta snapshot early in the process18:49
hazmatm_3, yeah.. it probably makes sense to have a reset option just to go back to pristine..18:50
m_3but that really does smell like a set of staged commits... that can be rolled back even18:50
hazmatwhich is actually the more common usage for testing18:50
m_3hazmat: true18:50
m_3you want MPs or you wanna make it owned by a group instead of just you?18:50
m_3(path issues)18:51
m_3loaders trying to read plans from a different place than the planner put them18:51
m_3I can split it out into separate commands... but...18:52
hazmatjamespage, those charms are looking pretty good19:28
* hazmat pokes out at the hbase and zk charms19:28
hazmatjamespage, just curious if you were going to split out the hbase into hdfs nodes as well (which is already extant i thought).. or does it make more sense to just run those inline for locality19:29
hazmatm_3, mps19:29
hazmatm_3, or a group is fine..19:30
hazmatm_3, any suggestions on group?19:30
hazmatmaybe juju-jitsu19:30
jamespagehazmat: I was pondering the same thing myself (splitting the hdfs nodes)19:31
jamespagethat way they could also be shared with mapreduce.19:31
jamespagehazmat: rolling restarts of the hbase-slave nodes is not great ATM - I'm working around the issue with a staggered sleep to stop them losing the ROOT and META regions.19:32
hazmatjamespage, hmm.. yeah.. we need a good recipe for doing that with peer relations to coordinate a rolling op19:33
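
A sketch of the staggered-sleep workaround jamespage describes, deriving each unit's delay from its unit number; the spacing and the service name are assumptions:

    #!/bin/sh
    # JUJU_UNIT_NAME (e.g. "hbase-slave/3") is set by juju in the hook environment
    unit_number=${JUJU_UNIT_NAME#*/}
    sleep $((unit_number * 60))                  # 60s spacing is arbitrary
    service hadoop-hbase-regionserver restart    # exact service name is an assumption
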
m_3hazmat: average cluster size is 140 datanodes... goes up to thousands... peer relations would just explode19:40
m_3hazmat: juju-jitsu's fine for charmrunner19:40
jamespagem_3: I think thats where its important to break a cluster up into multiple service instances19:43
jamespagedatacluster1, datacluster2 etc...19:43
m_3right19:43
jamespagethey can all still talk to one master service19:43
m_3exactly... that should scale nicely19:43
m_3even to thousands of nodes19:43
jamespagebut it allows you to manage them as discrete regions in a deployment19:43
m_3and roll19:44
m_3not sure what the price would be as far as data integrity...19:45
m_3first thought is to shard across these datasets,19:45
m_3but then that wouldn't really support a rolling upgrade19:45
m_3we need mirroring too19:45
hazmatm_3, explode is a strong word.. they'd work fine, but they'd be chatty, but if you're talking about lock-step distributed coordination.. it is what it is.20:03
m_3hazmat: :)20:03
hazmatm_3, it might be a little nicer with an explicit leader election which would be a lot less chatty.. but even then we'd need to communicate that and allow for the next leader, which would again be via peer rel data20:04
m_3hazmat: would love to see that realized with a single leader20:06
m_3or ideally cascading groups of local leaders20:06
m_3but it gets pretty big pretty quick if pure peers have to talk to everybody else20:07
m_3hazmat: the good news is that there's a nice global sequence number that's ordered and easily divided20:10
* m_3 flashbacks to PE classes... duck duck goose20:11
hazmatm_3, that works for a more static leader based on the presence/availability of a node, but for a transient leader, effectively a lock around a rolling operation, feels like it wants for a different primitive.20:34
charmbotYippie, build fixed!20:40
charmbotProject oneiric-local-charm-bitlbee build #2: FIXED in 42 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-bitlbee/2/20:40
m_3whoohoo!20:47
charmbotProject oneiric-local-charm-byobu-classroom build #1: FAILURE in 6.5 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-byobu-classroom/1/20:50
charmbotProject oneiric-local-charm-cassandra build #1: FAILURE in 6.3 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-cassandra/1/20:50
charmbotProject oneiric-local-charm-ceph build #1: FAILURE in 6.3 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-ceph/1/20:50
charmbotProject oneiric-local-charm-cf-mongodb build #1: FAILURE in 6.2 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-cf-mongodb/1/20:50
charmbotProject oneiric-local-charm-cf-mysql build #1: FAILURE in 6.2 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-cf-mysql/1/20:50
charmbotProject oneiric-local-charm-cf-redis build #1: FAILURE in 6.2 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-cf-redis/1/20:50
charmbotProject oneiric-local-charm-cloudfoundry-server build #1: FAILURE in 6.7 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-cloudfoundry-server/1/20:50
charmbotProject oneiric-local-charm-cloudfoundry-server-dea build #1: FAILURE in 6.4 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-cloudfoundry-server-dea/1/20:50
charmbotProject oneiric-local-charm-couchbase build #1: FAILURE in 6.4 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-couchbase/1/20:50
charmbotProject oneiric-local-charm-couchdb build #1: FAILURE in 6.1 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-couchdb/1/20:50
charmbotProject oneiric-local-charm-etherpad-lite build #1: FAILURE in 6.4 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-etherpad-lite/1/20:51
charmbotProject oneiric-local-charm-ganglia build #1: FAILURE in 6.5 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-ganglia/1/20:51
charmbotProject oneiric-local-charm-hadoop-mapreduce build #1: FAILURE in 6.5 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-hadoop-mapreduce/1/20:51
charmbotProject oneiric-local-charm-hadoop-master build #1: FAILURE in 6.7 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-hadoop-master/1/20:51
charmbotProject oneiric-local-charm-hadoop-slave build #1: FAILURE in 6.2 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-hadoop-slave/1/20:51
charmbotProject oneiric-local-charm-haproxy build #1: FAILURE in 6.2 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-haproxy/1/20:51
charmbotProject oneiric-local-charm-jenkins build #1: FAILURE in 6.2 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-jenkins/1/20:51
charmbotProject oneiric-local-charm-jenkins-slave build #1: FAILURE in 6.2 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-jenkins-slave/1/20:51
charmbotProject oneiric-local-charm-limesurvey build #1: FAILURE in 6.2 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-limesurvey/1/20:51
charmbotProject oneiric-local-charm-mediawiki build #1: FAILURE in 6.7 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-mediawiki/1/20:52
charmbotProject oneiric-local-charm-membase build #1: FAILURE in 6.4 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-membase/1/20:52
charmbotProject oneiric-local-charm-memcached build #1: FAILURE in 6.4 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-memcached/1/20:52
m_3hmmm... these'll clear through in a sec... sorry gang20:52
charmbotProject oneiric-local-charm-minecraft build #1: FAILURE in 6.2 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-minecraft/1/20:52
charmbotProject oneiric-local-charm-mongodb build #1: FAILURE in 6.2 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-mongodb/1/20:52
charmbotProject oneiric-local-charm-munin build #1: FAILURE in 6.3 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-munin/1/20:52
m_3I'll move them out to ##charmbot-test while debugging20:54
jimbaker bcsaller, can you take a look at https://code.launchpad.net/~hazmat/juju/env-from-env/+merge/93321 ?21:07
jimbakerit's really a trivial change, and will help m_3 help out on his charm testing21:07
bcsallerlooking21:07
bcsallerapproved21:10
jimbakerbcsaller, thanks!21:10
hazmatm_3, yeah.. not sure that bot should ever be round these parts21:20
hazmatits rather distracting ;-)21:20
m_3hazmat: yup... any suggestions for where it should live?21:28
m_3we need ways to notify charm authors when stuff is breaking21:29
hazmatm_3, i would think email for the charm author, and either a secondary mailing list or irc channel21:31
hazmatm_3, it may not always be valid or available, but the bzr commits do contain some identity information, its somewhat freeform, but 98% of the time it appears to be a valid email, minus some of yours ;-)21:32
hazmatm_3, better would be an explicit email/maintainer field for a charm21:33
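
The explicit maintainer field hazmat proposes might look like this in a charm's metadata.yaml; it did not exist at the time of this discussion, so treat it as a sketch (name and address are made up):

    name: cf-redis
    maintainer: "Jane Charmer <jane@example.com>"    # proposed field, not yet implemented
    summary: Redis for Cloud Foundry
    description: |
      ...
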
SpamapSI actually do think the bot should be here... but only notify us of changes in state after the tests are stable.21:58
SpamapShazmat: I like having it here. This is really where users and charmers go to discuss charms. Raises awareness.21:58
hazmatSpamapS,  stable means not spamming the channel?22:01
SpamapShazmat: yeah I think that is more a product of the tests still being very new.22:02
SpamapSonce they're passing reliably over and over and over, having them here is a great way to raise awareness of their existence.22:02
m_3gotcha... well I'll leave them in ##charmbot-test until stable... then move them back22:09
jcastroSpamapS: we need to talk slides for our webinar tomorrow22:23
jcastrolet me rephrase, we should talk tomorrow about our slides, before the webinar rehearsal on tuesday22:23
SpamapSjcastro: thank you, heart attach averted22:24
SpamapSattack even22:24
SpamapShmm22:27
koolhead17jcastro: around22:34
jcastroyeah for a few minutes22:35
koolhead17jcastro: http://atuljha.com/blog/2012/02/24/newbies-guide-to-joining-cloud-ecosystem/22:39
koolhead17what do u think of this :P22:39
SpamapSall: I'd appreciate a final review/+1 for the tests spec here: https://codereview.appspot.com/5624044/diff/7001/source/charm-tests.rst22:49
SpamapSHmm23:15
SpamapShazmat: was looking at your 'env-from-env' branch, and thinking, what about JUJU_REPOSITORY to allow setting the repository in the env too? That would basically eliminate the need for a wrapper to run the tests (assuming we can also override the default charm namespace in a similar fashion)23:16
hazmat SpamapS yeah... that seems pretty reasonable23:36
hazmatSpamapS, i think we had discussed with niemeyer_ possibly sticking that in environments.yaml23:37
hazmatSpamapS, i don't know that would remove the need for the wrapper23:37
hazmatSpamapS, since the wrapper is still obviating the need for specifying the explicit namespace on the cli.. ie local:charm23:37
SpamapShazmat: I also want to add a JUJU_CHARM_NAMESPACE too :)23:41
hazmatSpamapS, fail ;-)23:45
SpamapShazmat: that way the tests are really just running with a cultivated environment, rather than some special wrapper23:45
hazmatSpamapS, yeah.. that one is a little more odd, i'd really have to defer to the tastemaster23:46
SpamapSlol23:46
hazmatSpamapS, but wouldn't you have the wrapper even with that for the extra commands23:46
SpamapShazmat: removed the extra commands for now23:47
hazmatwhich i still think should be a different command to avoid confusion.23:47
hazmatSpamapS, cool23:47
hazmatSpamapS, what about using environment fixtures for loading23:47
hazmati guess its equiv to the cli.. but it would give a more defined data structure for us to manipulate the cultured environment23:48
SpamapSI think both should be possible, but doing it in a static file is not helpful to automated testing... just increases the crap we have to cleanup/edit on tests.23:48
hazmatSpamapS, yeah.. its a little too manicured for ad hoc test scenario construction23:49
SpamapShazmat: eventually repository definitely should be in ~/.juju somewhere .. so people can just share that dir or environments.yaml and get the same result23:50
hazmatSpamapS, that seems a bit like arbitrary structure23:51
SpamapShazmat: ESHORTHAND .. what?23:52
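
What a test run might look like if both proposed variables existed; JUJU_REPOSITORY and JUJU_CHARM_NAMESPACE are only being proposed in this conversation, so treat the names and behaviour as hypothetical:

    export JUJU_REPOSITORY="$PWD"        # would replace --repository on every invocation
    export JUJU_CHARM_NAMESPACE=local    # would make "cf-redis" resolve as local:cf-redis
    juju deploy cf-redis                 # instead of: juju deploy --repository . local:cf-redis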
