[01:19] <_mup_> Bug #939164 was filed: add support to lxc provider backend in go port <juju:New> < https://launchpad.net/bugs/939164 >
[05:35] <_mup_> juju/enhanced-relation-support r8 committed by jim.baker@canonical.com
[05:35] <_mup_> Use cases and impl details
[10:32] <xerxas> hi all
[10:32] <xerxas> hazmat: I got juju mostly working on osx, thanks to the brew formula you sent me
[10:32] <xerxas> I have a problem though
[10:32] <xerxas> » juju deploy --repository . local:cf-redis
[10:32] <xerxas> 2012-02-23 11:31:58,200 INFO Connecting to environment...
[10:32] <xerxas> 2012-02-23 11:32:02,619 INFO Connected to environment.
[10:32] <xerxas> [Errno 21] Is a directory
[10:32] <xerxas> 2012-02-23 11:32:02,822 ERROR [Errno 21] Is a directory
[10:33] <xerxas> what i did is : bzr branch lp:charms/cf-redis
[10:33] <xerxas> am I supposed to tell juju to deploy a file?
[12:36] <benji> Is it possible to stop an EC2 juju environment and then restart it later?  If so, is there a particular procedure for doing it correctly?
[12:47] <benji> It looks like it might work, but the fact that restarted machines get new IPs is keeping the zookeeper machine from being able to connect to the others.
[12:48] <benji> I wonder if it would work if I update the IPs in ZK.
[13:26] <fwereade_> xerxas, a repository is a directory containing subdirs for individual ubuntu series; the charms for a particular series go in the appropriate subdir
[13:26] <hazmat> xerxas, juju wants a particular repository structure.. ie.. try this.. mkdir  -p myrepo/oneiric && cd myrepo/oneiric && bzr branch lp:charms  && cd ..
[13:26] <fwereade_> hazmat, that's ludicrous timing
[13:26] <hazmat> whoops, cd ../..
[13:27] <hazmat> fwereade_, indeed it is.. serendipity.. good morning/afternoon
[13:27] <xerxas> hazmat:  I think that's pretty much what I have ...
[13:27] <fwereade_> hazmat, good afternoon/morning :)
[13:27] <xerxas> but I'll retry
[13:28] <hazmat> xerxas, try it with juju -v deploy  as well.. it should give the full traceback on errors then
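For anyone hitting the same `[Errno 21] Is a directory` error: the layout fwereade_ and hazmat describe can be sketched as follows (the `cf-redis` charm and `oneiric` series are taken from the conversation; the `bzr branch` and final `juju deploy` lines are shown as comments since they need network access and a bootstrapped environment):

```shell
# juju expects a repository laid out as: <repository>/<series>/<charm>
mkdir -p myrepo/oneiric/cf-redis
# in practice the charm dir comes from: bzr branch lp:charms/cf-redis
# stand-in file so the expected layout is visible:
touch myrepo/oneiric/cf-redis/metadata.yaml
find myrepo
# then, from the directory containing myrepo:
#   juju deploy --repository myrepo local:cf-redis
```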
[13:40] <benji> Is it possible to stop an EC2 juju environment and then restart it later?  If so, is there a particular procedure for doing it correctly?
[13:52] <hazmat> benji, its not supported..
[13:52] <benji> darn
[13:52] <hazmat> benji, with the restart support fwereade_ added, its conceivable..
[13:52] <hazmat> benji, but the ip address changes are a bit more problematic
[13:53] <hazmat> hmm
[13:53] <benji> I don't think it's too far off.  The main problem would appear to be that the machine's IP addresses change.
[13:53] <benji> the machine IDs are consistent so refreshing the IPs seems plausible
[13:53] <hazmat> benji, yeah.. we could tell the agents to go fetch zk servers directly from the s3 block
[13:54] <hazmat> benji, right, but outside of the provisioning agent, none of the agents are supposed to have provider credential access
[13:54] <hazmat> so machine id by itself isn't helpful..
[13:55] <hazmat> we'd have to modify the provisioning agent to check/update those addresses in the s3 bucket at startup, which isn't so bad
[13:56] <hazmat> benji, and the actual shutdown/resume would be done out of band
[13:56] <jcastro> hazmat: #juju-dev is now logged
[13:56] <benji> unfortunately, we have a charm that takes a *long* time to set up (1+ hours), and it has to run on an m1.large instance but we don't really want to have to run it 24-7 when we're only in development
[13:57] <jcastro> have you looked at using LXC while developing?
[13:58] <benji> we've been using LXC too, but the charm uses LXC itself, and since LXC isn't turtles-all-the-way-down yet, we couldn't keep using it
[13:58] <jcastro> ah
[14:14] <hazmat> jcastro, thank you
[14:20] <gary_poster> m_3, hi.  Please consider this a gentle nag to complete our charm review. :-)  Thank you.
[14:30] <jcastro> ok guys, I've got approval to launch this bad boy: https://juju.ubuntu.com/CharmContest
[14:30] <jcastro> writing up the blog post now.
[14:31] <jcastro> so if you have buddies that want to win some Amazon gift cards ...
[14:44] <_mup_> Bug #939552 was filed: Juju should support MAAS as a provider <juju:New> < https://launchpad.net/bugs/939552 >
[15:09] <TREllis_> Hi, I'm trying to run juju with lxc on precise. I've got to the stage where it's running the lxc-start command and starting the container, but the charm (any) remains pending
[15:10] <TREllis_> after connecting with lxc-console I can see that the agent isn't running
[15:11] <TREllis_> trying to start it manually, or looking in the /var/log/juju logs, I see that the command arguments passed to the python module are invalid... -> http://paste.ubuntu.com/854086/
[15:12] <hazmat> TREllis_, that looks like a version disconnect.. newer version on host, older version on the container.. there were some recent changes to how agents get started
[15:13] <hazmat> TREllis_, are you using the ppa on the host or precise packages?
[15:14] <TREllis_> hazmat: ah-ha.
[15:15] <hazmat> TREllis_, you can specify what the container/launched machine uses for a version with the juju-origin setting in environments.yaml..  a value of  'ppa' should do it
[15:15] <TREllis_> hazmat: I'm using precise+ppa on host, 0.5+bzr464-1juju2~precise1 which is the ppa version afaik. I see the archive has a slightly older one in precise. That'll be the issue. I guess that can be fixed by juju-origin: ppa in environments.yaml?
[15:15] <TREllis_> ah yeah that :)
[15:15] <hazmat> it should be autodetecting the value for juju-origin
[15:15] <hazmat> the correct value that is
[15:16] <TREllis_> hazmat: ah ok, right, I had it set manually to distro. My bad
[15:17] <TREllis> hazmat: thanks!
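The `juju-origin` setting hazmat mentions goes in `~/.juju/environments.yaml`; a minimal sketch (the environment name and `type` are illustrative, only the `juju-origin: ppa` line is the fix under discussion):

```yaml
environments:
  sample:              # illustrative environment name
    type: local        # illustrative; use your provider type
    juju-origin: ppa   # launched machines install juju from the PPA
```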
[15:25] <cliff-hm> jcastro, is the blog post published? If so, url?
[15:25] <jcastro> http://cloud.ubuntu.com/2012/02/juju-charm-contest-help-bring-free-software-into-the-cloud/
[15:25] <jcastro> https://juju.ubuntu.com/CharmContest
[15:25] <jcastro> complete rules there
[15:27] <TREllis> hazmat: yep "state: started" that was it - thanks!
[15:28] <cliff-hm> thank you :)
[15:31] <m_3> gary_poster: thanks, will do
[15:32] <SpamapS> hazmat: IMO, this is somewhat madness.. determining where to get juju from where juju was installed from. I think its time to start looking at having bootstrap freeze and distribute the version of juju on the client in file storage.
[15:32] <gary_poster> thank you
[15:33] <m_3> jcastro: whoohoo!
[15:33] <jcastro> heya m_3, no pressure on that summit charm, hahah
[15:33] <m_3> jcastro: unhuh... :)
[15:35] <hazmat> SpamapS, i don't think its madness, but distributing juju via provider storage has a lot of merit as well for cross distro release environment upgrades
[15:36] <jcastro> http://news.ycombinator.com/item?id=3625153
[15:36] <jcastro> shooting for my first submission to HN!
[15:38] <SpamapS> hazmat: well at least if we do use apt to try and determine where juju was installed from, we should pin the version
[15:38] <hazmat> SpamapS, and if that version isn't avail anymore?
[15:38] <SpamapS> hazmat: you get a clear failure
[15:38] <SpamapS> instead of this "oops something weird went wrong"
[15:39] <SpamapS> ERROR: your client version is not available in your specified juju-origin (ppa)
[15:39] <hazmat> SpamapS, you don't though.. you just get a started machine without a running agent.. this applies to the provisioning agent launching machines for new service units as well
[15:39] <SpamapS> hazmat: no I want the client to ERROR
[15:39] <hazmat> i guess that doesn't apply if its already in storage
[15:40] <marcoceppi> Upvoted!
[15:40] <hazmat> but for apt cases it does
[15:40] <SpamapS> hazmat: right... so much simpler if we make juju responsible for deploying itself
[15:43] <SpamapS> hazmat: I acknowledge that this is *not* a tiny or super simple change... but I think its going to be necessary.
[15:44] <hazmat> SpamapS, that's not making 12.04..
[15:45] <SpamapS> oh snap... I just saw an ad for season 2 of game of thrones and got goose bumps
[15:45] <SpamapS> <--- nerd
[15:46] <SpamapS> hazmat: 12.04 is *supposed* to be done.
[15:46] <SpamapS> hazmat: feature freeze was a week ago. The release team can tell us to go to hell at this point. ;)
[17:09] <charmbot> Project oneiric-ec2-charm-bitlbee build #1: FAILURE in 1.3 sec: http://ec2-67-202-62-13.compute-1.amazonaws.com:8080/job/oneiric-ec2-charm-bitlbee/1/
[17:10] <charmbot> Project oneiric-local-charm-bitlbee build #1: FAILURE in 1.8 sec: http://ec2-67-202-62-13.compute-1.amazonaws.com:8080/job/oneiric-local-charm-bitlbee/1/
[17:23] <jcastro> robbiew: I love how no one noticed that mmims shaved the mountain man face.
[17:23] <robbiew> lol...eh...I've seen him both ways....now if he shaved HIS HEAD, I'd notice
[17:24] <jamespage> m_3: love the charmbot!
[17:26] <m_3> jamespage: ha!  I was just looking at the time to see if I could catch you
[17:26] <jamespage> m_3, not for long...
[17:26] <m_3> I'm getting build-publisher login barfs... let's go PM
[17:30] <m_3> BTW, there'll be some noise in the channel for a bit while testing tests
[17:30] <charmbot> Project oneiric-ec2-charm-bitlbee build #2: STILL FAILING in 0.88 sec: http://ec2-67-202-62-13.compute-1.amazonaws.com:8080/job/oneiric-ec2-charm-bitlbee/2/
[17:36] <jiltdil> Please tell me  what is charm contest?
[17:36] <m_3> jiltdil: https://juju.ubuntu.com/CharmContest
[17:37] <jiltdil> m_3,  Thanks
[17:39] <m_3> sure thing
[17:40] <jiltdil> ok
[17:50] <charmbot> Project oneiric-ec2-charm-bitlbee build #4: STILL FAILING in 0.36 sec: https://jenkins.qa.ubuntu.com/job/oneiric-ec2-charm-bitlbee/4/
[17:54] <SpamapS> charmbot: YOU ARE SO AWESOME!
[17:54] <charmbot> SpamapS you may not issue bot commands in this chat!
[17:55] <SpamapS> tho.. a bit cold
[17:56] <m_3> SpamapS: :)
[17:56] <m_3> SpamapS: we can enable kick-offs through the bot too
[17:57] <SpamapS> "...we are being robbed of an art form as ancient as computers themselves."
[17:57] <SpamapS> http://tatiyants.com/devops-is-ruining-my-craft/
[17:57] <SpamapS> a must read for anyone
[18:00] <m_3> nice
[18:06] <koolhead17> gosh we have a bot here now in channel?
[18:07] <m_3> koolhead17: yup, another one... mup's for bugs, charmbot's for tests
[18:07]  * koolhead17 says hi to charmbot
[18:07] <koolhead17> m_3: cool!!
[18:08] <m_3> the links are still not live, but they should be tomorrow europe-time
[18:45] <m_3> hazmat: frickin love snapshot restore!!
[18:46] <hazmat> m_3, nice!
[18:47] <hazmat> m_3, i was figuring we could make some minor additions to the loader to have it work against status output.. that would also be a nice tool
[18:48] <hazmat> m_3, the only major limitation i can think of on the snapshot restore, is if you start having relations between services created after the snapshot and those before, then it breaks isolation
[18:49] <m_3> hazmat: and sort of an impl of stacks :)
[18:49] <m_3> right
[18:49] <hazmat> m_3, i don't know what those are ;-)
[18:49] <m_3> gotta snapshot early in the process
[18:50] <hazmat> m_3, yeah.. it probably makes sense to have a reset option just to go back to pristine..
[18:50] <m_3> but that really does smell like a set of staged commits... that can be rolled back even
[18:50] <hazmat> which is actually the more common usage for testing
[18:50] <m_3> hazmat: true
[18:50] <m_3> you want MPs or you wanna make it owned by a group instead of just you?
[18:51] <m_3> (path issues)
[18:51] <m_3> loaders trying to read plans from a different place than the planner put them
[18:52] <m_3> I can split it out into separate commands... but...
[19:28] <hazmat> jamespage, those charms are looking pretty good
[19:28]  * hazmat pokes out at the hbase and zk charms
[19:29] <hazmat> jamespage, just curious if you were going to split out the hbase into hdfs nodes as well (which is already extant i thought).. or does it make more sense to just run those inline for locality
[19:29] <hazmat> m_3, mps
[19:30] <hazmat> m_3, or a group is fine..
[19:30] <hazmat> m_3, any suggestions on group?
[19:30] <hazmat> maybe juju-jitsu
[19:31] <jamespage> hazmat: I was pondering the same thing myself (splitting the hdfs nodes)
[19:31] <jamespage> that way they could also be shared with mapreduce.
[19:32] <jamespage> hazmat: rolling restarts of the hbase-slave nodes is not great ATM - I'm working around the issue with a staggered sleep to stop them losing the ROOT and META regions.
[19:33] <hazmat> jamespage, hmm.. yeah.. we need a good recipe for doing that with peer relations to coordinate a rolling op
[19:40] <m_3> hazmat: average cluster size is 140 datanodes... goes up to thousands... peer relations would just explode
[19:40] <m_3> hazmat: juju-jitsu's fine for charmrunner
[19:43] <jamespage> m_3: I think thats where its important to break a cluster up into multiple service instances
[19:43] <jamespage> datacluster1, datacluster2 etc...
[19:43] <m_3> right
[19:43] <jamespage> they can all still talk to one master service
[19:43] <m_3> exactly... that should scale nicely
[19:43] <m_3> even to thousands of nodes
[19:43] <jamespage> but it allows you to manage them as discrete regions in a deployment
[19:44] <m_3> and roll
[19:45] <m_3> not sure what the price would be as far as data integrity...
[19:45] <m_3> first thought is to shard across these datasets,
[19:45] <m_3> but then that wouldn't really support a rolling upgrade
[19:45] <m_3> we need mirroring too
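jamespage's pattern of multiple service instances sharing one master translates into deploy commands like these (a dry-run sketch, echoed rather than executed; the `hadoop-master`/`hadoop-slave` charm names appear in the conversation above, but the exact invocations are an assumption):

```shell
# Dry run: print the commands for two separately managed dataclusters,
# each deployed from the same charm and related to a single master.
for i in 1 2; do
  echo "juju deploy hadoop-slave datacluster$i"
  echo "juju add-relation hadoop-master datacluster$i"
done
```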
[20:03] <hazmat> m_3, explode is a strong word.. they'd work fine, but they'd be chatty, but if you're talking about lock step distributed coordination.. it is what it is.
[20:03] <m_3> hazmat: :)
[20:04] <hazmat> m_3, it might be a little nicer with an explicit leader election which would be a lot less chatty.. but even then we'd need to communicate that and allow for the next leader, which would again be via peer rel data
[20:06] <m_3> hazmat: would love to see that realized with a single leader
[20:06] <m_3> or ideally cascading groups of local leaders
[20:07] <m_3> but it gets pretty big pretty quick if pure peers have to talk to everybody else
[20:10] <m_3> hazmat: the good news is that there's nice global seqn number that's ordered and easily divided
[20:11]  * m_3 flashbacks to PE classes... duck duck goose
[20:34] <hazmat> m_3, that works for a more static leader based on the presence/availability of a node, but for a transient leader, effectively a lock around a rolling operation, feels like it wants for a different primitive.
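A common low-tech stand-in for the primitive hazmat wants is to treat the lowest-numbered unit in the peer relation as the leader of the rolling operation. A self-contained sketch: in a real hook the unit list would come from `relation-list` and the unit's own name from `$JUJU_UNIT_NAME`, both stubbed here, and the 30-second stagger echoes jamespage's staggered-sleep workaround:

```shell
# Stub data standing in for `relation-list` output and $JUJU_UNIT_NAME.
UNITS="hbase-slave/3
hbase-slave/0
hbase-slave/7"
ME="hbase-slave/3"

# Leader = the unit with the lowest unit number.
LEADER=$(printf '%s\n' "$UNITS" | sort -t/ -k2 -n | head -n1)

if [ "$ME" = "$LEADER" ]; then
  echo "leader: coordinate the rolling restart"
else
  # Followers stagger by unit number so restarts don't overlap.
  NUM=${ME#*/}
  echo "sleeping $((NUM * 30))s before restart"   # prints: sleeping 90s before restart
fi
```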
[20:40] <charmbot> Yippie, build fixed!
[20:40] <charmbot> Project oneiric-local-charm-bitlbee build #2: FIXED in 42 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-bitlbee/2/
[20:47] <m_3> whoohoo!
[20:50] <charmbot> Project oneiric-local-charm-byobu-classroom build #1: FAILURE in 6.5 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-byobu-classroom/1/
[20:50] <charmbot> Project oneiric-local-charm-cassandra build #1: FAILURE in 6.3 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-cassandra/1/
[20:50] <charmbot> Project oneiric-local-charm-ceph build #1: FAILURE in 6.3 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-ceph/1/
[20:50] <charmbot> Project oneiric-local-charm-cf-mongodb build #1: FAILURE in 6.2 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-cf-mongodb/1/
[20:50] <charmbot> Project oneiric-local-charm-cf-mysql build #1: FAILURE in 6.2 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-cf-mysql/1/
[20:50] <charmbot> Project oneiric-local-charm-cf-redis build #1: FAILURE in 6.2 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-cf-redis/1/
[20:50] <charmbot> Project oneiric-local-charm-cloudfoundry-server build #1: FAILURE in 6.7 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-cloudfoundry-server/1/
[20:50] <charmbot> Project oneiric-local-charm-cloudfoundry-server-dea build #1: FAILURE in 6.4 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-cloudfoundry-server-dea/1/
[20:50] <charmbot> Project oneiric-local-charm-couchbase build #1: FAILURE in 6.4 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-couchbase/1/
[20:50] <charmbot> Project oneiric-local-charm-couchdb build #1: FAILURE in 6.1 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-couchdb/1/
[20:51] <charmbot> Project oneiric-local-charm-etherpad-lite build #1: FAILURE in 6.4 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-etherpad-lite/1/
[20:51] <charmbot> Project oneiric-local-charm-ganglia build #1: FAILURE in 6.5 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-ganglia/1/
[20:51] <charmbot> Project oneiric-local-charm-hadoop-mapreduce build #1: FAILURE in 6.5 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-hadoop-mapreduce/1/
[20:51] <charmbot> Project oneiric-local-charm-hadoop-master build #1: FAILURE in 6.7 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-hadoop-master/1/
[20:51] <charmbot> Project oneiric-local-charm-hadoop-slave build #1: FAILURE in 6.2 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-hadoop-slave/1/
[20:51] <charmbot> Project oneiric-local-charm-haproxy build #1: FAILURE in 6.2 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-haproxy/1/
[20:51] <charmbot> Project oneiric-local-charm-jenkins build #1: FAILURE in 6.2 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-jenkins/1/
[20:51] <charmbot> Project oneiric-local-charm-jenkins-slave build #1: FAILURE in 6.2 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-jenkins-slave/1/
[20:51] <charmbot> Project oneiric-local-charm-limesurvey build #1: FAILURE in 6.2 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-limesurvey/1/
[20:52] <charmbot> Project oneiric-local-charm-mediawiki build #1: FAILURE in 6.7 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-mediawiki/1/
[20:52] <charmbot> Project oneiric-local-charm-membase build #1: FAILURE in 6.4 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-membase/1/
[20:52] <charmbot> Project oneiric-local-charm-memcached build #1: FAILURE in 6.4 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-memcached/1/
[20:52] <m_3> hmmm... these'll clear through in a sec... sorry gang
[20:52] <charmbot> Project oneiric-local-charm-minecraft build #1: FAILURE in 6.2 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-minecraft/1/
[20:52] <charmbot> Project oneiric-local-charm-mongodb build #1: FAILURE in 6.2 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-mongodb/1/
[20:52] <charmbot> Project oneiric-local-charm-munin build #1: FAILURE in 6.3 sec: https://jenkins.qa.ubuntu.com/job/oneiric-local-charm-munin/1/
[20:54] <m_3> I'll move them out to ##charmbot-test while debugging
[21:07] <jimbaker>  bcsaller, can you take a look at https://code.launchpad.net/~hazmat/juju/env-from-env/+merge/93321 ?
[21:07] <jimbaker> it's really trivial, and will help m_3 help out on his charm testing
[21:07] <bcsaller> looking
[21:10] <bcsaller> approved
[21:10] <jimbaker> bcsaller, thanks!
[21:20] <hazmat> m_3, yeah.. not sure that bot should ever be round these parts
[21:20] <hazmat> its rather distracting ;-)
[21:28] <m_3> hazmat: yup... any suggestions for where it should live?
[21:29] <m_3> we need ways to notify charm authors when stuff is breaking
[21:31] <hazmat> m_3, i would think email for the charm author, and either a secondary mailing list or irc channel
[21:32] <hazmat> m_3, it may not always be valid or available, but the bzr commits do contain some identity information, its somewhat freeform, but 98% of the time it appears to be a valid email, minus some of yours ;-)
[21:33] <hazmat> m_3, better would be an explicit email/maintainer field for a charm
[21:58] <SpamapS> I actually do think the bot should be here... but only notify us of changes in state after the tests are stable.
[21:58] <SpamapS> hazmat: I like having it here. This is really where users and charmers go to discuss charms. Raises awareness.
[22:01] <hazmat> SpamapS, stable means not spamming the channel?
[22:02] <SpamapS> hazmat: yeah I think that is more a product of the tests still being very new.
[22:02] <SpamapS> once they're passing reliably over and over and over, having them here is a great way to raise awareness of their existence.
[22:09] <m_3> gotcha... well I'll leave them in ##charmbot-test until stable... then move them back
[22:23] <jcastro> SpamapS: we need to talk slides for our webinar tomorrow
[22:23] <jcastro> let me rephrase, we should talk tomorrow about our slides, before the webinar rehearsal on tuesday
[22:24] <SpamapS> jcastro: thank you, heart attach averted
[22:24] <SpamapS> attack even
[22:27] <SpamapS> hmm
[22:34] <koolhead17> jcastro: around
[22:35] <jcastro> yeah for a few minutes
[22:39] <koolhead17> jcastro: http://atuljha.com/blog/2012/02/24/newbies-guide-to-joining-cloud-ecosystem/
[22:39] <koolhead17> what do u think of this :P
[22:49] <SpamapS> all: I'd appreciate a final review/+1 for the tests spec here: https://codereview.appspot.com/5624044/diff/7001/source/charm-tests.rst
[23:15] <SpamapS> Hmm
[23:16] <SpamapS> hazmat: was looking at your 'env-from-env' branch, and thinking, what about JUJU_REPOSITORY to allow setting the repository in the env too? That would basically eliminate the need for a wrapper to run the tests (assuming we can also override the default charm namespace in a similar fashion)
[23:36] <hazmat> SpamapS yeah... that seems pretty reasonable
[23:37] <hazmat> SpamapS, i think we had discussed with niemeyer_ possibly sticking that in environments.yaml
[23:37] <hazmat> SpamapS, i don't know that would remove the need for the wrapper
[23:37] <hazmat> SpamapS, since the wrapper is still obviating the need for specifying the explicit namespace on the cli.. ie local:charm
[23:41] <SpamapS> hazmat: I also want to add a JUJU_CHARM_NAMESPACE too :)
[23:45] <hazmat> SpamapS, fail ;-)
[23:45] <SpamapS> hazmat: that way the tests are really just running with a cultivated environment, rather than some special wrapper
[23:46] <hazmat> SpamapS, yeah.. that one is a little more odd, i'd really have to defer to the tastemaster
[23:46] <SpamapS> lol
[23:46] <hazmat> SpamapS, but wouldn't you have the wrapper even with that for the extra commands
[23:47] <SpamapS> hazmat: removed the extra commands for now
[23:47] <hazmat> which i still think should be a different command to avoid confusion.
[23:47] <hazmat> SpamapS, cool
[23:47] <hazmat> SpamapS, what about using environment fixtures for loading
[23:48] <hazmat> i guess its equiv to the cli.. but it would give a more defined data structure for us to manipulate the cultured environment
[23:48] <SpamapS> I think both should be possible, but doing it in a static file is not helpful to automated testing... just increases the crap we have to cleanup/edit on tests.
[23:49] <hazmat> SpamapS, yeah.. its a little too manicured for ad hoc test scenario construction
[23:50] <SpamapS> hazmat: eventually repository definitely should be in ~/.juju somewhere .. so people can just share that dir or environments.yaml and get the same result
[23:51] <hazmat> SpamapS, that seems a bit like arbitrary structure
[23:52] <SpamapS> hazmat: ESHORTHAND .. what?
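`JUJU_REPOSITORY` was only a proposal at this point in the discussion, not an implemented flag; a sketch of the resolution order a test harness might use (everything here, including the helper name and default path, is hypothetical illustration):

```shell
# Hypothetical lookup order: explicit argument, then the proposed
# JUJU_REPOSITORY environment variable, then a default path.
repo_path() {
  if [ -n "$1" ]; then echo "$1"
  elif [ -n "$JUJU_REPOSITORY" ]; then echo "$JUJU_REPOSITORY"
  else echo "$HOME/charms"
  fi
}

JUJU_REPOSITORY=/tmp/test-repo
repo_path                # prints /tmp/test-repo
repo_path /explicit/dir  # the explicit argument wins
```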