[14:50] <timrc> Does the config option need to be defined in a yaml file before it can be settable?  I'm charming a service with nested configuration options.  I have a "channels" config option which is a comma-delimited list encapsulated in a string, but for each channel I would like to have a "<channel>_devices" config option which encapsulates a comma-delimited list inside a string
[14:51] <mgz> timrc: yes
[14:52] <mgz> everything should be in config.yaml so you can't have pseudo-lists
[14:53] <timrc> mgz, Can I specify multiple config yamls? (e.g. config.yaml has the base config and then a local.yaml which is not part of the charm but is used to deploy a specific instance/set of instances)
[14:53] <timrc> er as the base config*
[14:53] <mgz> no.
[14:54] <mgz> you can just be cleverer about what you're packing into your string
[14:54] <timrc> mgz, ugh
[14:55] <mgz> there might be a neater way of doing what you're attempting, that some experienced charmer could explain if you give the whole problem
[14:59] <timrc> mgz, I'm just going to use a ConfigParser-type config file, point to it in the config.yaml, and have my config-changed script read and Do The Right Thing (tm)
[15:00] <mgz> timrc: config.yaml just describes the config
[15:00] <mgz> your charm needs to get the values out when hooks run using the juju commands
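Putting mgz's advice together: every packed option still has to be declared in config.yaml, and the hook unpacks the strings itself at run time. A minimal sketch of the unpacking step (the option scheme is timrc's; the helper function itself is illustrative, operating on a dict of already-fetched config values):

```python
def parse_channels(config):
    """Unpack timrc's pseudo-list scheme from a dict of raw config strings.

    'channels' is a comma-delimited string; each '<channel>_devices'
    option is another comma-delimited string. Returns a dict mapping
    each channel name to its list of devices.
    """
    channels = [c.strip() for c in config.get('channels', '').split(',')
                if c.strip()]
    return {
        ch: [d.strip()
             for d in config.get('%s_devices' % ch, '').split(',')
             if d.strip()]
        for ch in channels
    }
```

In a real config-changed hook the `config` dict would come from the juju config-get command rather than being passed in directly.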
[15:16] <lazypower> marcoceppi: i'm getting a key error on admin-secret, i'm in the local environment for juju, and the trace is: http://paste.ubuntu.com/6750920/
[15:17] <lazypower> this is regarding an amulet test - btw. not juju core, i should specify that.
[15:17] <marcoceppi> lazypower: yeah, this is a deployer error, do you have an admin-secret key set for your local environment?
[15:18] <lazypower> i don't want to answer that...
[15:18] <lazypower> i do not (facepalm)
[15:19] <marcoceppi> admin-secret is used by deployer because it's the password for gui
[15:19] <marcoceppi> that's why deployer is failing, since amulet uses deployer to drive deployments you have your answer
[15:19]  * marcoceppi makes note to make deployer failures nicer
[15:19] <lazypower> That i knew. My environments.yaml has changed from what i remember it being, or I'm thinking of my juju lab. 1 sec while i run this down.
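For reference, the fix marcoceppi describes is a local-provider entry in environments.yaml with an admin-secret key, which deployer (and therefore amulet) uses as the GUI/API password. A minimal sketch (the password value is obviously a placeholder):

```yaml
environments:
  local:
    type: local
    admin-secret: some-strong-password  # deployer uses this as the gui password
```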
[15:23] <jcastro> man, what happened to the queue
[15:23] <jcastro> we were doing so well!
[15:27] <marcoceppi> jcastro: all in due time my friend
[16:09] <sinzui> marcoceppi, jcastro: How do I bootstrap juju on trusty. I know how to make a charm go to trusty, but not bootstrap the state-server on trusty
[16:14] <jcastro> the bootstrap node is trusty
[16:17] <lazypower> man I think i just hosed my juju installation :| I've got a leftover in here somewhere after downgrading from 1.17 that's breaking my local provider
[16:17] <lazypower> ERROR TLS handshake failed: x509: certificate signed by unknown authority
[16:21] <marcoceppi> lazypower: delete ~/.juju/environments/local.jenv
[16:21] <marcoceppi> sinzui: default-series: trusty?
[16:22] <marcoceppi> I've never tried to make a trusty bootstrap node
[16:25] <sinzui> marcoceppi, yeah, that is the only way I know too
[16:25] <marcoceppi> sinzui: did it not work?
[16:26] <sinzui> marcoceppi, we haven't checked yet. actually, I do know that method works. We wanted to use an existing env in yaml, but just change the series
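The approach marcoceppi suggests above is a one-line change to the environment's entry in environments.yaml; sinzui confirms it is the only known method. A sketch (the environment name and provider type here are hypothetical; only the default-series line is the actual change):

```yaml
environments:
  my-env:
    type: amazon            # hypothetical provider; rest of the env config elided
    default-series: trusty  # bootstrap node and default deploys use trusty
```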
[17:25] <arosales> sinzui, does bug 1254401 only affect 1.17?
[17:25] <_mup_> Bug #1254401: error reading from streams.canonical.com <bootstrap> <juju-core:In Progress by wallyworld> <https://launchpad.net/bugs/1254401>
[17:25] <arosales> sinzui, specifically, is 1.16 not affected
[17:26] <sinzui> arosales, it affects 1.16+ it will be fixed when we deploy to the site
[17:27] <arosales> sinzui, so all deploys are dead in the water this whole week?
[17:27] <sinzui> arosales, there is nothing for a developer to do, so the bug cannot be in progress, or if it is, utlemming is working on it
[17:27] <sinzui> arosales, the site has never been up
[17:27] <arosales> I am just wondering if folks know the severity
[17:27] <sinzui> arosales, the bug was filed months ago
[17:27] <arosales> sinzui, understood, but 1.16 worked before ...
[17:28] <sinzui> arosales, juju always tries the tools-url, then streams.canonical.com, then aws
[17:28] <sinzui> oh, it looks in the control bucket first
[17:29] <arosales> sinzui, oh so deploys should still work after streams.c.c fails
[17:30] <arosales> sinzui, sorry I thought deploys were broken all together
[17:30] <sinzui> arosales, CI has never failed because the site doesn't exist
[17:30] <arosales> sinzui, whew
[17:30] <arosales> sinzui, I understand now, thanks for clearing that up.
[17:30] <sinzui> arosales, logs show a lot of hate, then juju comes to terms with the issue and moves on
[17:31] <arosales> sinzui, gotcha and thanks for the education
[17:32] <arosales> mbruzek, note your hp deploy should succeed after the errors ^
[17:33] <mbruzek> right according to the workaround in the bug 1254401 I was able to get this to work with the following
[17:33] <_mup_> Bug #1254401: error reading from streams.canonical.com <bootstrap> <juju-core:In Progress by wallyworld> <https://launchpad.net/bugs/1254401>
[17:33] <mbruzek> juju bootstrap -e hpcloud --upload-tools
[17:37] <sinzui> mbruzek, no need to do that
[17:37] <mbruzek> no?
[17:38] <mbruzek> juju bootstrap -e hpcloud ended in Error the first time I tried it
[17:38] <mbruzek> juju sync-tools didn't work either.  Same error
[17:39] <sinzui> mbruzek, each CPC has a local copy of tools. Looks like Juju doesn't check for the CPC tools, but you can tell it to
[17:39] <sinzui> mbruzek, ^ add tools-url: https://region-a.geo-1.objects.hpcloudsvc.com/v1/60502529753910/juju-dist/tools
[17:39] <mbruzek> I was only able to get it to work with the --upload-tools
[17:39] <sinzui> mbruzek, I understand that feature will be removed in the future, but will work for a few more months. it is often abused
[17:40] <mbruzek> OK
[17:40] <sinzui> mbruzek, tools-url will be renamed tools-metadata-url. It will always do the right thing, and do it quickly
[17:41] <mbruzek> OK so add tools-metadata-url: https://... to my environments.yaml file under hpcloud?  Or tools-url?  This is for trusty and 1.17?
[17:44] <mbruzek> adding tools-url
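Following sinzui's suggestion, the hpcloud entry in environments.yaml gains a tools-url line pointing at HP Cloud's local copy of the tools (the URL is the one sinzui gave above; the provider type shown is an assumption, and the rest of the entry is elided):

```yaml
environments:
  hpcloud:
    type: openstack  # hypothetical; remaining hpcloud settings elided
    tools-url: https://region-a.geo-1.objects.hpcloudsvc.com/v1/60502529753910/juju-dist/tools
```

Per sinzui, the key is due to be renamed tools-metadata-url in a later release.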
[18:00] <jcastro> mbruzek, hey so devel release questions will be closed on askubuntu, things like that where stuff is fast moving should be on irc or the mailing list.
[18:01] <mbruzek> I was trying to document this for others, and build rep.
[18:02] <jcastro> yeah that won't work. :)
[18:08] <lazypower> jcastro: should this be removed then? http://askubuntu.com/questions/405472/how-can-i-downgrade-my-juju-revision
[18:08] <jcastro> I would drop the specific versions
[18:08] <jcastro> and make it more generic
[18:08] <jcastro> "How do I revert from a development version of Juju to a production one?"
[18:10] <lazypower> Thanks. Completed.
[18:53] <marcoceppi> hey, juju-core guys, what does JUJU_HOME expect as a value? With or without the .juju directory? natefinch?
[18:55] <natefinch> marcoceppi: with
[18:55] <marcoceppi> natefinch: ta!
[18:55] <natefinch> marcoceppi: /home/foo/.juju is the default value for JUJU_HOME
[18:56] <marcoceppi> natefinch: that's what I was looking for, awesome
[18:56] <natefinch> i.e., if you set it, we don't put a .juju folder in the folder we specify, we just dump data directly in that directory
[18:57] <natefinch> s/we specify/you specify
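natefinch's answer, as a quick sketch: JUJU_HOME, when set, is itself the data directory (juju does not append a .juju folder to it), and the default is ~/.juju. A helper mirroring that resolution logic:

```python
import os


def juju_home():
    """Resolve the juju data directory the way juju-core does.

    If JUJU_HOME is set, that path IS the data directory (no extra
    .juju component is appended); otherwise fall back to ~/.juju.
    """
    return os.environ.get('JUJU_HOME') or os.path.expanduser('~/.juju')
```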
[19:40] <marcoceppi> mbruzek: I couldn't get hpcloud to bootstrap with tools-url, were you able to get a bootstrap?
[19:40] <mbruzek> Yes only with the --upload-tools
[19:41] <mbruzek> It was already running from that, so my tools-url is untested
[19:42] <marcoceppi> mbruzek: nvm, I got it working had to remove the .jenv file
[19:43] <jcastro> marcoceppi, the postgres review wasn't as heavy as I thought it would be
[19:43] <jcastro> so I just pushed
[19:43]  * jcastro cowboys
[19:44] <marcoceppi> jcastro: yeah, stub does a great job maintaining the charm already
[19:51] <marcoceppi> _thumper_: did the default behavior of destroy-environment change from using a -e flag to just a parameter?
[19:51] <marcoceppi> I could have sworn it used to be -e flag
[19:51] <thumper> marcoceppi: yes
[19:51] <thumper> marcoceppi: the name of the environment is now always needed
[19:51] <marcoceppi> thumper: y u do dis
[19:51] <thumper> marcoceppi: no, nate
[19:52] <thumper> marcoceppi: primary reason was to make someone type 'juju destroy-environment production'
[19:52] <marcoceppi> and just like that, half of my scripts broke because -e is no longer a valid flag
[19:52] <thumper> instead of just getting the name from the environment
[19:52] <marcoceppi> https://juju.ubuntu.com/docs/charms-destroy.html#destroy-environments needs updating too
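For scripts broken by this change, the fix is mechanical: the environment name moves from the -e flag to a required positional argument. A small sketch of building the new-style invocation (the helper name is illustrative):

```python
def destroy_environment_argv(env_name):
    """Build the 1.17-style destroy-environment command line.

    Old form:  juju destroy-environment -e <name>
    New form:  juju destroy-environment <name>   (name is mandatory,
    forcing the user to type it explicitly, e.g. 'production')
    """
    return ['juju', 'destroy-environment', env_name]
```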
[19:53] <marcoceppi> thumper: when did this change land? or is it a 1.17 thing?
[19:53] <marcoceppi> I can't recall this being an issue in 1.16
[19:53] <thumper> marcoceppi: I think so
[19:54] <marcoceppi> ohhh, this will be tricky
[19:54] <marcoceppi> can I file a regression that -e should be valid with no parameters passed for env?
[19:54] <marcoceppi> thumper: as in, do you think that would be addressed by next stable?
[19:55] <marcoceppi> or, is it even possible to do
[19:55] <thumper> anything is possible, just not simple
[19:55] <marcoceppi> is it simple* to do?
[19:55] <thumper> no
[19:55] <thumper> well... kinda
[19:56] <thumper> marcoceppi: feel free to file the bug, and we can discuss it
[19:56] <marcoceppi> thumper: Okay, I'll file it as a bug then and hope it gets prioritized for a 1.18 cut
[19:56] <marcoceppi> thumper: thanks o/
[19:56] <thumper> np
[20:01] <_mup_> Bug #1269119 was filed: destroy-environment no longer accepts an -e flag <pyjuju:New> <https://launchpad.net/bugs/1269119>
[20:02] <marcoceppi> damnit, wrong project
[20:17] <lazypower> maxcan: you around?
[20:18] <maxcan> I am
[20:18] <maxcan> whats up?
[20:18] <maxcan> your question has the horrifying implication that I'm considered someone who knows something about juju
[20:21]  * lazypower grins
[20:22] <lazypower> You're doing docker + juju "loose integration" in your production stack arent you?
[20:22] <maxcan> what do you mean by "loose"
[20:23] <maxcan> we have some charms where we use a binary that we "ship" as docker images
[20:23] <maxcan> the charm downloads and runs that docker image
[20:23] <maxcan> if thats what you mean
[20:23] <maxcan> i've thought about writing a generic docker image charm
[20:24] <maxcan> but it wouldn't work for us since we require authentication to download the charm etc
[20:24] <mbruzek> Writing an amulet deploy test, I got an error when trying to create a relation
[20:24] <mbruzek> ValueError: Can not relate, service not deployed yet
[20:25] <marcoceppi> mbruzek: can you pastebin the test?
[20:25] <lazypower> maxcan: ah ok. I thought i saw environment exports for the config and relation changes
[20:26] <lazypower> mind you that was what, a week ago at 11pm EST :)
[20:26] <mbruzek> http://pastebin.ubuntu.com/6752404/
[20:26] <mbruzek> marcoceppi, ^
[20:26] <marcoceppi> mbruzek: you reference rabbitmq but you deploy rabbitmq-server
[20:27] <maxcan> lazypower: you mean on the MMS charm?
[20:27] <mbruzek> Thanks marcoceppi
[20:27] <marcoceppi> mbruzek: either change the add line to d.add('rabbitmq-server', 'rabbitmq') or change the d.relate to use the rabbitmq-server name
[20:27] <maxcan> lazypower: oh, for our main charm?  yes, we export env vars into the docker container with -e
[20:27] <mbruzek> If I had changed the name on the deploy that would have worked
[20:27] <maxcan> s/main/docmunch-yesod
[20:28] <marcoceppi> mbruzek: actually, it's d.add('rabbitmq', charm='rabbitmq-server')
[20:28] <marcoceppi> sorry about that
[20:28]  * marcoceppi needs to read his own docs
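marcoceppi's corrected form, as a sketch: the first argument to d.add is the service alias used in d.relate, and the charm keyword names the actual charm when the two differ. A minimal amulet test skeleton under those assumptions (the service under test and its amqp relation name are hypothetical; this requires a bootstrapped environment to run):

```python
import amulet

d = amulet.Deployment(series='precise')
# alias 'rabbitmq' is backed by the 'rabbitmq-server' charm
d.add('rabbitmq', charm='rabbitmq-server')
d.add('my-service')  # hypothetical charm under test
# relate using the alias, not the charm name
d.relate('my-service:amqp', 'rabbitmq:amqp')
d.setup()
```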
[20:31] <maxcan> lazypower: i was about to step out for lunch, anything i can help with before I go?
[20:32] <lazypower> That was it my man. I had a cursory question about your docker usage with juju
[20:32] <lazypower> Thanks!
[20:32] <maxcan> np
[20:43] <marcoceppi> thumper: the output a user gets during bootstrap,that's you, right?
[20:43] <thumper> marcoceppi: as in, did I do that?
[20:43] <thumper> no, that was axw
[20:43] <marcoceppi> yeah
[20:43] <marcoceppi> aw, next time I see him, let him know it's really nice
[20:43] <marcoceppi> don't have to scratch my head anymore
[20:44] <thumper> \o/
[21:03] <dpb1> marcoceppi: hey, did you get a chance to review that storage charm? :)
[21:06] <lazypower> dpb1: he did, he's pending a paper writeup for you.
[21:06] <dpb1> oh no
[21:07] <dpb1> lazypower: I responded to your review comments on the swap charm, thx. :)
[21:07] <lazypower> and its not "oh no" :) you're doing some interesting new stuff in there
[21:08] <dpb1> lazypower: yes, agreed.  We are eagerly anticipating the results.
[21:10] <lazypower> Thank you for submitting the swap fixes so quickly!
[21:10] <lazypower> I'll try to get that re-reviewed by tomorrow morning, bit backed up atm
[21:11] <dpb1> lazypower: no worries, that one is pretty simple.
[21:23] <mbruzek> OK I have a new amulet test written (http://pastebin.ubuntu.com/6752656/) with the relations corrected.  When I run the test I see it getting started on the hpcloud but I run into this error.
[21:23] <mbruzek> AttributeError: 'module' object has no attribute 'CalledProcess'
[21:24] <marcoceppi> mbruzek: can you provide the full trace?
[21:25] <mbruzek> http://pastebin.ubuntu.com/6752675/
[21:25] <mbruzek> my admin-secret is not setup correctly?
[21:27] <mbruzek> We need that for the local one, but I set this environment up for hpcloud where juju uses the keypair
[21:27] <mbruzek> So set something for admin-secret and do I need to rebootstrap?
[21:27] <marcoceppi> mbruzek: all environments need an admin secret
[21:28] <marcoceppi> yes
[21:28] <mbruzek> OK