[07:54] <erkules> Is there a way to manage lxc containers on different hosts? afaik local is on one machine only
[10:48] <dpb1> What am I missing here? http://paste.ubuntu.com/7043455/
[11:21] <gnuoy`> Hi I'm using 1.16.5-0ubuntu1~0.IS.12.04 and I cannot coerce juju into honouring machine constraints.
[11:21] <gnuoy`> juju add-machine --constraints="cpu-cores=2 mem=5120M root-disk=51200M"
[11:21] <gnuoy`> nova show d80fe489-1431-4615-bc86-0f4d0f1af0d4 | grep flavor
[11:21] <gnuoy`> | flavor                 | cpu2-ram2-disk50-ephemeral20 (1114)                                               |
[11:23] <davecheney> gnuoy`: so it's chosen the flavor based on the root disk size, not the memory ?
[11:23] <gnuoy`> davecheney, Hi, I'm only really interested in memory, so I can drop the root-disk and retry
[11:24] <davecheney> gnuoy`: that was going to be my suggestion
[11:25] <mgz> gnuoy`: I'm looking at something like this atm
[11:25] <mgz> gnuoy`: please do retry with just the mem= constraint
[11:25] <gnuoy`> It looks like it's still given me 2GB of RAM
[11:26] <gnuoy`> mgz, will do
[11:26] <gnuoy`> I've got a vague memory of sidnei reporting a bug awhile ago on this but I couldn't find it
[11:27] <davecheney> mgz: they aren't order dependent are they ?
[11:27] <davecheney> constraints, that is
[11:27] <jamespage> hazmat, this is the other bug I've seen with 1.17.x
[11:27] <jamespage> https://bugs.launchpad.net/juju-deployer/+bug/1288685
[11:28] <gnuoy`> mgz, I have even less memory this time
[11:29] <gnuoy`> mgz, http://pastebin.ubuntu.com/7043620/
[11:31] <davecheney> gnuoy`: ram1 == 1gb ?
[11:31] <davecheney> ram2 == 2gb
[11:31] <davecheney> etc ?
[11:31] <mgz> davecheney: nope, if any are not satisfied it should refuse to start a machine
[11:31] <davecheney> mgz: /me strokes beard
[11:31] <davecheney> mine, not yours
[11:31] <gnuoy`> davecheney, I'll grab you our flavor list for completeness
[11:32] <gnuoy`> (we only have 300)
[11:32] <gnuoy`> http://pastebin.ubuntu.com/7043635/
[11:34] <davecheney> gnuoy`: right, so it should create a ram5 machine
[11:35] <gnuoy`> that's certainly what I was hoping for.
[11:35]  * davecheney returns to stroking beard
[11:36] <gnuoy`> does this flavor decision making process get logged anywhere ?
[11:39] <davecheney> not currently, not in a ssh -vvv way
[11:52] <mgz> gnuoy`: think I have your bug
[11:53] <mgz> gnuoy`: and a workaround
[11:53] <gnuoy`> mgz, like in a "here's a workaround" sense?
[11:53] <gnuoy`> \o/
[11:53] <mgz> try --constraints="cpu-cores=2 mem=5119M root-disk=51199M"
[11:53]  * gnuoy` does so
[11:55] <gnuoy`> mgz, looks like it's still giving me 2GB of RAM
[11:55] <gnuoy`> cpu2-ram2-disk50-ephemeral20
[11:55] <mgz> okay. I may need to give you a binary with some debugging stuff in it to run, I can base it on 1.16
[11:56] <mgz> the issue is, add-machine ends up with the work done on the state server
[11:56] <mgz> and I guess you don't want a funky version there
[11:56] <mgz> can you repro with `juju bootstrap` in a fresh environment name on the same deployment?
[11:57] <mgz> as in pass a reasonable mem constraint there and it bootstraps a random machine with less mem.
[11:57] <mgz> as that would be less intrusive for you to test.
[11:57] <gnuoy`> mgz, I'll find an env to work in which is less live
[11:58] <gnuoy`> thanks
[11:58] <mgz> (this is also likely resolved in some manner on trunk - as constraints on trunk will actually refuse to pick a machine when they don't match)
[11:59] <mgz> (so, the bug likely has two parts, one being the constraint not being correctly matched to a flavour for some reason, the other being we boot a machine anyway)
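The matching behaviour mgz describes can be sketched as a toy shell model (made-up flavour data mirroring the paste above; juju-core's real logic is Go running on the state server): keep only flavours that satisfy every constraint, then take the smallest match — and if nothing qualifies, refuse.

```shell
# Toy constraint->flavour matcher (illustrative only, not juju-core's code).
# Columns: name cpus ram_mb root_disk_gb
cat > /tmp/flavors.txt <<'EOF'
cpu2-ram2-disk50-ephemeral20 2 2048 50
cpu2-ram5-disk50-ephemeral20 2 5120 50
EOF
cpu=2 mem=5120 disk=50
# Keep flavours meeting all constraints, then pick the one with least RAM.
picked=$(awk -v cpu="$cpu" -v mem="$mem" -v disk="$disk" \
    '$2 >= cpu && $3 >= mem && $4 >= disk' /tmp/flavors.txt \
  | sort -n -k3 | head -n1 | awk '{print $1}')
echo "$picked"   # cpu2-ram5-disk50-ephemeral20
```

An empty `$picked` here would correspond to the "refuse to start a machine" case; the bug under discussion is that the real code apparently matched ram2 and booted it anyway.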
[11:59] <gnuoy`> I think that is sensible behaviour fwiw
[11:59] <gnuoy`> ( the refusing to deploy bit I mean)
[12:11] <mgz> gnuoy`: rethunk
[12:12]  * gnuoy` is on tenterwhatever hooks
[12:13] <mgz> on that deployment, do `nova --debug flavor-show 1004`
[12:13] <mgz> I may need --debug on the flavor-list too, but hopefully not
[12:14] <gnuoy`> mgz, you've got it I think
[12:15] <gnuoy`> lamont raised a bug about the flavors being inconsistent, let me find it
[12:16] <mgz> I think the name of the field we need to look at has changed from "ram"
[12:17] <gnuoy`> mgz, https://bugs.launchpad.net/nova/+bug/1183259
[12:18] <gnuoy`> https://pastebin.canonical.com/106001/
[12:19] <mgz> yeah, ram 1 would not be useful
[12:19] <gnuoy`> mgz, https://pastebin.canonical.com/106002/
[12:22] <mgz> odd that show flat out misses the memory_mb field or wherever the list is getting that value
[12:23] <mgz> I'm assuming `nova --debug flavor-list` and picking out the 1004 bit of the real json output would have the 5120 value in some field we don't know to look at
[13:22] <rick_h_> jcastro: http://comingsoon.jujucharms.com/sidebar/search/?text=bundle 5 minutes later, but loads your promulgated bundle correctly.
[13:23] <rick_h_> jcastro: so you guys can favorite those in manage now to populate the default featured section of the sidebar
[13:35] <gnuoy`> mgz, these are the 1004 references I see in nova --debug flavor-list https://pastebin.canonical.com/106020/
[13:41] <mgz> gnuoy`: hm, so there we have "ram": 5120
[13:41] <gnuoy`> err, so we do
[13:43] <mbruzek> Good morning #juju.  I have a question about juju hooks and logging.
[13:43] <gnuoy`> so flavor list and flavor show <id> don't agree
[13:43] <mbruzek> Is there a way to disable the logging of the relation-set hook logging?
[13:45] <mbruzek> A charm contributor asked this question.  He is relation-set(ting) ssh keys and does not want them in the log file.  Is there a way to turn off logging for relation-set in the hooks?
[13:46] <mgz> gnuoy`: something very odd seems to be going on
[13:46] <gnuoy`> mgz, the flavor list v flavor show is consistent with lamonts bug I think
[13:47] <vila> hi there
[13:47] <vila> deployment borked with a unit in error state and 'juju resolved <unit>' tells me: ERROR cannot set resolved mode for unit "ppa-postgres/0": already resolved
[13:47] <vila> what can I try before 'nova delete' (and is that a good idea anyway) ?
[13:48] <mgz> it's not a good idea, you can do `juju terminate-machine --force MACHINE_OF_UNIT` though
[13:48] <vila> mgz: bingo, thanks, didn't know about that one !
[13:49] <mgz> before that, you can look at the unit logs of that
[13:49] <mgz> `juju ssh ppa-postgres/0` and look in /var/log/juju/
[13:49] <mgz> the unit log should show the resolve, and whatver caused the errors
[13:49] <vila> mgz: I did, that was a fallout from a fallout from <you don't want to know>
[13:50] <vila> mgz: in a nutshell, fallout from another error on a different unit both errors being unrelated to my current hunt
[13:51] <marcoceppi> mbruzek: change the verbosity of juju? What charm is it they user is working with?
[13:52] <mbruzek> The HPCC charm author asked me this question.
[13:52] <lamont> mgz: the bug is that nova show et al totally ignore the 'deleted' field in the database table
[13:52] <mbruzek> Can you set --log-level to OFF?
[13:53] <mbruzek> marcoceppi, I believe he just wants to not log the ssh key.  How would a hook set the verbosity level of juju?
[13:53] <marcoceppi> mbruzek: the hook doesn't, the user does for the deployment
[13:53] <rick_h_> mbruzek: is there a better way to exchange the key that's not logged?
[13:53] <gnuoy`> lamont, if nova show is fibbing does that mean the resulting machine might actually be the spec I requested ?
[13:54] <rick_h_> mbruzek: can the hook trigger a function/call to the other service to fetch it for instance?
[13:54]  * marcoceppi bootstraps an env
[13:54] <mbruzek> He is passing the key on relation-set which seems legitimate to me?
[13:54] <lamont> gnuoy`: nova flavor-list| grep 1101 (or whatever) is truth, the other is lies
[13:54] <lamont> flavor-list honors deleted, so does boot
[13:54]  * lamont reconfirms that
[13:55] <mbruzek> Good ideas rick_h_ I can send him some alternatives like that in an email.
[13:55] <gnuoy`> lamont, yes, but if nova show says cpu2-ram2-disk50-ephemeral20 can I be confident the resulting machine really does have 2GB?
[13:55] <rick_h_> mbruzek: yea, I mean I think it's simpler for him to work around than to do things like break logging
[13:56] <mbruzek> Well he was just asking if we could turn off logging for a hook or a few commands.
[13:56] <mbruzek> But if this key is used for authentication to the other system I don't know how they could copy it securely without the key, chicken and egg.
[13:56] <mgz> lamont: I'm still pretty confused by the behaviour here
[13:57] <marcoceppi> mbruzek: more than one way to get a file across the system
[13:57] <mgz> juju-core is looking at the "mem" field in the json output from flavors/detail
[13:58] <mbruzek> Is there an established pattern for charms to exchange keys that I can read/look at?
[13:58] <mgz> but somehow not matching reasonable looking constraints gnuoy is supplying to a correct flavor
[13:58] <mbruzek> Does anyone know a charm that does this well?
[13:59] <marcoceppi> mbruzek: everyone uses relation-set/get because that's the purpose of the command
[14:00] <hazmat> but rel get/set could take values from a file to resolve.
[14:00] <marcoceppi> hazmat: true, or a URL where the file is being served
[14:01] <hazmat> mbruzek, there are other charms that do the same fwiw re ssh key pass
[14:01] <mbruzek> hazmat, so he has to expose a service to transfer keys?
[14:02] <hazmat> mbruzek, huh.. no
[14:03] <lamont> gnuoy`: what constraints are you giving it?
[14:03] <marcoceppi> mbruzek: if he's worried about it, he shouldn't be shipping private keys; just have a private key per unit and only ship the public key via the relation
[14:05] <mbruzek> I would have to look again, but I believe the use case is: if the user provides ssh keys, it uses those and passes them to the other units in the cluster. If no keys exist, the master generates them and passes the keys to the other units in the cluster
[14:05] <gnuoy`> lamont, mgz, ok, Juju is the one true light in a sea of lies. It is doing what I ask despite nova's best attempts to derail it http://pastebin.ubuntu.com/7044246/
[14:05] <gnuoy`> mgz, thanks for looking at it with me.
[14:06] <marcoceppi> mbruzek: if the user is supplying the keys, it should be via configuration if they want the keys to be on all the units
[14:06] <marcoceppi> mbruzek: or embedded in the charm itself, in which case no need to distribute
[14:06] <marcoceppi> mbruzek: I think there are more ways to solve this problem than the route they've chosen, and are unhappy with
[14:06] <mbruzek> All good suggestions, I will write him back.
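marcoceppi's suggestion (a keypair per unit, with only the public half crossing the relation) could look something like this inside a hook; the paths are made up for illustration, and the relation-set call is left as a comment since it only exists in a real hook environment:

```shell
# Sketch: generate a per-unit keypair and expose only the public key.
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -b 2048 -N '' -f "$keydir/id_rsa"
# Inside a hook, only the public half would be shared over the relation:
# relation-set ssh-public-key="$(cat "$keydir/id_rsa.pub")"
awk '{print $1}' "$keydir/id_rsa.pub"   # ssh-rsa
```

The private key never leaves the unit, so nothing sensitive lands in the relation data or the unit log.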
[14:06] <lamont> | flavor                 | cpu2-ram2-disk40-ephemeral20 (1113)                                               |
[14:07] <lamont> the name is a lie, the id is truth
[14:07] <gnuoy`> lamont, what I didn't realise was that the lies weren't just in nova flavor-show, but nova show <instance> as well
[14:08] <lamont> gnuoy`: the lies are _EVERYWHERE_ except flavor-list and boot
[14:08] <lamont> ISTR it's a common routine that everyone calls, and boot and flavor-list are different functions
[14:08] <lamont> or something like that
[14:08] <mgz> okay, so it looks like we've lucked out
[14:08] <mbruzek> Thanks marcoceppi and rick_h_
[14:08] <mgz> flavor-list does not lie, juju only looks at flavor-list
[14:09] <gnuoy`> mgz, but I'm a happy customer
[14:09] <gnuoy`> juju is doing what I want
[14:09] <lamont> gnuoy`: I keep threatening to go do the sql surgery to force the names on the deleted flavors to match the undeleted names, just so that we quit getting wrapped around a pole
[14:09] <mgz> so, the name and show of what you get for the machine looks wrong... but you have what you asked for
[14:09] <gnuoy`> absolutely, fingers in my ears lalalala
[14:10] <gnuoy`> mgz, fwiw juju status reports the correct machine spec
[14:32] <dpb1> Hi all -- Is this a known issue? http://paste.ubuntu.com/7043455/
[14:57] <marcoceppi> dpb1: I think it is, let me check the tracker
[14:58] <dpb1> marcoceppi: just found it: https://bugs.launchpad.net/juju-core/+bug/1285901
[14:58] <_mup_> Bug #1285901: error starting unit upgrader on local provider <local-provider> <lxc> <regression> <juju-core:Fix Committed by wallyworld> <https://launchpad.net/bugs/1285901>
[15:38] <pk> is there anyone in here that is familiar with the rabbitmq charm source?
[15:41] <jamespage> pk, yes
[15:47] <pk> jamespage, I'm getting a permission denied error on the broken hook because there are 2 users, 'nagios' and 'naigos'. I searched the code, and both are present all over the place. Why?
[15:51] <jamespage> hmm
[15:52] <jamespage> pk, I noticed that - we're just going through a tidy of that codebase and some improvements to active-active HA
[15:59] <jamespage> pk, do you have the log from the permission denied error?
[16:03] <pk> jamespage, I just reset my local environment for the name change. I'll stash and try to reproduce in a minute
[16:04] <ghartmann> I am planning to buy a new server to have it set up as MAAS/juju .. I wonder what would be critical to keep in mind when choosing the parts
[16:04] <ghartmann> sorry if here is not the proper channel to discuss this
[16:11] <pk> jamespage, after replacing naigos with nagios, I still get permission denied errors on the relation broken. It is possible that this is specific to my codebase
[16:11] <pk> Here's the rabbitmq unit log http://pastebin.com/UM0QarW5
[16:11] <pk> I spun it up, added a relation to my own flask charm, and then destroyed that relation
[16:12] <pk> the charms I'm using are here https://github.com/peterklipfel/firesuit/tree/master/charms
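The bulk rename pk describes can be done in one pass with grep and sed; demonstrated here on a throwaway tree with made-up contents, since the real charm paths aren't shown:

```shell
# Throwaway demo of the naigos -> nagios rename (file contents invented).
mkdir -p /tmp/charm-demo/hooks
printf 'adduser naigos\nchown naigos:naigos /var/lib/nagios\n' \
  > /tmp/charm-demo/hooks/nrpe-relation-broken
# Find every file containing the typo and fix it in place.
grep -rl naigos /tmp/charm-demo | xargs sed -i 's/naigos/nagios/g'
cat /tmp/charm-demo/hooks/nrpe-relation-broken
```

In a real charm tree, run the grep first on its own to review what will be touched before piping it into sed.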
[16:23] <marcoceppi> ghartmann: if you're just getting one, that's probably not going to be enough for MAAS unless you're doing virtual maas
[16:30] <noodles775> Hi! Does anyone know how a dying-but-error-state service can be killed with juju 1.16.5, which doesn't support the --force option with destroy-service?
[16:30] <noodles775> Related to: https://bugs.launchpad.net/juju-core/+bug/1168154
[16:30] <_mup_> Bug #1168154: Destroying a service in error state fails silently <destroy-service> <juju-core:Triaged> <https://launchpad.net/bugs/1168154>
[16:31] <noodles775> We tried resolving the unit, but apparently it's still in an error state.
[16:31] <marcoceppi> noodles775: keep running resolved, hope for the best?
[16:51] <ghartmann> marcoceppi: thanks, I will try reading about virtual maas see if clarifies it a bit for me
[16:58] <pk> marcoceppi, I'm having issues with my hooks. I can stand up the charms without errors, but when I try to relate them, my hooks are not running. However, juju status recognizes them as related. If I destroy the relation, there is a permission denied error. Do you have any thoughts or suggestions?
[16:59] <marcoceppi> pk: what does your charm look like and what version of juju are you using?
[16:59] <pk> marcoceppi, 1.16.6-precise-amd64 in local mode
[17:00] <marcoceppi> pk: what does your charm look like? is it published anywhere? I'd like to try on 1.16.6/1.17.4 to replicate the issue
[17:01] <pk> and I'm trying to relate rabbitmq and a flask charm that I wrote. The rabbitmq relation doesn't seem to be running either
[17:01] <FunnyLookinHat> jcastro, ping ?
[17:02] <pk> marcoceppi, my charms are here
[17:02] <pk> https://github.com/peterklipfel/firesuit/tree/master/charms
[17:02] <marcoceppi> pk: cool, all are having issues?
[17:02] <pk> I'm working on rabbitmq + flask right now. I haven't tried storm+rabbitmq yet
[17:03] <marcoceppi> pk: let me spin up rabbitmq + flask and see if I can replicate this
[17:07] <lazyPower_> Hey, I just wrote a blog post on developing Juju Charms on OSX with Vagrant. Incoming screencast as soon as i'm done editing and recording the voiceover: http://blog.dasroot.net/writing-juju-charms-on-osx/
[17:07] <lazyPower_> jcastro: ^
[17:11] <pk> lazyPower, your header is taking up most of the page. It does not change when scrolling http://i.imgur.com/xESQMXP.jpg
[17:12] <pk> I'm using chrome on kubuntu 12.04
[17:13] <lazyPower_> well that's obnoxious :|
[17:14] <lazyPower_> if you reload does it still exhibit the same behavior?
[17:14] <lazyPower_> it's got a parallax scroll on the header.
[17:15] <pk> lazyPower, the parallax is present, but the image is still sitting on top of everything except the navbar and the orange circle
[17:15] <marcoceppi> lazyPower_: does precise-server-cloud-im-juju have the updated kernel?
[17:16] <bloodearnest> lazyPower_: that header looks like ansible output to me :)
[17:20] <lazyPower_> bloodearnest: ansible coming from a juju charm
[17:20] <lazyPower_> marcoceppi: it does
[17:21] <lazyPower_> pk: ok, i'll take a look. can i ping you later when i've updated the theme and think i have a fix?
[17:21] <bloodearnest> lazyPower_: preach it bro. We use ansible for all our charms
[17:23] <pk> lazyPower_, sure. I have a meeting at 11, but I'll be back around noon
[17:23] <lazyPower_> pk: it won't be that soon
[17:23] <lazyPower_> my work queue takes priority
[17:26] <peterklipfel> lazyPower_, absolutely
[17:30] <lazyPower_> peterklipfel: chromium or google chrome?
[17:32] <peterklipfel> lazyPower_, google chrome
[17:36] <lazyPower_> ta
[17:39] <mbruzek> Is there a way to create an ssh tunnel to a service deployed on juju?
[17:39] <lazyPower_> mbruzek: like poor man vpn style?
[17:39] <mbruzek> ssh -L 10001:localhost:10001 -L 10002:localhost:10002 <public unit address>
[17:40] <mbruzek> I tried juju ssh -L ... and I got an unrecognized -L option
[17:40] <lazyPower> i would think you'd want to pull the ip from a juju status listing
[17:41] <lazyPower> instead of trying to retrofit that into a juju ssh command
[17:41] <mbruzek> I used the public IP of the unit.
[17:41] <lazyPower> is it exposed?
[17:41] <mbruzek> But normal ssh does not know where the key is to connect to juju units.
[17:41] <mbruzek> It is .
[17:41] <lazyPower> oh wait, right. ssh is always open
[17:41] <lazyPower> duh
[17:41] <mbruzek> No that is a fine question
[17:42] <lazyPower> well. i know that juju keeps the keys in ~/.juju/ssh
[17:42] <mbruzek> But when I ran ssh -L 10001:localhost:10001 -L 10002:localhost:10002 10.0.3.119
[17:42] <lazyPower> so maybe add -i ~/.juju/ssh
[17:42] <mbruzek> I got: Permission denied (publickey)
[17:42] <maxcan> good morning
[17:42] <lazyPower> ~/.juju/ssh/juju_id_rsa  -- rather.
[17:43] <lazyPower> maxcan: buenos dias
[17:43] <maxcan> that's one issue we have lazyPower .. juju always opens 22 to the world even if it's unnecessary
[17:43] <mbruzek> ssh -L 10001:localhost:10001 -L 10002:localhost:10002 ubuntu@10.0.3.119  worked for me
[17:43] <mbruzek> Thanks lazyPower
[17:44] <lazyPower> sure! happy to try to help :)
[17:45] <maxcan> is rackspace/openstack still unsupported?
[17:45] <lazyPower> you can use the manual provider...
[17:46] <maxcan> hm.. thanks
[18:15] <peterklipfel> marcoceppi, have you had a chance to run the flask and rabbitmq charms?
[21:56] <danob> hi all
[21:59] <themonk> i am trying to deploy in amazon but getting this error: ERROR TLS handshake failed: x509: certificate signed by unknown authority
[22:00] <themonk> marcoceppi: hi
[22:03] <marcoceppi> hi themonk, that's an odd error. Is this the same computer that your originally bootstrapped the environment?
[22:03] <marcoceppi> what version are you running again? 1.17.4?
[22:06] <themonk> marcoceppi: i am in rackspace vm, first bootstrapped for local, now switched to amazon, then trying to deploy mycharm to amazon
[22:06] <marcoceppi> themonk:  and you bootstrapped from that rackspace vm?
[22:06] <themonk> marcoceppi: yes
[22:07] <marcoceppi> themonk: so, what it sounds like is the cert that was created for the bootstrapping process isn't being loaded properly. While I've never seen this error before, I suspect that's what's going on. Could you run deploy again with --show-log --debug and pastebin the output?
[22:08] <themonk> marcoceppi: ok :)
[22:08] <themonk> marcoceppi: my juju is 1.16
[22:08] <marcoceppi> themonk: ah, I've been using 1.17 for so long I'm not sure if this is a known bug or not. What version exactly are you on (`juju version`)
[22:09] <themonk> marcoceppi: 1.16.6-saucy-amd64
[22:09] <marcoceppi> themonk: cool, that's the latest
[22:09] <marcoceppi> well, not cool, sorry you're running in to this
[22:10] <themonk> marcoceppi: i want to ugrade in 1.17.4
[22:10] <themonk> marcoceppi: will i stay with this version?
[22:11] <themonk> marcoceppi: i mean 1.16.6-saucy-amd64?
[22:11] <marcoceppi> themonk: well, let's get the debug output first. If you're putting things in production right now I recommend staying on 1.16 as we don't really guarantee upgrade paths for devel releases
[22:12] <themonk> marcoceppi: ok :)
[22:12] <marcoceppi> if you're just playing around and can wait a few weeks, when 1.18 is released, then I'd recommend using 1.17 since it's got a lot more features
[22:17] <themonk> marcoceppi: no i am creating a production charm
[22:18] <marcoceppi> themonk: cool, so 1.EVEN releases are our stable releases, and 1.ODD are considered development. So 1.16.6 is current stable, 1.18.0 is next stable release due out soonish
[22:18] <themonk> marcoceppi: my charm is created but now i need amazon deployment
[22:18] <marcoceppi> themonk: so, run the deploy command with --show-log and --debug flags and pastebin the output, see if I can't help you get this unblocked
[22:19] <themonk> marcoceppi: cool :) i am looking forward to 1.18 stable
[22:19] <marcoceppi> themonk: me too, the core team has been fixing a lot of bugs and creating a ton of cool features
[22:21] <themonk> marcoceppi: running this $juju deploy --show-log --debug --repository=localcharms local:precise/mycharm
[22:32] <themonk> marcoceppi: http://pastebin.com/x1n6wvtb
[22:33] <marcoceppi> thumper, since I saw you rumbling around earlier, any idea about this TLS handshake failed: x509: certificate signed by unknown authority ?
[22:33] <marcoceppi> themonk: the only advice I have at this point is to destroy the environment and bootstrap again, see if it persists
[22:33] <themonk> marcoceppi: there is a x.509 section in amazon security credential page
[22:33] <sarnold> that dumped the admin secret?
[22:34] <sarnold> what access does the admin secret get you?
[22:34] <marcoceppi> sarnold: ugh, it gives you access to the juju bootstrap API
[22:34] <sarnold> that thing also dumps the RSA private key?
[22:34] <sarnold> .. and "secret-key", I'm also curious about that..
[22:35] <marcoceppi> sarnold: yeah, each bootstrap generates a new ca cert
[22:35] <marcoceppi> oh shit
[22:35] <marcoceppi> themonk: I'm going to PM you in a second
[22:35] <sarnold> marcoceppi: will you file bug report or would you like me to file it?
[22:36] <themonk> marcoceppi: ok
[22:36] <marcoceppi> sarnold: well I want to go poke someone with a sharp stick first, then file a bug
[22:37]  * sarnold hands marcoceppi his stick sharpener
[22:37] <sarnold> marcoceppi: thanks :)
[22:38] <thumper> marcoceppi: no, sorry
[22:39] <marcoceppi> thumper: also, is --debug or --show-log outputting the jenv file to the console?
[22:39] <thumper> --debug by the look of it
[22:39] <marcoceppi> thumper: is --debug even relevant or should I have people just run --show-log ?
[22:40] <thumper> by default, --show-log just shows the log, not change the logging setting
[22:40] <thumper> --debug == "--show-log --logging-config=juju=DEBUG"
[22:41] <marcoceppi> thumper: so, DEBUG probably shouldn't dump sensitive information to the terminal, or else that's going to make helping people troubleshoot a lot harder
[22:41]  * thumper nods
[22:41] <marcoceppi> I mean, I understand dumping the jenv file
[22:41] <marcoceppi> to verify settings, but either a scrubbing or something else
[22:41]  * marcoceppi files
[22:42] <thumper> there is no version shown there
[22:42] <marcoceppi> oh, that would probably be a lot more useful
[22:42] <marcoceppi> thumper: this is 1.16.6 though
[22:43] <thumper> ok, still not sure about the tls handshake error though
[22:43] <marcoceppi> thumper: ack, I've told themonk to change his credentials then to just try to teardown/rebootstrap
[22:43]  * thumper nods
[22:47] <themonk> marcoceppi: i did not know that my amazon secret was printed, i have deleted those keys. yes juju should not dump those :)
[22:56] <themonk> marcoceppi: trying again
[22:59] <themonk> marcoceppi: ERROR The AWS Access Key Id you provided does not exist in our records. i inserted new keys
[23:06] <themonk> marcoceppi: if the old env file is present here, .juju/environments/amazon.jenv, then it gives this ERROR The AWS Access Key Id you provided does not exist in our records. after removing that file it bootstrapped