[00:05] <stokachu> marcoceppi: re: bundles, ok thanks
[00:39] <sinzui> hi adam_g
[00:43] <adam_g> sinzui, are 1.16.2 tools expected to be getting pulled when bootstrapping 1.16.2?
[00:44] <sinzui> adam_g, they will be...do you have 1.16.2 client?
[00:44] <adam_g> sinzui, yes
[00:44] <adam_g> sinzui, for now i am --upload-tools
[00:45] <sinzui> adam_g, I am constructing the tools now. you got the client during that awkward window while the tools are being built
[00:45] <adam_g> sinzui, ok, np. just curious
[00:45] <sinzui> adam_g, I think I will have the tools in place in 1h and that is not the cider talking
[00:45] <marcoceppi> sinzui: you have to build tools post ppa build?
[00:46] <marcoceppi> Oh, is this why you guys want to use a staging ppa?
[00:46] <sinzui> marcoceppi, yes, our builders make all the series packages, I pull every one down and extract the jujud then publish them to every cpc
[00:46] <sinzui> yep
[00:46] <marcoceppi> sinzui: gotchya
[00:46] <marcoceppi> that is a bit awkward
[00:47] <sinzui> but I don't control trusty which has been in the wild for a few hours
[00:47] <sinzui> yes indeed
[00:57] <marcoceppi> thumper-afk: is there documentation on changing the logging level?
[00:58] <sarnold> marcoceppi: how about this? https://lists.ubuntu.com/archives/juju/2013-September/002998.html
[00:59] <marcoceppi> ah, there it is, we need to document that in the docs. I couldn't find it in any of the juju help topics
[01:31] <thumper> marcoceppi: 'juju set-env logging-config="<root>=INFO;juju=DEBUG;juju.provisioner=TRACE"'
[01:31] <thumper> marcoceppi: but really it is for devs,
[01:32] <thumper> I have a plan for more user related stuff
[01:32] <marcoceppi> thumper: oh, okay
[01:32]  * marcoceppi nods
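To make thumper's example concrete: the logging-config value is a semicolon-separated list of `<module>=<level>` pairs. A juju-free way to see how a given string breaks down (the string itself is copied from thumper's example above):

```shell
# Split a logging-config string into its <module>=<level> pairs.
# Pure text manipulation -- no juju environment needed.
cfg='<root>=INFO;juju=DEBUG;juju.provisioner=TRACE'
echo "$cfg" | tr ';' '\n'
```

As the example suggests, `<root>` acts as the default level and more specific module names such as `juju.provisioner` override it for their subtree.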
[02:11] <marcoceppi> ugh, I'm stuck in lxc-console and Ctrl+a q is not working
[02:11] <marcoceppi> how do I close console?
[02:12] <marcoceppi> thumper, maybe you know?
[02:13] <thumper> um...
[02:13]  * thumper looks
[02:13] <thumper> marcoceppi: nope
[02:13] <thumper> kill the terminal
[02:13] <thumper> and then run
[02:14] <thumper> that's what I'd do
[02:14] <marcoceppi> arg, I don't want to lose this window. Curse you -d flag! Why are you not a default
[04:11] <ekacnet> marcoceppi: yes
[04:11] <ekacnet> (with some delay in the answer)
[05:39] <marcoceppi> ekacnet: yes to what?
[05:40] <marcoceppi> ekacnet: Oh, saucy manual provider
[05:40] <marcoceppi> ekacnet: you're hitting this bug: https://bugs.launchpad.net/juju-core/+bug/1246336
[05:40] <_mup_> Bug #1246336: Manual provider fails when trying to bootstrap a Saucy machine <landscape> <juju-core:Confirmed> <https://launchpad.net/bugs/1246336>
[06:23] <ekacnet> marcoceppi: thanks
[07:16] <MiteshShah> ERROR cannot start bootstrap instance: cannot run instances: gomaasapi: got error back from server: 409 CONFLICT
[07:16] <MiteshShah> does Juju need a MAAS-installed server, or can it use a normal Ubuntu server?
[07:17] <MiteshShah> can anyone help me?
[13:47] <sodre> I have a question on how Juju sets the hostname in OpenStack
[13:47] <marcoceppi> sodre: I'm pretty sure it doesn't, I think that's what openstack does.
[13:47] <sodre> in particular, after booting up I end with a /etc/hosts that does not include the machine name at 127.0.1.1
[13:47]  * marcoceppi checks
[13:51] <sodre> marcoceppi: thanks for checking. As a work-around, do you know how I can force a /etc/hosts file to be written during cloud-init ? In particular for the instances brought up by juju ?
[13:53] <marcoceppi> sodre: so yeah, it looks like the provider sets the hostname for openstack. I'm still trying to figure it out for sure. rogpeppe could you confirm?
[13:54] <context> marcoceppi, thumper: 'might' have found my issue
[13:54] <sodre> marcoceppi: provider == openstack, right ?
[13:54] <marcoceppi> sodre: yes, the provider in this story is openstack
[13:54] <marcoceppi> sodre: so on HP cloud, it works properly
[13:54] <marcoceppi> context: oh, what was it?
[13:55] <context> 2013-11-01 13:55:18 INFO juju.state open.go:68 opening state; mongo addresses: ["c1210-1595.cloudatcost.com.:37017"];
[13:55] <sodre> marcoceppi: okay. I'm running with Havana & neutron. Do you have any deployments along those lines ?
[13:55] <context> well, my genius host, has ip reverse to that hostname
[13:55] <context> but that hostname resolves to a totally different ip
[13:56] <marcoceppi> context: oh, that's a good idea
[13:56] <context> updated reverse now waiting
[13:56] <marcoceppi> sodre: so, digging a little deeper, the hostname isn't placed in /etc/hosts but is set in openstack's dns server (or whatever)
[13:57] <context> 68.7.219.162.in-addr.arpa domain name pointer c1210-1595.cloudatcost.com.
[13:57] <context> c1210-1595.cloudatcost.com.countrystone.com has address 198.40.237.19
[13:57] <context> YEY !
[13:57] <marcoceppi> sodre: http://paste.ubuntu.com/6341412/
[13:58] <context> where does juju/jujud store config stuff
[13:59] <sodre> marcoceppi: is novalocal at HPCloud or Havana+Neutron ?
[13:59] <marcoceppi> sodre: that's HP Cloud
[13:59] <marcoceppi> I don't have a havana neutron setup
[14:00] <sodre> okay, I think they might havana must have bug.
[14:00] <sodre> blarg... : I think havana might have a bug in that area.
[14:00] <sodre> I'll dig more in that direction.
[14:00] <marcoceppi> sodre: try just spinning up a machine outside of juju
[14:00] <marcoceppi> sodre: see if the hostname it receives is set up properly
[14:01] <sodre> yeah, it is the same issue. A plain cirros image also gets messed up.
[14:04] <sodre> marcoceppi: I take that back. the cirrOS image is always called cirros and it has a 127.0.1.1. Let me try a saucy image.
[14:05] <context> ok maybe that was not the problem
[14:09] <rogpeppe> marcoceppi: i'm afraid i don't know about openstack hostname magic
[14:10] <marcoceppi> rogpeppe: any idea who i could bug
[14:11] <rogpeppe> marcoceppi: mgz or jam might be good there
[14:11] <marcoceppi> I'm secretly creating a spreadsheet of who to ping in core when something goes wrong
[14:11] <sodre> marcoceppi: it does not set up the hostname properly. Unfortunately dnsmasq only sets up hostnames in the form host-<ip address>.openstacklocal
[14:12] <sinzui> jcastro, marcoceppi do you have permission to upload the new juju installer to ubuntu.com?
[14:12] <mgz> we shouldn't be using hostnames at all with openstack
[14:12] <sodre> ohoh...
[14:12] <mgz> as it's historically been all suck
[14:13] <jcastro> sinzui, no, that needs to be done through the web team
[14:13] <jcastro> #webops I think?
[14:13] <sodre> brb
[14:13] <sinzui> jcastro, no, they don't have access.
[14:13] <sinzui> I will contact the members of ~canonical-website-editors
[14:14] <marcoceppi> sinzui: why not just upload it to launchpad?
[14:15] <sinzui> I can upload it to 1.16.2, but don't we need it on juju.ubuntu.com
[14:15]  * sinzui added it to the release because that is just the right thing to do
[14:15] <marcoceppi> sinzui: Well, we have to edit the website with the new URL
[14:16] <marcoceppi> might as well just have it point to the LP direct download link
[14:16] <marcoceppi> sinzui: we can update the docs, and then we'll have to ping the design team to update the website
[14:17] <sinzui> rock.
[14:17]  * marcoceppi updates the docs
[14:19] <marcoceppi> sinzui: where is the msi? I can't seem to find it on https://launchpad.net/juju-core/1.16/1.16.2
[14:19] <jcastro> oh
[14:19] <marcoceppi> ugh, there it is
[14:19] <jcastro> to edit the website?
[14:19] <marcoceppi> jcastro: yeah
[14:19] <jcastro> sinzui, you need to send a mail to peter mahnke
[14:19] <sinzui> marcoceppi, I just added it to https://launchpad.net/juju-core/1.16/1.16.2
[14:19] <jcastro> his team is the only one that can edit the website as of now
[14:19] <jcastro> they're working on fixing that
[14:37] <sinzui> marcoceppi, thank you for fixing the version file errors in proof
[14:38] <marcoceppi> sinzui: it was long overdue
[14:38] <marcoceppi> and it was a two line fix :\
[14:47] <context> Error: 'cloud-archive:tools' invalid
[14:47] <context> did i spell it wrong ?
[14:50] <sanman_> hello everyone
[14:50] <marcoceppi> sanman_: hello
[14:51] <marcoceppi> context: that's weird, it's what the wiki says: https://wiki.ubuntu.com/ServerTeam/CloudToolsArchive
[14:51] <sanman_> does anyone know if juju is able to deploy RPM-based distros like Red Hat or CentOS? I saw some work on the old python version that made this possible, but it seems that with the new go version it might not be possible anymore
[14:51] <marcoceppi> sanman_: not at this time
[14:52] <sanman_> ok, is that something that is planned for the future?
[14:54] <context> hmm. i did an apt-get upgrade then tried adding the repo and it worked
[14:54] <context> ignore me
[15:02] <marcoceppi> sanman_: it's on the roadmap
[15:04] <context> ERROR invalid series "trusty"
[15:04] <context> i get some weird ass errors
[15:04] <marcoceppi> context: could you explain how you got this error?
[15:05] <context> ERROR Get sftp://162.219.7.68//var/lib/juju/storage/tools/releases/juju-1.16.2-precise-amd64.tgz: unsupported protocol scheme "sftp"
[15:05] <context> marcoceppi: i re-imaged server, and did bootstrap again
[15:05] <marcoceppi> context: is it precise?
[15:05] <context> yeah
[15:39] <context> apparently IsValidSeries() just compares name to a regexp and i see no reason why it would not pass that regex
[15:43] <marcoceppi> context: I got some weird trusty errors yesterday during packaging, it might be that trusty series isn't actually open yet?
[15:44] <context> im guessing so? trusty IS in the seriesVersions map :-/ not sure why they'd put it in if it doesn't exist yet
[15:44] <context> which if anything is more weird cause bootstrap worked yesterday and now not this morning
[15:45] <marcoceppi> yeah, more oddities
[15:49] <context> although i guess yesterday i installed juju from apt-get and didn't let bootstrap install it
[16:03] <context> kk. looks like simplestreams is trying to use http to fetch sftp://
[16:04] <context> and fix already committed i guess. will have to wait for 1.17
[17:59] <jcastro> marcoceppi, have you tried quickstart yet?
[17:59] <marcoceppi> jcastro: I installed and poked at it, not actually used it yet
[18:00] <jcastro> hey, so proof is failing on my discourse bundle
[18:00] <jcastro> but doesn't tell me why
[18:01] <jcastro> bzr branch lp:~jorge/charms/bundles/discourse/bundle
[18:01] <jcastro> has the bundle
[18:01] <marcoceppi> jcastro: which proof, my proof or store proof?
[18:02]  * marcoceppi needs to install charm-tools again
[18:02] <marcoceppi> marco@home:/tmp$ juju bundle proof discourse
[18:02] <marcoceppi> I: discourse-single-unit: No series defined
[18:02] <marcoceppi> E: discourse-single-unit: Could not find charm: discourse
[18:03] <marcoceppi> jcastro: this is the version thing
[18:03] <jcastro> I didn't get that output at all
[18:03] <marcoceppi> rick_h_: is updating the "remote" proof to fix that
[18:03] <marcoceppi> jcastro: wat
[18:03] <jcastro> jorge@jilldactyl:~/src/bundles/discourse$ juju bundle proof bundles.yaml
[18:03] <jcastro> Traceback (most recent call last):
[18:03] <jcastro>   File "/usr/bin/charm-proof", line 9, in <module>
[18:03] <jcastro>     load_entry_point('charmtools==1.1.0', 'console_scripts', 'charm-proof')()
[18:03] <jcastro>   File "/usr/lib/python2.7/dist-packages/charmtools/proof.py", line 56, in main
[18:03] <jcastro>     print e.msg
[18:04] <jcastro> AttributeError: 'exceptions.Exception' object has no attribute 'msg'
[18:05] <marcoceppi> jcastro: dude
[18:05] <rick_h_> jcastro: pastebin the bundle, if it's something not covered I"ll add a note to add the missing thing as well
[18:05] <rick_h_> oh, isn't it juju proof --bundle
[18:05] <rick_h_> marcoceppi: ^
[18:05] <marcoceppi> rick_h_: juju bundle proof is juju charm --bundle proof
[18:05] <rick_h_> marcoceppi: ah k
[18:05] <marcoceppi> jcastro: you need to proof the bundle, not the deployer file
[18:05] <marcoceppi> jcastro: but that's another bug you're hitting
[18:06] <jcastro> marcoceppi, also, my charm tools was out of date
[18:06] <rick_h_> huh? I thought the point was to give it the deployer file
[18:06] <jcastro> also I got mixed up, charm tools wants a bundle
[18:06] <jcastro> quickstart wants a deployer file
[18:06] <marcoceppi> jcastro: you're basically proofing the metadata.yaml file and not the entire charm_dir
[18:06]  * rick_h_ is confused
[18:06] <rick_h_> marcoceppi: oh, you point it at the dir?
[18:06] <marcoceppi> rick_h_: yes. Otherwise how will it know to check for a readme?
[18:07] <rick_h_> marcoceppi: gotcha, duh
[18:07] <jcastro> yeah, but quickstart doesn't
[18:07] <jcastro> (I filed a bug)
[18:07] <marcoceppi> it expects "CHARM_DIR" in the help output
[18:07] <jcastro> and I am switching back and forth between the tools
[18:07] <jcastro> so I got mixed up
[18:07] <marcoceppi> jcastro: well, that's quickstart's problem ;) I can update it to allow proofing a yaml file
[18:07] <jcastro> yeah, I filed bugs
[18:07] <jcastro> see my post to the list
[18:07] <jcastro> has a link to the bugs I'm filing
[18:07] <marcoceppi> jcastro: ack
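To make the directory-vs-file confusion above concrete, here is a throwaway layout of the kind of charm directory proof expects to be pointed at (file names are the conventional ones; contents are placeholders, so this is a sketch, not a proof-clean charm):

```shell
# Build a minimal charm directory skeleton: proof reads metadata.yaml and
# also checks for siblings like the README and config.yaml, which is why
# it needs the directory rather than a single yaml file.
demo="$(mktemp -d)/demo-charm"
mkdir -p "$demo/hooks"
cat > "$demo/metadata.yaml" <<'EOF'
name: demo-charm
summary: placeholder summary
description: placeholder description
EOF
touch "$demo/README.md" "$demo/config.yaml" "$demo/hooks/install"
ls "$demo"
```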
[18:08] <jcastro> ugh
[18:08] <jcastro> now I broke something
[18:08] <rick_h_> lol
[18:08] <jcastro> now quickstart finishes and doesn't deploy the bundles
[18:08] <jcastro> just the gui and bootstrap
[18:09] <rick_h_> and it proof's cleanly?
[18:09] <marcoceppi> rick_h_: technically, no
[18:09] <jcastro> ugh
[18:09] <jcastro> I am getting errors I didn't get yesterday on proof
[18:09] <rick_h_> jcastro: possible, there was a deploy yesterday
[18:09] <Azendale1> Could anyone more experienced with juju stuff give me some input on this bug? I'm trying to decide if it is a bug or a PEBKAC problem, and if it is a bug, where you change the defaults for the charm (in which case I could submit a patch) https://bugs.launchpad.net/charms/+bug/1245095
[18:10] <_mup_> Bug #1245095: rabbitmq-server charm ha-bindiface default breaks rabbitmq-hacluster subordinate charm <Juju Charms Collection:New> <https://launchpad.net/bugs/1245095>
[18:10] <jcastro> http://pastebin.ubuntu.com/6342620/
[18:10] <rick_h_> jcastro: does it set config values? There's a bug there that I've got a branch in progress to fix
[18:10] <jcastro> looks like it?
[18:10] <rick_h_> jcastro: yea, that's my fault. Branch is in progress to fix
[18:10] <jcastro> ack
[18:10] <rick_h_> but deployer doesn't use proof atm so just ignore them and it's separate from it not deploying
[18:10] <marcoceppi> rick_h_: does this also drop the version requirement for parsing the charm?
[18:11] <jcastro> rick_h_, deployer isn't returning any errors
[18:11] <marcoceppi> at least for non-promulgated charms?
[18:11] <rick_h_> marcoceppi: no, that's on proof right?
[18:11] <jcastro> deploying the bundle wordpress-simple with the following services: wordpress, mysql
[18:11] <jcastro> done!
[18:11] <marcoceppi> rick_h_: ah, thought we were talking about proof
[18:11] <jcastro> but they don't show up in the gui view nor status
[18:11] <rick_h_> marcoceppi: yea, the type checking is a bug in the remote proof bits
[18:12] <rick_h_> marcoceppi: what I get for having every failing test case covered but no passing test case :/
[18:12] <jcastro> I: discourse-single-unit: No series defined
[18:12] <jcastro> what does the I: mean?
[18:12] <marcoceppi> Informational
[18:12] <jcastro> is this just a warning?
[18:12] <jcastro> ok
[18:13] <marcoceppi> I, W, E
[18:13] <jcastro> E: discourse-single-unit: Could not find charm: discourse is weirder
[18:13] <rick_h_> jcastro: yea, that's an error then.
[18:13] <marcoceppi> jcastro: known issue, since you didn't provide a -# in the charm name
[18:13] <jcastro> I am pulling from ~marcoceppi/blah
[18:13] <rick_h_> that'll cause it to fail
[18:13] <jcastro> aha!
[18:13] <jcastro> ok trying
[18:13] <rick_h_> jcastro: paste the yaml
[18:13] <jcastro> marco got it
[18:13] <rick_h_> cool
[18:13] <jcastro> missing the -#
[18:16] <jcastro> rick_h_, would your bug stop wordpress from deploying?
[18:17] <jcastro> or is it just spamming the console?
[18:17] <rick_h_> jcastro: no, it's just going to spam proof results
[18:17] <jcastro> ok
[18:21] <jcastro> ok
[18:21] <jcastro> I can confirm I can't deploy either wordpress or discourse
[18:21] <jcastro> no errors
[18:22] <jcastro> deploying the bundle discourse-simple with the following services: discourse, postgresql
[18:22] <jcastro> done!
[18:22] <jcastro> takes me to the gui
[18:22] <jcastro> everything looks right other than discourse and postgres not deploying
[18:22] <jcastro> same with wordpress, similar non-error
[18:41] <Azendale1> where are the default settings for a charm set?
[18:42] <sarnold> in the charm source; hopefully the charm's README describes the settings that you can change. If it doesn't describe the available settings, I'd suggest filing a bug against the charm.
[18:42] <marcoceppi> Azendale1: they're defined in the charm under config.yaml
[18:43] <Azendale1> sarnold: It does mention the setting in the docs, it's just what it is set to by default that I would like to make a patch for. It looks like the default breaks the High Availability subordinate charm
[18:44] <jcastro> marcoceppi, rick_h_: if either you have time to try either discourse or wordpress and see if it's just me that would be <3
[18:44] <marcoceppi> jcastro: otp
[18:45] <Azendale1> marcoceppi: Where it says something like "default: <value_here>"
[18:45] <marcoceppi> Azendale1: yes, config.yaml defines the configuration and the default values using the default key
[18:46] <Azendale1> marcoceppi: I tried that but it didn't seem to change it. I guess I have some digging to do to make sure it deployed from my changed version. Thanks for the help!
[18:46] <marcoceppi> Azendale1: are you doing a local deployment?
[18:47] <marcoceppi> Azendale1: https://juju.ubuntu.com/docs/charms-deploying.html#local
[18:48] <Azendale1> marcoceppi: Originally, it was from the charm store, but then I branched the BZR branch and made changes when I figured out what was breaking. I'm thinking I have some homework to do to make sure it is actually deploying from the local repository. I'll look at the link and see
[18:53] <jcastro> I bet it's caching
[18:53] <jcastro> this happened to me
[18:54] <Azendale1> jcastro: I assume I can just delete ~/.juju/charmcache and that should fix it if it is caching?
[18:55] <Azendale1> jcastro: Because I know I specifically tested the bug still happened with the bzr version, then put my change in, and the bug still happened. So I could definitely see how it would cache that, as it's the same thing coming from the same location
[18:56] <jcastro> yeah I think that should be enough
[18:57] <Azendale1> jcastro: ok, thanks, I'll try that
[18:57] <jcastro> lmk what happens
[18:57] <jcastro> that'd be a good tip to document!
[18:58] <Azendale1> jcastro: Ok, will do. I probably won't have time today but will probably get it over the weekend
[18:58] <jcastro> no worries!
[18:58] <jcastro> thanks for using Juju!
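The cache-clearing tip from this exchange, as a one-line sketch (the charmcache path is the one Azendale1 names; honoring JUJU_HOME as an override is my assumption):

```shell
# Remove juju's local charm cache so a modified local charm is re-read on
# the next deploy instead of a stale cached copy.
cache_dir="${JUJU_HOME:-$HOME/.juju}/charmcache"
rm -rf "$cache_dir"
```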
[19:22] <kentb> is there a juju-gui for saucy yet (or will there be one)?
[19:28] <marcoceppi> kentb: probably not, though you can force gui to deploy to saucy
[19:29] <kentb> marcoceppi: ah, ok. I see. thanks!
[20:02] <sodre> devs: Is there a direct way to force a particular script to run on new-machines controlled by juju ?
[20:02] <marcoceppi> sodre: could you expand a bit?
[20:07] <sodre> sure.
[20:07] <sodre> Current OpenStack+Havana+Neutron does not assign proper hostnames to nodes.
[20:07] <marcoceppi> sodre: what you're asking about doing can be done with a subordinate
[20:07] <sodre> I would like to add a script in the beginning to add 127.0.1.1 <hostname> to /etc/hosts file.
[20:08] <marcoceppi> sodre: or, you can use `juju ssh` and a script to loop over all the machines and do this change
[20:08] <sodre> so... instead of just calling juju deploy hadoop
[20:08] <sodre> I would juju add-machine -n 10
[20:08] <sodre> juju ssh  change hosts
[20:09] <sodre> and juju deploy hadoop --to <machine id>
[20:09] <marcoceppi> that's really clunky. What's breaking with the hostname?
[20:09] <marcoceppi> Have you filed this as a bug in core?
[20:09] <sodre> it is not an issue with juju
[20:10] <marcoceppi> sodre: while it might not be a juju issue, it's a problem that could be solved by juju during machine provisioning
[20:10] <sodre> it is more of an issue with OpenStack+Neutron not serving DNS properly.
[20:10] <sodre> I see.
[20:10] <marcoceppi> actually, it probably wouldn't be addressed well in core
[20:10] <marcoceppi> too many things to go wrong
[20:10] <sodre> yeah.
[20:10] <marcoceppi> back to my original question, what actually breaks because of this?
[20:11] <marcoceppi> depending on what/where it breaks you might be able to get around it
[20:11] <sodre> okay: The final issue is Java's handling of hostnames and the Hadoop charm's dependency on the hostname.
[20:11] <sodre> if you checkout the hadoop charm and look at the install hook, you'll see what I mean.
[20:12] <marcoceppi> sodre: Okay, well one way is to just fork the hadoop charm, change the install hook to add that line to /etc/hosts at the top of the hook then use juju deploy from your local copy of the charm
[20:12] <marcoceppi> not pretty, but it's far better than having to juju deploy with pre-allocated machines
[20:13] <sodre> true true...
[20:13] <sodre> let me ask another question: Is there any harm (from juju's perspective) to change the hostname of the instance after it has been brought up ?
[20:15] <marcoceppi> sodre: nope, shouldn't be
[20:15] <sodre> okay.  let me try that route then.
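The /etc/hosts workaround discussed above, sketched as an idempotent helper. The hosts file is a parameter only so the logic can be exercised outside a real machine; in a forked install hook it would be called against /etc/hosts itself (the function name is hypothetical):

```shell
# Append "127.0.1.1 <name>" to a hosts file if it is not already present.
# Idempotent, so it is safe at the top of an install hook or when replayed
# over every machine with `juju ssh`.
ensure_local_hostname() {
  local hosts_file="$1" name="$2"
  grep -q "^127\.0\.1\.1[[:space:]]\{1,\}$name\$" "$hosts_file" 2>/dev/null || \
    printf '127.0.1.1 %s\n' "$name" >> "$hosts_file"
}

# In a forked hadoop charm, the first line of hooks/install (which runs as
# root) could be:
#   ensure_local_hostname /etc/hosts "$(hostname)"
```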
[20:17] <sodre> marcoceppi: do you know anything about Canonical's private cloud installation ($9000 option), e.g. what do they actually install and deliver?
[20:26] <jcastro> I don't know much about it
[20:26] <jcastro> you mean the Kickstart?
[20:26] <jcastro> I didn't know we did those anymore
[20:26] <sodre> yes.
[20:27] <sodre> http://www.ubuntu.com/cloud/tools/jumpstart
[20:28] <jcastro> so you mean other than what's listed in the "what the engineer will do?" part?
[20:28] <sodre> :)
[20:29] <jcastro> I don't really know, but if you have specific questions you can send me an email and I can get you a response asap from someone who knows
[20:29] <sodre> it's much better explained than in the past
[20:30] <sodre> i got most of the answers now.
[20:30] <jcastro> oh ok, whew. :)
[20:46] <jcastro> bcsaller, here's that bundle I couldn't get launched: https://code.launchpad.net/~jorge/charms/bundles/wordpress/bundle
[20:46] <jcastro> bcsaller, the PPA with the quickstart is mentioned on the list by Gary
[20:47] <jcastro> basically it fires off the bootstrap, the gui all fine, then says "deploying wordpress and mysql" and then returns a prompt
[20:47] <jcastro> but the services themselves aren't launched
[20:47] <bcsaller> jcastro: I'll take a look
[21:33] <bcsaller> jcastro: I see the same thing as you on ec2
[21:33] <jcastro> whew!
[21:36] <bcsaller> digging into the logs a little bit, so far I don't see deployer being triggered in a way I'd expect
[23:02] <bjf> what does this mean?: ERROR cannot start bootstrap instance: cannot run instances: gomaasapi: got error back from server: 409 CONFLICT
[23:03] <bjf> and if i run "juju bootstrap" again it says "ERROR environment is already bootstrapped"
[23:15] <bjf> juju status says: ERROR Unable to connect to environment "".
[23:15] <bjf> so which is it? do i have an environment or not, and how do i recover from wherever i am?