[07:50] <parlos> Good Morning, I've got a question wrt. Landscape (standalone) and MAAS. My aim is to use autopilot to deploy OpenStack. My initially commissioned MAAS nodes only had single nics. Landscape/Autopilot complained, so I hooked up one more network and recommissioned that node. However, Landscape/Autopilot did not detect the change. So I then removed the node, started it from scratch, and commissioned it.. MAAS detected the new network automatically, b
[09:22] <gaurangt-> hi, is it mandatory to specify the network spaces while deploying the applications into LXD?
[13:42] <orf__> has anyone here actually ever successfully deployed Juju to a vsphere host?
[13:43] <orf__> it apparently needs a direct connection to the vsphere host, as well as the API
[13:43] <orf__> something which isn't documented anywhere.
[13:59] <rick_h> gaurangt-: basically if you use spaces somewhere in the model then you have to do it everywhere to make sure it's clear. If there's no spaces in the model then it should just work sans spaces.
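A minimal sketch of the all-or-nothing space binding rick_h describes; the space and application names below are hypothetical:

```shell
# Spaces only exist on providers that model networks, e.g. MAAS.
juju spaces

# Bind every endpoint of an application to one space:
juju deploy postgresql --bind internal

# Or bind per endpoint; the bare space name before the pairs is the
# default for any endpoint not listed explicitly:
juju deploy haproxy --bind "internal website=public"
```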
[14:00] <rick_h> orf__: I've not, but some folks have as they've tested the documentation and stokachu had some updates about conjure-up working better with vsphere recently
[14:00] <rick_h> orf__: http://blog.astokes.org/conjure-up-dev-summary-aws-cloud-native-integration-and-vsphere-3/
[14:00] <stokachu> orf__: yea juju needs to actually talk to the api
[14:00] <stokachu> orf__: im not sure how else it would work
[14:01] <stokachu> as for the host access im not entirely sure on that
[14:12] <rahworkx> Hello all, Is there a way to search all controllers/models for an aws instance-id?
[14:13] <gaurangt-> rick_h, thanks.. that's what I have observed too.
[14:13] <orf__> stokachu: sure, but it tries to contact the vsphere *host*
[14:13] <orf__> which is firewalled off, as it should be
[14:13] <orf__> `juju.cmd.juju.commands bootstrap.go:492 failed to bootstrap model: cannot start bootstrap instance: failed to create instance in any availability zone: uploading ubuntu-xenial-16.04-cloudimg.vmdk to https://10.32.252.51/nfc/52774700-37f1-4a46-cc1f-de20c50f94e5/disk-0.vmdk: Post https://10.32.252.51/nfc/52774700-37f1-4a46-cc1f-de20c50f94e5/disk-0.vmdk: Service Unavailable`
[14:13] <orf__> that IP is the host, the API is accessible
[14:13] <orf__> our vsphere guy says it should upload it to the datastore, then create a VM from that vmdk in the datastore
[14:14] <orf__> it shouldn't be uploading anything to 10.32.252.51 as far as I can tell
[14:14] <stokachu> orf__: ok, sec
[14:14] <orf__> thanks for the link rick_h :)
[14:16] <stokachu> orf__: can you add your input to https://bugs.launchpad.net/juju/+bug/1711019
[14:16] <mup> Bug #1711019: vsphere: cache VMDKs in datastore to avoid repeated downloads <juju:Triaged> <https://launchpad.net/bugs/1711019>
[14:16] <stokachu> it's about repeated downloads but also applies to your issue
[14:17] <stokachu> orf__: ill make sure it gets on the radar
[14:19] <orf__> thank you :)
[14:19] <stokachu> orf__: anytime, sorry about the hiccup
[14:34] <orf__> done, no problem stokachu :)
[14:34] <stokachu> orf__: awesome ty!
[14:34] <orf__> I've been shaving yaks with this setup. Going to see if the conjure-up dev channel is better
[14:34] <stokachu> yea edge is much better
[15:43] <stormmore> morning juju world o/
[15:44] <rick_h> morning stormmore
[16:38] <Dwellr> still playing with juju kubernetes-core / canonical-kubernetes .. I can see that once I bring up the world, and deploy microbot as per https://jujucharms.com/kubernetes-core/ that I _can_ reach my service if I access it via the kubernetes-worker/0 machine ip.. but that machine ip is 10.102.82.* and not reachable via my machines adapter address of 10.0.2.15, nor via it's other adapter address of 192.168.1.* .. I feel I'm missing
[16:38] <Dwellr> something obvious..
[16:38] <Dwellr> like in the example url, when it does kubectl get ingress, it has a reply come back with 172.31.26.109 as an address, whereas when I do the same, that field is blank.
[16:53] <Dwellr> hmm.. looks like this might be relevant https://github.com/kubernetes/kubernetes/issues/49614
[17:10] <tvansteenburgh> Dwellr: interesting, are you gonna try that fix?
[17:10] <tvansteenburgh> maybe our ingress controller needs to be updated
[17:11] <Dwellr> I tried deploying the rbac ingress, but it wouldn't let me create the roles..
[17:11] <Dwellr> Error from server (Forbidden): error when creating "https://raw.githubusercontent.com/kubernetes/ingress/master/examples/rbac/nginx/nginx-ingress-controller-rbac.yml": roles.rbac.authorization.k8s.io "nginx-ingress-role" is forbidden: attempt to grant extra privileges: [.... long list of privileges ... ]
[17:11] <tvansteenburgh> Dwellr: yeah, rbac is not on by default
[17:13] <Dwellr> well I'm just looking for the simplest way to make this work.. should I figure out how to enable rbac? or figure out how to run a newer ingress that isn't rbac ?
[17:15] <tvansteenburgh> Dwellr: we have a test bundle with rbac enabled by default if you want to try that
[17:15] <Dwellr> sure.. how ? =)
[17:16] <Dwellr> (do I need to start fresh? I'm in a virtualbox pc, so pretty each to spin up a new one..  or is this something I can magically switch to from a non-rbac enabled conjure-up kubernetes-core install)
[17:17] <tvansteenburgh> you'd need to redeploy. this is something we're working on but isn't released yet
[17:18] <tvansteenburgh> or you could try updating to a newer ingress that's not rbac enabled
[17:18] <tvansteenburgh> if there is one
[17:18] <Dwellr> lets try that first =)
[17:19] <Dwellr> of course, I already blew away my ingress-controller replication controller thing.. else mebbe I could have just altered that ;p
[17:24] <Dwellr> yeah.. found this too.. https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/279
[17:27] <tvansteenburgh> Dwellr: good find, i'd like to know if that actually fixes your problem
[17:34] <Dwellr> hmm.. well.. I'm running the new one, and I can still get to the service via it's 10.102.82.39 address, but not via my 192.168.1.* or via 127.0.0.1 from the host etc
[17:47] <Dwellr> makes little sense to me.. dont understand how other ppl are routing any traffic into their conjured up kubes.. since they seem to live on their own network range, disconnected from the connectivity of the host
[17:55] <Dwellr> hmm... lxc network attach interface-name kubernetes
[17:55] <Dwellr> (from a comment on https://stgraber.org/2017/01/13/kubernetes-inside-lxd/)
[17:56] <Dwellr> although lxc doesn't seem to have a network arg
[17:57] <Dwellr> oookie.. I'm on lxc 2.0.10
[17:57] <Dwellr> sounds like 2.3 changes a lotta stuff
[18:00] <stokachu> Dwellr: yea cli arguments changed/updated
[18:00] <Dwellr> I used conjure up to deploy to lxd ..
[18:00] <Dwellr> probly explains why my `sudo lxc list` comes back empty when running kube inside lxd ?
[18:00] <stokachu> nah we bundled lxd with conjure-up
[18:00] <stokachu> conjure-up.lxc list
[18:00] <Dwellr> oooh.. now there's an idea
[18:00] <stokachu> which is changing in the next release
[18:00] <stokachu> b/c bundling lxd didnt help us like we thought
[18:00] <Dwellr> and that gives me version 2.14
[18:00] <stokachu> yea
[18:00] <stokachu> that'll have the network commands
[18:00] <Dwellr> and ... I can see the worker node is connected to my eth0 when I need it connected to eth1
[18:00] <Dwellr> this might be what I'm looking for =)
[18:00] <Dwellr> actually scratch that
[18:01] <Dwellr> eth0 is the lxd's eth0 not mine =)
[18:03] <Dwellr> so the worker node has docker0, eth0, cni0, and flannel.1 network interfaces.. and the eth0 has the address that I have to use at the mo to access the worker with the ingress on it..
[18:08] <Dwellr> is the conjureup networking documented somewhere so I can figure out what it's trying to do ?
[18:09] <Dwellr> eg, if I do `conjure-up.lxc network list` I can see it built 2 bridge interfaces.. etc..
[18:09] <Dwellr> not too sure why
[18:11] <stokachu> Dwellr: unfortunately, no, the reason for the additional bridge was for openstack due to neutron needing an additional network
[18:11] <stokachu> Dwellr: this has all been fixed, and i'm prepping a candidate now which you probably should use
[18:11] <Dwellr> hehe =) just shout when it's good to go =)
[18:11] <Dwellr> although I'm still learning a load by digging around
[18:11] <stokachu> Dwellr: thanks, it's building now, shouldn't be too much longer
[18:12] <stokachu> Dwellr: lxd will be the snap lxd which is version 2.17
[18:13] <Dwellr> like it's great to have seen the lxc list =) .. I tried adding my physical adapter to the worker container via   conjure-up.lxc network attach enp0s8 juju-d81eff-1 eth1  .. which returned ok, but  conjure-up.lxc list  doesn't show it
[18:13] <stokachu> what about conjure-up.lxc info juju-d81eff-1
[18:14] <Dwellr> does not list an eth1
[18:14] <Dwellr> and no address in the Ips: section matches the current ip for enp0s8
[18:15] <stokachu> hmm
[18:15] <stokachu> you can edit the profile which should match the model
[18:15] <stokachu> so `juju models`
[18:15] <stokachu> the conjure-up.lxc profile list
[18:16] <stokachu> but thats for all containers using that profile
[18:16] <stokachu> not sure why the network attach on the single container didnt update itself with it
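What stokachu suggests can be sketched like this (profile and bridge names are hypothetical; with conjure-up's bundled lxd each command is prefixed with `conjure-up.`):

```shell
# Juju keeps one profile per model; find it first.
conjure-up.lxc profile list

# Add a second NIC to that profile, bridged onto a host bridge.
# Note this changes every container using the profile, not just one.
conjure-up.lxc profile device add juju-d81eff eth1 nic \
    nictype=bridged parent=br1
```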
[18:17] <Dwellr> I've not messed with lxc/lxd before =) only docker/virtualbox/vagrant/etc
[18:17] <Dwellr> so this is all kinda interesting.. more tools to figure out
[18:17] <stokachu> cool, https://discuss.linuxcontainers.org/ is a great forum to visit
[18:17] <stokachu> for more help
[18:17] <Dwellr> aye, tho then they kinda want me to understand what the current stuff is trying to do ;p which I'm still figuring out
[18:18] <stokachu> :)
[18:20] <Dwellr> interesting.. ok.. I think mebbe adding it to a profile might work, can I change the profile for a running container? hmm.. think I can..
[18:21] <Dwellr> let me try lxc profile copy to clone the current one used by the worker, then assign the worker to the clone
[18:22] <stokachu> yea you can change it for running container
[18:22] <stokachu> it'll update it
[18:25] <Dwellr> well.. the profile switcharoo worked, but the container still has no eth1 .. even if I exec into it and check with ifconfig
[18:26] <Dwellr> mebbe the container needs to restart?
[18:27]  * Dwellr hits the container with the lxc restart hammer.
[18:28] <Dwellr> thing is, if I ask lxc network list .. it says the enp0s8 device is used by 1 container
[18:29] <Dwellr> and if I do lxc network show enp0s8, I can see it's in use by the worker container
[18:33] <ybaumy> god i love vmware support. they recommand to use vsphere client 6.0 u3 for resizing a lun on vsphere 6.5. that went well. we just lost 13TB of data
[18:33] <ybaumy> im so happy right now i could die
[18:34] <Dwellr> 13tb.. ouch
[18:34] <Dwellr> you has backups.. right ?
[18:35] <ybaumy> we have backups but they are from last night. and its a sql server where the customer migrates big data into it the whole day .. so basically we lost a whole day
[18:36] <ybaumy> the good thing is the log backups didnt work
[18:36] <ybaumy> :D
[18:36] <ybaumy> and nobody cared
[18:37] <ybaumy> im not vmware team just storage and linux/unix. so its not my business to check
[18:38] <ybaumy> so customer looses a day + restore time
[18:42] <ybaumy> thank god im already at home and there is beer
[18:44] <Dwellr> stokachu: ahh.. mebbe I can't add a physical device directly to a profile .. mebbe it has to be a bridge..
[18:44] <stokachu> ah
[18:44] <stokachu> yea
[18:45] <xarses> Hi, I'm having problems getting a bootstrap done to a private openstack cloud, I've generated the image meta-data, and either locally, or http hosted, it fails for "index file has no data for cloud"
[18:46] <Dwellr> this is gonna make my head hurt =) I've got enp0s8 on this system that's a physical interface as far as it knows, but is actually a bridge to my real lan (because I'm in virtualbox, with the network set to bridged) .. so I now need to get that interface into my worker container so I can open ports on it..
[18:47] <tvansteenburgh> Dwellr: https://www.youtube.com/watch?v=3f57PovdY44
[18:47] <Dwellr> ta =)
[18:50] <Dwellr> aha.. type:nic ... supports nictype:physical
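The `nictype: physical` route Dwellr found, sketched against the container named earlier; unlike a bridge, a physical NIC can live in only one container at a time, and the host loses the device while the container holds it:

```shell
# Pass the host interface enp0s8 straight into the worker container
# as its eth1.
conjure-up.lxc config device add juju-d81eff-1 eth1 nic \
    nictype=physical parent=enp0s8

# The guest may need a restart before the new device shows up.
conjure-up.lxc restart juju-d81eff-1
```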
[19:13] <Dwellr> and this is why I play in vagrant.. ended up somehow messing up my network so that lxc thought my physical adapter (that's actually my bridge to my lan via virtualbox) was now actually a bridge, which somehow caused it to move the real adapter to be eth1, which then conflicted with other stuff in lxc, and eventually it wouldnt let me delete that network because it was 'in use'.. yay..
[19:13] <Dwellr> vagrant destroy && vagrant up =)
[19:49] <Dwellr> ooh.. I found this.. =) https://github.com/evanhempel/lxc-portforward
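A port forward like the linked script sets up can also be done by hand with iptables DNAT rules; the container address and port here are examples taken from the discussion above:

```shell
# Forward host port 80 to the ingress controller inside the worker
# container.
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 \
    -j DNAT --to-destination 10.102.82.39:80

# NAT the return path in case the LXD bridge doesn't already
# masquerade container traffic.
sudo iptables -t nat -A POSTROUTING -d 10.102.82.39 -p tcp --dport 80 \
    -j MASQUERADE
```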
[19:59] <magicaltrout> hello folks
[19:59] <magicaltrout> i have another CDK question I'm trying to answer before it gets asked again since the first time we tested CDK
[20:00] <magicaltrout> "I was wondering if it is possible to support OpenStack Cinder and NFS StorageClass for testing for now." does that mean anything to anyone?! ;)
[20:01] <tvansteenburgh> magicaltrout: sure, cdk supports everything that upstream does
[20:01] <magicaltrout> ah yeah that "its the same as upstream" sales pitch ;)
[20:01] <magicaltrout> okay
[20:01] <tvansteenburgh> magicaltrout: are you asking for how to do it?
[20:02] <magicaltrout> hehe, no just getting an answer
[20:02] <magicaltrout> i can fiddle around to figure it out
[20:10] <xarses> any around that can help with getting bootstrap going on openstack?
[20:15] <rick_h> hml: have a few min to help out xarses ? or beisner is someone around that might know the process a bit better?
[20:15] <hml> sure
[20:15] <hml> xaras: how can I help?
[20:16] <hml> xarses ^^
[20:17] <xarses> trying to get going. generated metadata, either passed as `--config image-metadata-url` and a webserver, or via `--metadata-source /path/to/local`  I always get "skipping index ... because of missing information: index file has no data for cloud"
[20:18] <hml> xarses: that sounds like the path provided isn’t enough for juju to find it.  if you do the bootstrap with --debug, the path juju is searching at will be shown -
[20:18] <hml> xarses: you can then change the part of the path you’re providing to
[20:19] <xarses> it finds the index when I have the stream data hosted on the webserver, and implies the same over file
[20:19] <xarses> it just refuses to find my cloud name in the index
[20:20] <xarses> the generated data doesn't explicitly have a cloud name in it
[20:20] <xarses> I'm guessing its looking for some pattern match, but no clue what pattern its looking for
[20:21] <hml> xarsas: can you provide a pastebin of the bootstrap output please?
[20:21] <xarses> I'd have to redact a bit of data, but sure
[20:25] <hml> xarsas: that should be okay
[20:29] <ybaumy> great we are restoring 13Tb with less then 3Gbit bandwidth..life is good
[20:31] <xarses> hml: https://gist.github.com/xarses/307a07d290fcc9f48008b3ae1d192f05
[20:36] <kwmonroe> hahahaha... i know what rick_h did:  https://github.com/juju/charmstore-client/issues/143
[20:36] <hml> xarses: juju is looking for the openstack endpoint and region provided with the openstack cloud config within the index.json… and can’t find it.
[20:37] <rick_h> kwmonroe: :)
[20:37] <rick_h> kwmonroe: 3 times now...
[20:37] <magicaltrout> i've done that a bunch of times :'(
[20:38] <magicaltrout> its the saddest thing ever
[20:38] <hml> xarses: the path to the index.json file listed is correct yes?  there are some files not found messages above
[20:38] <xarses> ya, one is found
[20:38] <kwmonroe> so, fwiw rick_h, if you would "charm proof" before you "charm push", you'd see some bizaro (albeit informational) output.  that would tell ya not to push :)
[20:39] <xarses> hml: ya, that's exactly what I suspect, however the directions for generating the metadata don't have any context for providing the cloud; only the region is reflected in the index.json file
[20:39] <rick_h> kwmonroe: but I'm happy. my interface updates work, charm is working, woot woot
[20:39] <rick_h> just have to find a path through code review now he
[20:39] <rick_h> heh
[20:40] <hml> xarses: the cloud is defined by the endpoint in the metadata
[20:40] <xarses> well, then the endpoints match
[20:41] <hml> xarses: i’m thinking the error messages aren’t good.
[20:41] <hml> xarses: does this file exist:  http://somelocalhost:8000/images/streams/v1/index.json
[20:42] <hml> at that exact location?
[20:44] <xarses> hml: https://gist.github.com/xarses/307a07d290fcc9f48008b3ae1d192f05#file-gistfile2-txt
[20:45]  * hml lookin
[20:45] <magicaltrout> xarses reminds me of xerces which makes me real sad because those Java libraries are a right PITA......
[20:45] <xarses> java is a right PITA....
[20:45] <xarses> =)
[20:46] <magicaltrout> as a java developer, i am okay with it, some old shit is the worst though :)
[20:47] <magicaltrout> of course the other pun with that nick is you could say Java is a right Pain In The xarses .......
[20:47] <magicaltrout> its been a long day
[20:48] <xarses> hml, I also just posted the metadata generate-image cmd and output
[20:48] <kwmonroe> well, it'd have to be "Pain In The xArses" because that's how acronyms work magicaltrout.
[20:50] <xarses> I've partly followed https://jujucharms.com/docs/stable/howto-privatecloud, I haven't done any of the swift nonsense since I dont have an object store; I'm just using python -m SimpleHTTPServer on the folder
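The generation step xarses describes, sketched with placeholder values; note the -r region string must exactly match the region in the cloud definition, which is what turns out to cause the "index file has no data for cloud" error here:

```shell
# Generate simplestreams image metadata for a private OpenStack
# cloud. Image id, region, and endpoint below are placeholders.
juju metadata generate-image -d ~/simplestreams \
    -i <glance-image-id> -s xenial \
    -r RegionOne -u https://keystone.example.com:5000/v3

# Serve the tree for `--config image-metadata-url`:
cd ~/simplestreams && python -m SimpleHTTPServer 8000
```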
[20:50] <hml> xarses: found the updates - trying to find what’s going on here… not jumping out at me
[20:50] <xarses> I guess I should add this random endpoint that they added to the catalog though
[20:51] <hml> xarses: the endpoint added for product-streams assumes that swift etc is used
[20:51] <xarses> its a http get source at that point, adding it shouldn't matter
[20:52] <xarses> but ya, thats what I initially thought
[20:52] <xarses> but this output is useless for triaging this issue
[20:53] <xarses> I was hoping that ya'll would have a better idea of what's up
[20:53] <hml> xarses: the usual problem is when the front piece of the path for the metadata doesn’t match what juju is expecting and it can’t find the file
[20:54] <hml> xarses: i’m concerned about the file not found messages in the output
[20:55] <xarses> well, generate-image didn't make any of those
[20:55] <xarses> should I change the cloudname from custom?
[20:56] <hml> xarses: no - mine says the same
[21:01] <xarses> uh, I just regenerated it a bunch more times without the endpoint. it looks like I may have had a problem with the region name I passed to generate-image
[21:02] <xarses> urgh, yep looked back in the data I redacted, the region name was slightly transposed
[21:03] <hml> xarses: that would do it.
[21:03]  * xarses with no hair left to pull out, pulls out random stubble 
[21:04] <xarses> ok, so now it doesn't respect the zone I passed
[21:04] <xarses> so how do I control the availability zone passed?
[21:04] <hml> xarses: yes, openstack is the hardest to bootstrap
[21:05] <xarses> lol, looks like it went through every az and finally used the one that worked with the network I passed
[21:05] <hml> xarses: yes, it will do that - though there are some bugs there…
[21:05] <xarses> although its still not the az I wanted
[21:05] <xarses> zone appears to be valid in the models
[21:06] <xarses> is there an option that bootstrap will take?
[21:06] <hml> xarses: if the network AZ name doesn’t match the AZ for the compute nodes…  so you might have gotten lucky
[21:06] <hml> xarses: looking for the option
[21:06] <xarses> no, we don't have a version of openstack that has a working version of both
[21:06] <xarses> network az don't really do anything useful in mitaka
[21:07] <xarses> and we have routed provider networks, but the code that makes provisioning work without forcing both network and az is only present in ocata
[21:08] <xarses> whatever, if the instance will come up then I can image it and re-launch it where I need
[21:09] <xarses> hmm, it seems to be waiting on "sudo: unable to resolve host juju-e290f0-controller-0"
[21:12] <hml> xarses: not sure i’ve seen that one?
[21:12] <hml> xarses: sometimes the connection take a bit though
[21:12] <xarses> we don't have a dns service
[21:12] <xarses> it looks like it set up a new security group
[21:12] <xarses> that doesn't accept icmp
[21:12] <hml> xarses: that should be fine… i’m not running it either
[21:13] <hml> xarses: yes it does setup a new sec group
[21:14] <xarses> ah, yep doesn't accept icmp
[21:14] <xarses> but does accept 22
[21:14] <xarses> of course it sent the wrong key by default, but network is good
[21:14] <xarses> its just sitting here doing nothing then
[21:15] <xarses> just before it tried to login to the ip, then went to fetch agent tools
[21:16] <xarses> then this sudo unable to resolve
[21:16] <xarses> hmm
[21:16] <xarses> its logged into the thing
[21:21] <xarses> hmm
[21:21] <hml> xarses: juju bootstrap --to zone=nova - to specify the AZ
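hml's placement directive in context, a sketch (cloud name, zone, and network id are placeholders):

```shell
# --to zone=<az> pins the controller instance to one availability
# zone instead of letting juju walk through every AZ until one works.
juju bootstrap my-openstack --to zone=nova \
    --metadata-source ~/simplestreams \
    --config network=<neutron-network-id> \
    --debug
```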
[21:21] <xarses> hml, oh nice thanks
[21:21] <xarses> it looks like its built the instance ok, I've logged into it
[21:22] <xarses> however its stuck downloading https://streams.canonical.com/juju/tools/agent/2.2.2/juju-2.2.2-ubuntu-amd64.tgz
[21:22] <hml> xarses: so that’s the intance for the controller
[21:22] <xarses> I was able to wget it and it only took like a 30sec
[21:22] <xarses> ya, I'm snooping the ps tree on the contoller
[21:23] <hml> xarses: new toy?  :-)
[21:25] <xarses> 2^19 pieces. Assembly required. For ages 9+. CAUTION: Contains complex parts may cause brain hemorrhaging and lack of cognitive reasoning
[21:28] <xarses> its still stuck here ...
[21:29] <xarses> not sure what to do
[21:30] <hml> xarses: hrm…
[21:32] <xarses> ahh, figured out the sudo message
[21:32] <hml> xarses: that one i’m not sure on… the bootstrap does have a timeout on it.  it doesn’t ctrl-c well.
[21:32] <xarses> its just a stderr message because the hostname isn't resolvable, otherwise its happy
[21:34] <xarses> strace of the curl command that stuck pulling its socket
[21:35] <hml> xarses:  did you bootstrap with use-floating-ips?
[21:35] <xarses> nope
[21:35] <hml> xarses: can the instance get to the outside word
[21:35] <hml> world
[21:35] <xarses> yea
[21:35] <xarses> I was able to download the file fine with wget on the controller
[21:37] <hml> wallyworld: have you seen where bootstrap gets stuck downloading the tools to the new controller instance…. but you can download them fine by hand to that instance?
[21:37] <xarses> its downloading the file very slowly with this curl command
[21:37] <xarses> but then it like gets stuck
[21:38] <wallyworld> i haven't seen that, i've seen where the bootstrap instance is firewalled and can't download at all
[21:40] <xarses> well neat
[21:40] <xarses> curl is broken
[21:41] <xarses> 0 20.8M    0 32768    0     0    633      0  9:36:08  0:00:51  9:35:17  2896
[21:41] <xarses> 0 20.8M    0 32768    0     0    498      0 12:12:18  0:01:05 12:11:13     0
[21:50] <xarses> uhg, something on the network here must be blocking it
[21:50] <xarses> I can't fetch the file at all now
[21:50]  * xarses continues to bang head against desk
[21:57] <hml> xarses: can the instance get things from a local box?  you can provide both images and tools with the metadata flag  - though i haven’t tried the tools part.
[21:58] <xarses> I was looking though bugs that implied that both can't be passed as args
[21:59] <xarses> its supposed to be able to get things, but my box running the command can't fetch the file currently either
[21:59] <hml> xarses: if you have the images and tools in the same directory structure - it would work.
[21:59] <xarses> can I generate the metadata for this too? I can get the file from much futher parts in the network
[22:00]  * xarses tries to get off this merry-go-round
[22:01] <hml> hml: i think so… looking for how it works.
[22:01] <hml> xarses: ^^^ I can’t always type :-)
[22:06] <hml> xarses: i just had to put the tools in a specific directory relative to where i put the images… will gather a pastebin for you -
[22:06] <xarses> thx
[22:12] <hml> xarses: https://paste.ubuntu.com/25441295/
[22:12] <hml> xarses: i’m not sure what will happen if you try the images and tools in different locations on the cli
[22:12] <hml> xarses: i do have a the product-streams service configured too
[22:13] <hml> xarses: i downloaded the juju-2.2.2-ubuntu-amd64.tgz from streams.canonical.com - just get the one which matches your version of juju and the machine type
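hml's directory advice as a runnable skeleton; the exact subpaths are an assumption based on the private-cloud docs (hml's pastebin has the authoritative layout):

```shell
# Combined source tree for `juju bootstrap --metadata-source <dir>`,
# assuming juju looks for image metadata under <dir>/images and agent
# tarballs under <dir>/tools/released.
SRC=/tmp/juju-metadata-demo
mkdir -p "$SRC/images/streams/v1" "$SRC/tools/released"

# `juju metadata generate-image -d "$SRC" ...` writes into
# images/streams/v1; drop the matching agent tarball alongside,
# e.g. juju-2.2.2-ubuntu-amd64.tgz into tools/released/.

find "$SRC" -type d | sort
```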
[22:14] <xarses> ya, 2.2.2
[22:14] <xarses> I have the url that the controller is trying to use
[22:15] <hml> xarses: that’s what i used
[22:40] <xarses> sigh, it finally died trying on gui
[22:42] <xarses> and on the re-run, its just sitting around waiting for connect
[22:43] <xarses>  DEBUG juju.provider.common bootstrap.go:497 connection attempt for ... failed: ssh: connect to host ... port 22: Connection refused
[22:43] <xarses> repeated several times, don't have the tools copy set up yet
[23:01] <xarses> yay, slowly getting further every time