[02:32] <magicaltrout> lazyPower: https://gist.github.com/buggtb/6cfa67a2ae9c0c97321cae86ca73da6c whats the current work around for that?
[02:36] <lazyPower> magicaltrout: that looks reminiscent of an old version of the api-lb
[02:36] <magicaltrout> yeah i updated it
[02:36] <magicaltrout> that has indeed gone
[02:37] <magicaltrout> instead i get Error from server: error dialing backend: dial tcp: lookup k8s-workers-1 on 10.105.255.250:53: no such host
[02:38] <lazyPower> that looks like the dns pod is dead
[02:38] <lazyPower> this is a fun suite of errors
[02:39] <lazyPower> magicaltrout: are you getting both errors, or one successively after the other? or has the error changed?
[02:39] <magicaltrout> it changed now to just the above
[02:39] <lazyPower> magicaltrout: kubectl get svc --all-namespaces  --- is that the VIP of your DNS service?
[02:49] <magicaltrout> my own in house dns lazyPower ?
[02:49] <lazyPower> magicaltrout: kubernetes runs a kubedns pod in the kube-system namespace that provides cluster level dns
[02:49] <magicaltrout> ah
[02:49] <lazyPower> port 53 is the dns port, i presume it was querying kube-dns, or it may be trying to resolve the hostname for k8s-workers-1. Depends entirely on where the error is being emitted from
[02:52] <magicaltrout> i dunno where that ip address subnet comes from lazyPower
[02:52] <magicaltrout> its on openstack and some public ip's are in 10.104.1.0/24
[02:53] <magicaltrout> so I guess there is probably some 10.105 ip addresses as well
[02:53] <lazyPower> magicaltrout: thats not promising... there is a VIP range declared by kubernetes when you deploy it as cluster-cidr. it defaults to a /16
[02:53] <lazyPower> its quite possible that address range is what was picked for the vip of those services.
[06:50] <fengxia41103> Hi Juju, I need to create a CentOS node. What's the quickest way?
[06:51] <fengxia41103> I have tried MAAS and using add-machine to add a CentOS, failed at "no tools found"
[07:20] <kjackal> Good morning Juju world!
[08:39] <jujulearn> anyone with openstack-base experience?
[08:39] <erik_lonroth_> Hmm. When I'm running "juju status", the command is not returning but gets stuck in a blocking state. I've recently changed the IP address of the server so I guess this might have something to do with it? How can I debug this?
[08:40] <jujulearn> How long was the wait
[08:41] <erik_lonroth_> its still blocking
[08:41] <erik_lonroth_> about 3-4 minutes now
[08:43] <jujulearn> are u able to see some output with juju show-machine 0
[08:43] <erik_lonroth_> I think I'll reboot. I've changed the network so I think perhaps lxc/lxd even has something to do with it.
[08:44] <jujulearn> the above command will tell you if your lxd has any issues.
[08:44] <erik_lonroth_> OK, I'll try once the machine comes back up in a few
[08:47] <erik_lonroth_> https://pastebin.com/Cv20bZRN
[08:47] <erik_lonroth_> It seems its still messed up
[08:47] <erik_lonroth_> Oh, no
[08:47] <erik_lonroth_> Now it started to work again. I think I might have been too fast
[08:48] <jujulearn> good
[08:48] <jujulearn> are u using openstack-base?
[08:49] <erik_lonroth_> No I don't think so. I'm not sure what it is.
[08:51] <erik_lonroth_> Is there any way for juju to update the IP addresses listed in the "juju status" command? I've changed the ipv6 address scheme and even though the machines have picked up the new IPv6, "juju status" still shows the wrong ipv6 address.
[08:53] <jujulearn> whats the output of juju subnets
[08:57] <erik_lonroth_> "No subnets to display"
[09:25] <erik_lonroth_> I filed a bug for this: https://bugs.launchpad.net/juju/+bug/1691977
[09:25] <mup> Bug #1691977: ipv6 addresses not updated after changed ipv6 subnet <juju:New> <https://launchpad.net/bugs/1691977>
[09:36] <dakj__> jamespage: hi james....have a look here https://paste.ubuntu.com/24603858/
[09:37] <jamespage> dakj__: well that looks much better
[09:38] <jamespage> what did you change?
[09:56] <dakj__> jamespage: my desire right now? To "kill" a colleague of mine :-). When I saw he had wired the port channel between the IBM host (ESX) and the Cisco switch wrong, my face went red. Fortunately it's a lab and not production. So anyway, now I also want to redo the landscape deploy. But why are ceph-osd/21, /22 and /23 blocked?
[09:57] <jamespage> dakj__: osd-devices configuration is incorrect
[09:57] <jamespage> dakj__: what's the block device name for the second disk in each of those servers?
[09:57] <dakj__> jamespage: just a second check that
[10:00] <dakj__> jamespage: https://paste.ubuntu.com/24603952/ while in the juju gui it's /dev/vdb
[10:00] <jamespage> dakj__: juju config ceph-osd osd-devices=/dev/sdb
[10:02] <dakj__> jamespage: https://pasteboard.co/86NyS1g5U.png
[10:03] <jamespage> well change that to /dev/sdb
[10:03] <jamespage> cli or gui does the same thing
[10:12] <dakj__> jamespage: do I have to redo the whole deploy, or is it enough to change that via the gui and save, and it unblocks automatically?
[10:13] <jamespage> dakj__: it should take effect without redeployment
[10:13] <jamespage> 'config-changed' hook will fire, detect the configured disk on each unit and bootstrap it into the ceph cluster
[10:13] <dakj__> jamespage: I try that immediately
[10:16] <dakj__> jamespage: https://paste.ubuntu.com/24604142/
[10:17] <jamespage> dakj__: so it sounds like my hypothesis "this smells like a network problem"  was right?
[10:18] <jamespage> dakj__: btw I'd highly recommend you move to using juju 2.1.2
[10:18] <dakj__> jamespage: I'm so happy :-) :-) thanks a lot for your support. Next steps: landscape, and installing ubuntu with lxd on the IBM system x3650 M4
[10:18] <jamespage> dakj__: can I ask a favour in return - can you put together some basic docs on how you deployed the bundle on vSphere?
[10:19] <dakj__> jamespage: you were right.....I couldn't believe it when I saw the misconfigured port channel
[10:19] <jamespage> could be a google doc or a github gist
[10:19] <dakj__> jamespage: sure, just explain how I should do that
[10:19] <jamespage> +10
[10:19] <jamespage> please
[10:20] <jamespage> dakj__: I'd also be interested to see if you can actually start instances with kvm nested inside esx
[10:20]  * jamespage does not currently have access to a vsphere cluster
[10:27] <dakj__> jamespage: do you know the credentials to log in to openstack?
[10:28] <jamespage> dakj__: all in the README for the bundle
[10:32] <dakj__> jamespage: are you sure? because I can't find anything about that in the README
[10:36] <jamespage> dakj__: ah its in the novarc
[10:36] <jamespage> https://api.jujucharms.com/charmstore/v5/openstack-base/archive/novarc
[10:36] <jamespage> dakj__: admin/openstack
[11:04] <dakj__> jamespage: https://pasteboard.co/87R6JDR7O.png
[11:05] <jamespage> dakj__: I'm guessing you did not change the admin-password option in the bundle?
[11:35] <dakj__> jamespage: no I didn't :-( Where can I see that? And where do I have to set it....
[11:37] <jamespage> dakj__: no, that username and password should work ok - can you check things from the CLI please? The README has lots on cli usage
[11:44] <dakj__> Jamespage: is it fine this link https://docs.openstack.org/admin-guide/cli-manage-projects-users-and-roles.html?
[11:46] <dakj__> jamespage: if I change the value in admin-password in Keystone could it work?
[13:04] <dakj__> jamespage: I'm redoing the deploy of that. Does the password have to be set in keystone? Because "openstack" is already present there https://pasteboard.co/89TVK2USz.png
[13:04] <jamespage> dakj__: yes that's what I'm saying - the default username and password is admin/openstack
[13:05] <jamespage> dakj__: I don't know why that's not working in the dashboard - if you look at the README in the bundle, it shows you some basic cli usage for openstack - please check that first
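The CLI check jamespage is suggesting can be sketched like this, assuming the novarc shipped with the openstack-base bundle and the default admin/openstack credentials; the exact openstack client subcommands available depend on the installed client version:

```shell
# Fetch the credentials file shipped with the bundle.
wget https://api.jujucharms.com/charmstore/v5/openstack-base/archive/novarc

# Load OS_USERNAME / OS_PASSWORD / OS_AUTH_URL into the environment.
source novarc

# If these succeed, keystone auth is fine and the problem is
# specific to the Horizon dashboard.
openstack catalog list
openstack user list
```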
[13:05] <dakj__> jamespage: last time after the deploy it was None....
[13:06] <jamespage> what was none?
[13:06] <dakj__> jamespage: sorry, the value in Admin-password was "None"
[13:07] <jamespage> not sure why - that value comes directly from the bundle you used
[13:08] <dakj__> jamespage: now I've changed that and run the deploy again, after that I'll check it.
[13:09] <dakj__> jamespage: while waiting on that, the HAproxy issue is unchanged (https://askubuntu.com/questions/906763/haproxy-reverseproxy-relation-changed-for-landscape-serverwebsite) do you know anything about that?
[13:10] <jamespage> dakj__: sorry - I'd have to defer to someone who knows the landscape charms
[13:20] <dakj__> jamespage: ok, I have to thank you for all the time you've dedicated to me and my lab :-). For the moment this issue can wait!
[13:25] <erik_lonroth_> Is there any way I can remove a model where I still have machines left in the list which are no longer within lxc/lxd? I removed them forcefully to start over with the model, but now I can't get rid of the model at all. Tried: "juju remove-machine 1 --force" without luck...
[13:26] <erik_lonroth_> I did "lxc delete <machine>" previously, hence the machines are no longer available.
[13:26] <erik_lonroth_> ... but I want to get rid of the model from juju
[13:31] <rick_h> erik_lonroth_: hmm, not sure about that one. We have a way to kill the controllers but just a model that's been tampered with behind the scenes I'm not sure.
[13:31] <rick_h> erik_lonroth_: might have to hit the juju list and see what the devs think (though they're mostly out for the weekend at this point)
[13:32] <erik_lonroth_> ok thanx
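A hedged sketch of the usual escalation path for a model whose machines were deleted behind juju's back. The model and controller names are placeholders, and flag availability depends on the juju release (for instance, `--force` on destroy-model only exists in later 2.x releases, which matches rick_h's point that only the controller-level teardown was certain at the time):

```shell
# Try forcing the dead machine out of the model first.
juju remove-machine 1 --force

# On newer juju releases the model itself can be force-destroyed:
juju destroy-model mymodel --force

# Last resort: tear down the whole controller and every model on it.
juju kill-controller mycontroller
```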
[13:44] <dakj__> jamespage: now it's perfect. We need to change that before making the deploy https://pasteboard.co/8aAkNISbV.png
[13:46] <jamespage> dakj__: correct - you can't change it afterwards (at the moment at least)
[13:48] <jujulearn_> Any compatibility issues maas 2.1.5 and juju-2.1.2 ?
[13:54] <dakj__> jujulearn: I'm using that, MAAS version 2.1.5 and JUJU version 2.1.2-xenial-amd64, and no issues so far...
[13:57] <dakj__> jamespage: ok, let me organise that documentation and I'll get it to you soon. For now I can only say thanks a lot for your support. Next steps are to solve the issue with landscape and replace esx with ubuntu lxd on that IBM host.
[13:59] <dakj__> jamespage: let's keep in touch here
[14:02] <dakj__> jamespage: but is there a way in the juju gui to organise the location of applications? https://pasteboard.co/8aTf5bV5Z.png
[14:03] <rick_h> dakj__: if you move them it should update their locations as annotations in the juju db and remember them next time
[14:10] <dakj__> rick_h: ok thanks, I thought juju did that automatically.
[14:11] <dakj__> rick_h: have you had any experience with deploying landscape dense-maas?
[14:15] <rick_h> dakj__: sorry, nothing I can add to the askubuntu question
[14:15] <dakj__> rick_h: thanks, I've already done that, but never got an answer.
[14:16] <dakj__> rick_h: the issue is with HAproxy and the post is that https://askubuntu.com/questions/906763/haproxy-reverseproxy-relation-changed-for-landscape-serverwebsite
[14:16] <rick_h> dakj__: I'll see if I can ping someone to look at it
[14:18] <dakj__> rick_h: thanks a lot. It'd be a great favour, because I've been looking for any post or fix about that, or for someone who has deployed the bundle.
[16:33] <Zic> hmm, I have a strange bug with Juju GUI (used locally, without JAAS): it keeps trying to open the Ubuntu One authentication page to log me in, but I don't need it since I'm running fully on baremetal / self-hosted Juju
[16:33] <Zic> is this normal?
[16:35] <hatch> Hi Zic  I understand you're having an issue trying to log in to the GUI
[16:36] <hatch> when you open the GUI, you're visiting the link from the `juju gui` command output?
[16:37] <Zic> nope, I can log into the juju GUI correctly, actually I am already seeing the GUI, but randomly, when I click to configure a unit, it tries to open a popup window with Ubuntu One authentication
[16:37] <Zic> I think it's useful for JAAS
[16:37] <Zic> but not for a local/private Juju
[16:37] <Zic> I just close the Ubuntu One auth window and continue to use Juju GUI normally afterwards, but sometimes it re-opens...
[16:38] <hatch> ohh interesting
[16:38] <hatch> Zic can you check the GUI version by hitting Shift+?
[16:38] <hatch> it'll be in the bottom left of that window
[16:39] <Zic> the requested action from ubuntu One account is " Juju Charms API log in with Ubuntu One "
[16:39] <hatch> are you using private charms from the charmstore?
[16:39] <Zic> nop, just canonical-kubernetes
[16:40] <Zic> Shift+ does not do anything :(
[16:40] <Zic> oh, oops, Shift+?, sorry, I'm trying again
[16:40] <Zic> 2.6.0
[16:41] <hatch> alright thanks, I'm just trying to narrow it down
[16:41] <hatch> one more question, is this on MAAS?
[16:42] <Zic> nope, manual provisioning
[16:42] <hatch> ok sorry one more :)
[16:42] <hatch> does it happen at a regular interval?
[16:42] <Zic> (as we have our own MaaS-like that is not connected to Juju but to all of our internal tools :/ I would gladly embrace MaaS otherwise)
[16:43] <hatch> sure, no problem
[16:43] <Zic> hatch: it seems to happen when I follow this path for example: click on the easyrsa charm, Units, scale, then "back, back, back" and sometimes it opens up at this stage
[16:44] <Zic> ah, better than that example
[16:44] <Zic> this one is always reproducible:
[16:44] <Zic> click on kube-api-loadbalancer
[16:45] <Zic> (to configure it)
[16:45] <Zic> it always opens the Ubuntu One popup
[16:46] <hatch> thanks a lot Zic we'll look into this
[16:48] <Zic> a totally different question/problem: can I deploy an old revision of a charm (or a charm bundle, if it deploys its old charms with it) through the Juju GUI?
[16:48] <Zic> maybe I'm wrong but I saw this option on old Juju GUI version
[16:49] <Zic> I can't find it in the 2.6.0 :s
[16:49] <Zic> (I need to deploy an old CDK bundle to test the upgrade)
[16:49] <Zic> the one with .deb/1.5.3 of Kubernetes
[16:49] <Zic> cc @ lazyPower
[16:49] <hatch> Zic here is the issue I created to track this https://github.com/juju/juju-gui/issues/2922
[16:50] <Zic> thanks a lot hatch
[16:50] <hatch> Zic you can get to old versions of a bundle by specifying it in the url https://jujucharms.com/canonical-kubernetes/37
[16:50] <hatch> the trailing number is the revision
[16:50] <hatch> for example
[16:51] <Zic> does an old charm bundle deploy its old charms with it?
[16:51] <Zic> (it seems logical actually, but... just to be sure)
[16:52] <hatch> Zic depends on the bundle - if the bundle defines the explicit version then yes
[16:52] <hatch> if it doesn't then it'll deploy the latest
[16:52] <Zic> it's about canonical-kubernetes
[16:52] <hatch> Zic you can also download the bundle.yaml then modify it and import it into the GUI to deploy
[16:52] <Zic> hmm, it's a good idea
[16:53] <Zic> I will export the bundle.yaml from my production cluster through the old GUI
[16:53] <Zic> and then load it into the new Juju with Import
[16:53] <hatch> yep you can definitely do that
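The two routes discussed above can be sketched as follows. The revision numbers are illustrative only (check the charm store for the ones you actually need), and the exact cs: URL forms accepted depend on the juju client version:

```shell
# Route 1: deploy a specific bundle revision straight from the store,
# matching the /37 in the URL hatch gave above.
juju deploy cs:bundle/canonical-kubernetes-37

# Route 2: fetch bundle.yaml, pin the charm revisions inside it
# (e.g. charm: cs:~containers/kubernetes-master-11, a hypothetical
# revision), then import the edited file through the GUI, or
# deploy it from the CLI:
juju deploy ./bundle.yaml
```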
[16:53] <hatch> Zic would you be able to check one more thing around this login popup bug?
[16:54] <Zic> yup
[16:54] <hatch> after opening the browser console ctrl+alt+i and switching to the Network tab, perform the action which causes the popup to open
[16:54] <hatch> and then paste the output into that github issue
[16:55] <hatch> seeing the network requests will help us narrow down the issue
[17:00] <Zic> hatch: do you want a screenshot or an export ?
[17:00] <Zic> because I can't see any option to export as text :(
[17:01] <hatch> Zic screenshot should be fine, thanks a lot!
[17:03] <Zic> hatch: added
[17:04] <hatch> ahah!
[17:04] <hatch> thanks Zic we'll get this resolved
[17:04] <Zic> (for info, it's not really https://localhost:8080/, it's an SSH tunneling to the Juju GUI pointing to juju:17070 :)
[17:04] <hatch> yeah that's fine, I can actually see the problem already :)
[17:05] <hatch> we'll try and get this fix into the next release
[17:05] <Zic> thanks :)
[18:47] <hetfield> hi all, i have issues with ceph deployed with juju
[18:48] <hetfield> when i try to issue any rbd/rados command it asks for a key as it's not in the default path, and when i pass the proper key it gives me auth failures
[18:49] <hetfield> but it's very strange as the openstack platform works; it happens when i log in to a unit and issue commands as root
[19:45] <hetfield> no one?
[19:52] <zeestrat> hetfield: You checked in #openstack-charms ?
[19:53] <hetfield> no :)