[01:37] <rick_h> infinityplusb: try doing a juju resolved xxxx and such to get it through the error so that it can start the destroy.
[01:37] <rick_h> infinityplusb: if it's on a system where it's the only application on the machine you can remove-machine --force
[01:37] <rick_h> and skip the application side of things
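rick_h's two suggestions above, as a sketch — the unit name "failed-app/0" and machine number 5 are placeholders, not from the channel:

```shell
# Clear the unit's error state so Juju can proceed with the removal
# ("failed-app/0" is a placeholder unit name):
juju resolved failed-app/0

# Then remove the application normally:
juju remove-application failed-app

# Or, if it is the only application on machine 5, skip the
# application side entirely and force-remove the machine:
juju remove-machine 5 --force
```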
[06:45] <ak_dev> juju remove-application <name> should do
[06:45] <ak_dev> ^infinityplusb
[06:49] <ak_dev> If you want to remove the entire machine, use
[06:49] <ak_dev> juju remove-machine <machine_num> --force
[07:36] <ak_dev> kjackal: are you online now?
[08:02] <kjackal___> ak_dev: I am here!
[08:03] <kjackal___> Could you explain where open_port is failing? I did not get that
[08:03] <kjackal___> ak_dev: ^
[08:03] <ak_dev> kjackal___: Hey!
[08:04] <ak_dev> I tested the charm on CENGN pod, and I had open_port call in the principal master and worker charms
[08:04] <jwd> morning
[08:04] <ak_dev> i will forward you the bundle so you can see for yourself
[08:05] <ak_dev> https://www.irccloud.com/pastebin/Z6EA1mtY/
[08:06] <ak_dev> do change the "gateway-physical-interface" option if you are deploying
[08:07] <ak_dev> kjackal___:
[08:07] <ak_dev> kjackal:
[08:07] <ak_dev> ^^
[08:09] <kjackal___> ak_dev: I am not sure what "on CENGN" refers to. These acronyms do not ring a bell
[08:11] <kjackal___> especially combined with the "pod"
[08:11] <ak_dev> kjackal___: oh sorry, it is this https://wiki.opnfv.org/display/pharos/CENGN+Hosting
[08:11] <ak_dev> something like GCE where we can test the charms
[08:11] <kjackal___> you tested a charm on a pod? We are talking about a kubernetes pod
[08:12] <ak_dev> even I was super confused the first time someone mentioned this
[08:12] <kjackal___> ak_dev: Ah I see now, is it a openstack cloud?
[08:13] <ak_dev> yeah, if I understood it correctly
[08:14] <kjackal___> and you are saying the open_port does not work if you call it from within a charm but it works if you open-ports from the cli... strange...
[08:14] <ak_dev> oh, no I did not try from cli
[08:14] <kjackal___> do you get anything on the logs?
[08:14] <ak_dev> how do I do that?
[08:15] <kjackal___> juju run --application kubernetes-worker open-port "1234/tcp"
[08:15] <kjackal___> ak_dev: ^
[08:15] <ak_dev> kjackal___: oh okay, i will redeploy and try that and get back to you
[08:15] <ak_dev> :-)
[08:16] <kjackal___> lets see, which charm are you deploying?
[08:16] <ak_dev> the bundle I forwarded you earlier
[08:16] <ak_dev> kjackal___: ^
[08:18] <kjackal___> ak_dev: I do not see open_port on the ovn-5 charm
[08:18] <ak_dev> I put it in the kubernetes-master charm
[08:19] <ak_dev> kjackal___: and the kubernetes-worker charm
[08:21] <kjackal___> ok, I see!
[08:22] <kjackal___> 8080, 6641 and 6642
[08:22] <kjackal___> let me try to deploy
[08:22] <ak_dev> kjackal___: yeah, sure, I too am trying here
[08:24] <ak_dev> kjackal___: is it possible to open_port for a machine rather than an application?
[08:25] <kjackal___> ak_dev: do a juju run --help
[08:25] <kjackal___> there is a --machine option, it should work
[08:26] <ak_dev> kjackal___: oh okay
[08:30] <ak_dev> juju run --machine 2 open-port "6641/tcp"
[08:31] <ak_dev> says command not found
[08:31] <ak_dev> kjackal___:
[08:31] <kjackal___> probably because you are out of the context of an application...
[08:32] <ak_dev> kjackal___: oh, I don't know what that means actually, is it that I am not allowed to run such a command on the machine?
[08:33] <ak_dev> the OVN subordinate charm and the principal charm both require the same ports to be open
[08:33] <kjackal___> I _think_ open-port is not in your path if you are not running it within an --application
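kjackal___'s guess matches how Juju works: open-port is a hook tool, so it only exists inside a unit's hook context. A sketch of the difference (unit and machine identifiers are placeholders):

```shell
# Fails with "command not found": a bare machine has no hook
# context, so the open-port hook tool is not on the PATH there
juju run --machine 2 'open-port 6641/tcp'

# Works: target a unit (or an application) so Juju sets up the
# hook context and the tool is available
juju run --unit kubernetes-worker/0 'open-port 6641/tcp'
juju run --application kubernetes-worker 'open-port 6641/tcp'
```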
[08:33] <ak_dev> ah I see
[08:34] <ak_dev> okay so if the subordinate OVN charm requires 6641 to be open and I open it in master, will that work?
[08:34] <ak_dev> I don't think so, but still asking for confirmation
[08:38] <kjackal___> you would need it open on the workers as well, right?
[08:38] <ak_dev> for the worker I require only 8080
[08:41] <ak_dev> kubernetes-master : 8080
[08:41] <ak_dev>       ovn : 6641, 6642
[08:41] <ak_dev> kubernetes-worker : 8080
[08:41] <ak_dev>       ovn : 6641, 6642
[08:41] <ak_dev> kjackal___: I think this is what should work
[08:42] <ak_dev> I just modified the charms to open ports in all three according to the above, let's see if that helps
[08:53] <jwd> anyone know what a status: unknown means?
[09:02] <ak_dev> jwd: I think that means that the charm did not update its status, but kjackal___ might know better
[09:03] <ak_dev> kjackal__: that did not work
[09:03] <jwd> I'm doing a wild test anyway atm, using lxd on debian to run juju :-)
[09:03] <kjackal__> anything interesting in the logs? Did you see the ports opening?
[09:04] <kjackal__> are you deploying kubernetes on lxd?
[09:04] <ak_dev> nope, it's on GCE
[09:04] <ak_dev> should i check /var/log/syslog?
[09:05] <kjackal__> /var/log/syslog and /var/log/juju/unit-?????
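Besides reading those files on the machine itself, the juju client can reach them directly; a sketch, assuming a unit named kubernetes-master/0 (placeholder):

```shell
# Stream the unit agent's log from the client:
juju debug-log --include unit-kubernetes-master-0

# Or read the log file in place on the machine:
juju ssh kubernetes-master/0 -- tail -n 100 /var/log/juju/unit-kubernetes-master-0.log
```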
[09:06] <kjackal__> jwd: you mean you got an lxd container and inside there you deploy juju?
[09:07] <jwd> no i use juju to create lxd containers for me
[09:07] <kjackal__> jwd: you might find deploying juju through snaps an easier way if you want to move away from ubuntu
[09:07] <jwd> i used snaps
[09:07] <kjackal__> nice!
[09:08] <jwd> running it on debian stretch
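For reference, jwd's setup (the juju client via snap on a non-Ubuntu distro) is roughly this — assuming snapd is available in the distro's repos, as it is on Debian stretch via backports:

```shell
# Install snapd from the distro repos first,
# then the juju client as a classic snap:
sudo apt install snapd
sudo snap install juju --classic
```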
[09:08] <jwd> just wondered why the machine states always ends in unknown
[09:08] <jwd> even though all is working
[09:09] <kjackal__> jwd: the machine state is set by the charm. All of the charms you deployed ended on an unknown state? Or was it only one?
[09:10] <jwd> all
[09:10] <kjackal__> that is strange
[09:10] <kjackal__> you would better file a bug
[09:10] <jwd> https://pastebin.com/jfRED3Wd
[09:10] <jwd> just a few things i tested
[09:12] <kjackal__> I think filing a bug and including all the logs is the right way to go
[09:12] <jwd> kk
[09:12] <ak_dev> kjackal__: okay so nothing about opening ports on syslog
[09:12] <ak_dev> and I don't see it in the juju log either
[09:13] <ak_dev> just checked the code, it has the open_port function call
[09:13] <ak_dev> 8080 port opened on both though, verified in the GCE firewall rules
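One way to cross-check what Juju itself believes is open, independent of the cloud's firewall rules, is the opened-ports hook tool; a sketch with placeholder unit names:

```shell
# List the ports each unit has opened via open_port / open-port:
juju run --unit kubernetes-master/0 opened-ports
juju run --unit kubernetes-worker/0 opened-ports
```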
[09:13] <kjackal__> ak_dev: this does not make sense, my deployment on aws did open ports properly
[09:14] <ak_dev> I might be doing something really silly then, I will recheck everything
[09:14] <ak_dev> did kubernetes run on your deployment?
[09:14] <kjackal__> ak_dev: let me see, there must be an open_ports (with an s) call
[09:15] <kjackal__> ak_dev: no, it did not work because I think it did not have easyrsa on the bundle
[09:15] <ak_dev> kjackal__: ah no, I have implemented it inside the kubernetes-master charm
[09:15] <ak_dev> if you still have it, could you try "sudo kubectl get pods" on master?
[09:16] <kjackal__> I got a "hook failed: "cni-relation-joined" for ovn:cni"
[09:16] <kjackal__> on the ovn subordinate of the master
[09:16] <ak_dev> oh
[09:17] <ak_dev> let me check
[09:17] <kjackal__> ak_dev: and I have this error in the logs: http://pastebin.ubuntu.com/25124335/
[09:19] <ak_dev> kjackal__: what is the gateway-physical-interface you are using?
[09:21] <kjackal__> ak_dev: I did not set one. It is whatever the smart default is
[09:23] <ak_dev> looks like I did not use the default right
[09:23] <ak_dev> kjackal__: thanks for pointing out that error though! Seems like the errors don't ever end on this one!
[09:23] <kjackal__> ak_dev: remind me again how to get the proper gateway
[09:25] <ak_dev> ip route | grep default
[09:26] <ak_dev> this should do
[09:26] <ak_dev> kjackal__: ^
[09:28] <kjackal__> ip route | grep default
[09:28] <kjackal__> default via 172.31.0.1 dev breth0
[09:28] <kjackal__> And it is the breth0, right ak_dev
[09:28] <kjackal__> ?
[09:29] <ak_dev> kjackal__: did you run it on the node where this charm ran?
[09:29] <ak_dev> cause it creates a new interface with 'br' as prefix
[09:30] <kjackal__> yes
[09:31] <ak_dev> kjackal__: so I guess your interface should be eth0
[09:31] <ak_dev> kjackal__: but it's strange that it created the interface, since it should not have after you got that error
[09:31] <ak_dev> I too am confused now
[10:09] <BlackDex> Hello there, I currently have a HA Juju env, but I need to remove two of those nodes and I'm currently not able to create a new node for more HA
[10:09] <BlackDex> What is the best way to undo the HA of juju, and keep just one running?
[12:12] <rick_h> BlackDex: what cloud is this?
[12:44] <BlackDex> what cloud?
[12:57] <rick_h> BlackDex: what provider? is this on AWS, GCE, an openstack?
[12:57] <BlackDex> ah
[12:57] <BlackDex> maas/openstack :)
[12:57] <BlackDex> i think i have it working btw
[12:57] <BlackDex> :)
[12:57] <rick_h> BlackDex: oh ok awesome
[12:58] <BlackDex> needed to do some manual mongodb stuff
[13:40] <tvansteenburgh> jacekn: mthaddon: you guys want/need to meet? sorry, I got hung up in a sprint session but I'm available now if you want to hangout
[13:41] <tvansteenburgh> or we can just wait til next week, either way is fine
[15:29] <ak_dev> hello, any reason why kube-proxy can fail on kubernetes-worker ?
[15:29] <ak_dev> what I mean by fail is fail to start
[15:31] <ak_dev> https://www.irccloud.com/pastebin/yWnQ1FoP/
[15:31] <ak_dev> that is what I get
[16:04] <vds> Hi all, suggestion on how to debug a reactive charm that is not registering a nagios hook? here's the reactive module https://paste.ubuntu.com/25126369/ apparently the relations are added correctly but the nagios check is not registered
[16:20] <Budgie^Smore> o/ juju world
[17:50] <rick_h> Reminder: juju show in 10 minutes
[17:50] <rick_h> Getting stuff setup to chat
[17:56] <rick_h> https://www.youtube.com/watch?v=3lcl51SVX2E for watching and https://hangouts.google.com/hangouts/_/gccrypbjbbbcrniklqvt2gkjcue for chatting live!
[20:41] <ak_dev> hello
[20:42] <ak_dev> one more question guys
[20:42] <ak_dev> does a subordinate charm receive events from the principal charm?
[20:44] <ak_dev> thedac: ^^
[20:45] <thedac> ak_dev: it can if there is a relationship defined and data is passed across that relationship, but that has to be coded.
[20:45] <ak_dev> oh, I have a cni relationship
[20:45] <ak_dev> basically, it's kubernetes-cni between my charm and kubernetes-master
[20:45] <ak_dev> thedac: ^
[20:46] <ak_dev> but it has to be coded in the relationship you mean?
[20:46] <narindergupta> thedac, can you show an example if there is one?
[20:46] <thedac> ak_dev: yes, the charm has to set something on the relationship and the other side needs to react
[20:47] <narindergupta> ak_dev, in that case you can use the existing event in kubernetes-master and kubernetes-worker
[20:48] <ak_dev> thedac: thanks :-)
[20:48] <thedac> no problem