[01:37] infinityplusb: try doing a juju resolved xxxx and such to get it through the error so that it can start the destroy.
[01:37] infinityplusb: if it's on a system where it's the only application on the machine you can remove-machine --force
[01:37] and skip the application side of things
=== mup_ is now known as mup
[06:45] juju remove-application should do
[06:45] ^infinityplusb
[06:49] If u want to remove the entire machine, use
[06:49] juju remove-machine --force
[07:36] kjackal: are you online now?
[08:02] ak_dev: I am here!
[08:03] Could you explain where open_port is failing? I did not get that
[08:03] ak_dev: ^
[08:03] kjackal___: Hey!
[08:04] I tested the charm on a CENGN pod, and I had an open_port call in the principal master and worker charms
[08:04] morning
[08:04] I will forward you the bundle so you can see for yourself
[08:05] https://www.irccloud.com/pastebin/Z6EA1mtY/
[08:06] do change the "gateway-physical-interface" option if you are deploying
[08:07] kjackal___:
[08:07] kjackal:
[08:07] ^^
[08:09] ak_dev: I am not sure what "on CENGN" refers to. These acronyms do not ring a bell
[08:11] especially combined with the "pod"
[08:11] kjackal___: oh sorry, it is this https://wiki.opnfv.org/display/pharos/CENGN+Hosting
[08:11] something like GCE where we can test the charms
[08:11] you tested a charm on a pod? We are talking about a kubernetes pod
[08:12] even I was super confused the first time someone mentioned this
[08:12] ak_dev: Ah I see now, is it an openstack cloud?
[08:13] yeah, if I understood it correctly
[08:14] and you are saying the open_port does not work if you call it from within a charm but it works if you open-ports from the cli... strange...
[08:14] oh, no I did not try from the cli
[08:14] do you get anything in the logs?
[08:14] how do I do that?
[08:15] juju run --application kubernetes-worker open-port "1234/tcp"
[08:15] ak_dev: ^
[08:15] kjackal___: oh okay, I will redeploy and try that and get back to you
[08:15] :-)
[08:16] let's see, which charm are you deploying?
[08:16] the bundle I forwarded you earlier
[08:16] kjackal___: ^
[08:18] ak_dev: I do not see open_port in the ovn-5 charm
[08:18] I put it in the kubernetes-master charm
[08:19] kjackal___: and the kubernetes-worker charm
[08:21] ok, I see!
[08:22] 8080, 6641 and 6642
[08:22] let me try to deploy
[08:22] kjackal___: yeah, sure, I too am trying here
[08:24] kjackal___: is it possible to open_port for a machine rather than an application?
[08:25] ak_dev: do a juju run --help
[08:25] there is a --machine option, it should work
[08:26] kjackal___: oh okay
[08:30] "juju run --machine 2 open-port 6641/tcp"
[08:31] says command not found
[08:31] kjackal___:
[08:31] probably because you are out of the context of an application...
[08:32] kjackal___: oh, I don't know what that means actually, is it that I am not allowed to run such a command on the machine?
[08:33] the OVN subordinate charm and the principal charm both require the same ports to be open
[08:33] I _think_ open-port is not in your path if you are not running it within an --application
[08:33] ah I see
[08:34] okay so if the subordinate OVN charm requires 6641 to be open and I open it in master, will that work?
[08:34] I don't think so, but still asking for confirmation
[08:38] you would need it open on the workers as well, right?
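A minimal sketch of the open-port test kjackal suggests above, assuming a deployed application named kubernetes-worker. open-port and opened-ports are Juju hook tools, so they are only on the PATH inside a hook context such as the one "juju run --application" sets up; this is also why the --machine variant fails with "command not found":

    # Open a port from outside the charm code, via the hook context
    # that "juju run --application" provides on each unit.
    juju run --application kubernetes-worker "open-port 1234/tcp"

    # Verify which ports each unit has opened so far.
    juju run --application kubernetes-worker "opened-ports"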
[08:38] for the worker I require only 8080
[08:41] kubernetes-master : 8080
[08:41] ovn : 6641, 6642
[08:41] kubernetes-worker : 8080
[08:41] ovn : 6641, 6642
[08:41] kjackal___: I think this is what should work
[08:42] I just modified it to have open ports in all three charms according to the above, let's see if that helps
[08:53] anyone know what a status: unknown means?
[09:02] jwd: I think that means that the charm did not update its status, but kjackal___ might know better
[09:03] kjackal__: that did not work
[09:03] I'm doing a wild test anyway atm, using lxd on debian to run juju :-)
[09:03] anything interesting in the logs? Did you see the ports opening?
[09:04] are you deploying kubernetes on lxd?
[09:04] nope, it's on GCE
[09:04] should I check /var/log/syslog?
[09:05] /var/log/syslog and /var/log/juju/unit-?????
[09:06] jwd: you mean you got an lxd container and inside there you deploy juju?
[09:07] no, I use juju to create lxd containers for me
[09:07] jwd: you might find deploying juju through snaps an easier way if you want to move away from ubuntu
[09:07] I used snaps
[09:07] nice!
[09:08] running it on debian stretch
[09:08] just wondered why the machine states always end in unknown
[09:08] even though all is working
[09:09] jwd: the machine state is set by the charm. All of the charms you deployed ended in an unknown state? Or was it only one?
[09:10] all
[09:10] that is strange
[09:10] you'd better file a bug
[09:10] https://pastebin.com/jfRED3Wd
[09:10] just a few things I tested
[09:12] I think filing a bug and including all the logs is the right way to go
[09:12] kk
[09:12] kjackal__: okay so nothing about opening ports in syslog
[09:12] and I don't see it in the juju log either
[09:13] just checked the code, it has the open_port function call
[09:13] port 8080 opened on both though, verified in the GCE firewall rules
[09:13] ak_dev: this does not make sense, my deployment on aws did open ports properly
[09:14] I might be doing something really silly then, I will recheck everything
[09:14] did kubernetes run on your deployment?
[09:14] ak_dev: let me see, there must be an open_ports (with an s) call
[09:15] ak_dev: no it did not work because it did not have easyrsa, I think, in the bundle
[09:15] kjackal__: ah no, I have implemented it inside the kubernetes-master charm
[09:15] if you still have it, could you try "sudo kubectl get pods" on the master?
[09:16] I got a "hook failed: "cni-relation-joined" for ovn:cni"
[09:16] on the ovn subordinate of the master
[09:16] oh
[09:17] let me check
[09:17] ak_dev: and I have this error in the logs: http://pastebin.ubuntu.com/25124335/
[09:19] kjackal__: what is the gateway-physical-interface you are using?
[09:21] ak_dev: I did not set one. It is whatever the smart default is
[09:23] looks like I did not use the default right
[09:23] kjackal__: thanks for pointing out that error though! Seems like the errors don't ever end on this one!
[09:23] ak_dev: remind me again how to get the proper gateway
[09:25] ip route | grep default
[09:26] this should do
[09:26] kjackal__: ^
[09:28] ip route | grep default
[09:28] default via 172.31.0.1 dev breth0
[09:28] And it is the breth0, right ak_dev
[09:28] ?
[09:29] kjackal__: did you run it on the node where this charm ran?
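A quick sketch of the debugging steps discussed above, run on the unit's machine. The unit log filename kubernetes-master-0 and the grep patterns are illustrative; the interface extraction assumes the "default via ... dev <iface>" output format shown in the transcript:

    # Look for port-opening activity in the logs mentioned above.
    grep -i "open.*port" /var/log/syslog
    grep -i "open.*port" /var/log/juju/unit-kubernetes-master-0.log

    # Find the default-route interface (the candidate value for the
    # gateway-physical-interface option); prints e.g. "breth0" or "eth0".
    ip route | grep default | awk '{for (i = 1; i < NF; i++) if ($i == "dev") print $(i+1)}'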
[09:29] cause it creates a new interface with 'br' as a prefix
[09:30] yes
[09:31] kjackal__: so I guess your interface should be eth0
[09:31] kjackal__: but it's strange that it created the interface, since it should not after you got that error
[09:31] I too am confused now
[10:09] Hello there, I currently have an HA Juju env, but I need to remove two of those nodes and I'm currently not able to create a new node for more HA
[10:09] What is the best way to undo the HA of juju and keep just one running?
[12:12] BlackDex: what cloud is this?
[12:44] what cloud?
[12:57] BlackDex: what provider? is this on AWS, GCE, an openstack?
[12:57] ah
[12:57] maas/openstack :)
[12:57] I think I have it working btw
[12:57] :)
[12:57] BlackDex: oh ok awesome
[12:58] needed to do some manual mongodb stuff
[13:40] jacekn: mthaddon: you guys want/need to meet? sorry, I got hung up in a sprint session but I'm available now if you want to hangout
[13:41] or we can just wait til next week, either way is fine
=== scuttle|afk is now known as scuttlemonkey
=== scuttlemonkey is now known as scuttle|afk
[15:29] hello, any reason why kube-proxy can fail on kubernetes-worker?
[15:29] what I mean by fail is fail to start
[15:31] https://www.irccloud.com/pastebin/yWnQ1FoP/
[15:31] that is what I get
[16:04] Hi all, any suggestions on how to debug a reactive charm that is not registering a nagios hook? here's the reactive module https://paste.ubuntu.com/25126369/ apparently the relations are added correctly but the nagios check is not registered
[16:20] o/ juju world
[17:50] Reminder: juju show in 10 minutes
[17:50] Getting stuff set up to chat
[17:56] https://www.youtube.com/watch?v=3lcl51SVX2E for watching and https://hangouts.google.com/hangouts/_/gccrypbjbbbcrniklqvt2gkjcue for chatting live!
[20:41] hello
[20:42] one more question guys
[20:42] does a subordinate charm receive events from the principal charm?
[20:44] thedac: ^^
[20:45] ak_dev: it can if there is a relationship defined and data is passed across that relationship, but that has to be coded.
[20:45] oh, I have a cni relationship
[20:45] basically, it's kubernetes-cni b/w my charm and kubernetes-master
[20:45] thedac: ^
[20:46] but it has to be coded in the relationship you mean?
[20:46] thedac, can u show any example if there is any?
[20:46] ak_dev: yes, the charm has to set something on the relationship and the other side needs to react
[20:47] ak_dev, in that case you can use the existing events in kubernetes-master and kubernetes-worker
[20:48] thedac: thanks :-)
[20:48] no problem
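A minimal sketch of what "set something on the relationship and the other side reacts" looks like at the hook-tool level. relation-set and relation-get are real Juju hook tools; the relation name kubernetes-cni matches the transcript, but the cidr key is purely illustrative:

    # In a kubernetes-cni relation hook on the principal
    # (e.g. kubernetes-master): publish a value on the relation.
    relation-set cidr=10.1.0.0/16

    # In the matching *-relation-changed hook on the subordinate:
    # read what the principal published and react to it.
    cidr=$(relation-get cidr)
    echo "principal advertised CIDR: $cidr"

Setting the relation data triggers a relation-changed hook on the other side, which is the event the subordinate reacts to.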