[08:43] mo'in
[08:43] Morning
[08:44] I am on a do-or-die mission to refactor the prometheus charm (except tests) this week. I hope I will have a decent burial and epitaph if I fail.
[09:24] * bthomas needs more coffeeeeeeeee
[09:36] bthomas: what do you want on the wreath?
[09:36] bthomas: I'd go with YOLO
[09:43] Chipaca: I always fancied "Shit Happens" from "Forrest Gump" (love that movie).
[10:10] * Chipaca bbiab
[11:57] Chipaca, sorry for the late notice but I don't have much to add from our last meeting. I'd suggest we skip it unless you have something you'd like to discuss?
[11:58] gnuoy: skip it is :)
[11:59] ack, thanks
[11:59] gnuoy: you miss out on watching me do unthinkable things to a rather boring wrap
[13:02] I'm sorry to have missed that :)
[13:14] you're really not :)
[14:39] Chipaca, I am facing a weird issue where I set the PodSpec and run the k8s status check in a loop while the workload status is unknown, which is fine. But it stops executing and the pod never gets created?
[14:40] narindergupta: juju's pod spec setting is transactional
[14:40] narindergupta: i.e. it's applied after your hook finishes successfully
[14:40] narindergupta: if you set it, and then wait for it to change in a loop in the same hook, you're going to have a bad time
[14:40] Chipaca, oh ok, do we have an update_status event?
[14:41] narindergupta: if you mean an event in juju about k8s status, not currently
[14:41] Thinking of handling it in update_status in that case?
[14:42] Oh ok, so what's the solution as of now, as my juju unit status is unknown while my pod status is active?
[14:43] narindergupta: there is update-status, which runs every 5 minutes on a clock
[14:44] I think I can use that at least so that the status gets updated after 5 minutes.
[14:44] And once we have a solution then I will switch the charm over.
[14:45] narindergupta: you can make that shorter, see https://discourse.juju.is/t/whats-the-update-status-interval/2571/5?u=chipaca
[14:47] Chipaca, huh, it is a model level change.
[14:48] yes
[15:04] jldev: you around?
[15:06] Chipaca: yes
[15:06] jldev: i'm wanting to leave a little early to get to physio on time, meaning i wouldn't be there at the start of the standup
[15:07] jldev: were you planning on going to the standup?
[15:07] Chipaca: yes, I'll be there, no worries if you need to leave early
[15:07] jldev: ok :)
[15:07] jldev: (i would've stayed if you couldn't make it, and just made it super-fast)
[16:58] Chipaca, looks like the pod status is coming back as unknown every time from k8s, which is weird, while kubectl pod status returns active
[17:22] narindergupta: what's "pod status" in the above?
[17:22] as opposed to "kubectl pod status" i mean
[17:22] Chipaca, I am testing the k8s api using the operator library
[17:23] Chipaca, k8s_pod_status = k8s.get_pod_status(juju_model=juju_model,
[17:23]                                              juju_app=juju_app,
[17:23]                                              juju_unit=juju_unit)
[17:24] Do I need to encode the parameters?
[17:27] narindergupta: and what does the kubectl pod status output? (with -o yaml please)
[17:28] narindergupta: or, better, what is in the debug log?
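
For reference, the pattern Chipaca describes above — set the pod spec, let the hook exit, and re-check the workload later from update-status instead of polling in the same hook — looks roughly like the sketch below in an ops-framework charm. This is a minimal, hypothetical sketch under those assumptions, not the cassandra charm discussed in this log; `_build_pod_spec()` and `_workload_is_up()` are placeholder names.

```python
# Minimal sketch: apply the pod spec and return; re-check the workload on
# update-status.  _build_pod_spec() and _workload_is_up() are hypothetical.
from ops.charm import CharmBase
from ops.main import main
from ops.model import ActiveStatus, MaintenanceStatus


class SketchCharm(CharmBase):
    def __init__(self, *args):
        super().__init__(*args)
        self.framework.observe(self.on.config_changed, self._on_config_changed)
        self.framework.observe(self.on.update_status, self._on_update_status)

    def _on_config_changed(self, event):
        if not self.model.unit.is_leader():
            return
        # Juju only applies the spec after this hook exits successfully,
        # so waiting for the pod in a loop here would never succeed.
        self.model.pod.set_spec(self._build_pod_spec())
        self.model.unit.status = MaintenanceStatus("Waiting for pod to appear")

    def _on_update_status(self, event):
        # Fires roughly every 5 minutes by default; a reasonable place to
        # re-check the workload and settle the unit status.
        if self._workload_is_up():
            self.model.unit.status = ActiveStatus()

    def _build_pod_spec(self):
        # Hypothetical placeholder: return the pod spec dict for the workload.
        return {"version": 3, "containers": []}

    def _workload_is_up(self):
        # Hypothetical placeholder: e.g. query the k8s API for the pod phase.
        return False


if __name__ == "__main__":
    main(SketchCharm)
```

Per the linked discourse post, the update-status cadence is a model-level setting (`update-status-hook-interval`), which is why narindergupta notes it is a model level change.
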
[17:28] as that'll include the output i'm looking for
[17:29] narindergupta: i'm looking for the log line that starts 'Received k8s pod status: '
[17:30] Chipaca, unit.charm-k8s-cassandra/0.juju-log Received k8s pod status:
[17:30] hah, lolfail
[17:30] i'll file a bug about that
[17:30] :)
[17:30] narindergupta: sorry, i'm going to need the output of the kubectl command
[17:30] Chipaca, yes just a second
[17:31] microk8s.kubectl get pods -n cassandra
[17:31] NAME                             READY   STATUS    RESTARTS   AGE
[17:31] charm-k8s-cassandra-0            1/1     Running   0          25m
[17:31] charm-k8s-cassandra-operator-0   1/1     Running   0          25m
[17:31] modeloperator-7d75bd7454-tfzgn   1/1     Running   0          25m
[17:31] It says running
[17:34] narindergupta: can you do: microk8s.kubectl get pods -n cassandra -o yaml | pastebinit -f yaml
[17:34] narindergupta: (you might need to apt install pastebinit first)
[17:34] Chipaca, somehow pastebinit is not working on this node
[17:34] ah
[17:34] I tried to upgrade this to 20.04 and pastebinit is messed up now
[17:35] ok, so copy-paste into a pastebin, please
[17:35] Yeah will do that, give me a sec
[17:35] narindergupta: and then show me more context of your code above that called get_pod_status, so i know the values of juju_model, _app, and _unit :)
[17:36] Sure
[17:36] Those values I am getting from the model, but let me give you that first
[17:36] juju_model = self.model.name
[17:36] juju_app = self.model.app.name
[17:36] juju_unit = self.model.unit
[17:50] Chipaca, https://paste.ubuntu.com/p/XqTrYrtBZR/
[17:51] Chipaca, I have pasted only the cassandra pod status rather than the operators as well.
[17:52] narindergupta: charm-k8s-cassandra is the juju app name?
[17:52] currect
[17:52] correct
[17:53] And I can see the same in the pod labels as well
[17:53] Model is cassandra
[17:53] narindergupta: and charm-k8s-cassandra/0 is the unit?
[17:53] Yes correct
[17:53] juju status
[17:53] Model      Controller  Cloud/Region        Version  SLA          Timestamp
[17:53] cassandra  k8s-cloud   microk8s/localhost  2.8.1    unsupported  17:53:30Z
[17:53] App                  Version                         Status  Scale  Charm                Store  Rev  OS          Address        Notes
[17:53] charm-k8s-cassandra  gcr.io/google-samples/cassa...  active      1  charm-k8s-cassandra  local    0  kubernetes  10.152.183.66
[17:53] Unit                    Workload     Agent  Address      Ports                       Message
[17:53] charm-k8s-cassandra/0*  maintenance  idle   10.1.16.172  7000/TCP,7001/TCP,9042/TCP  Waiting for pod to appear
[17:54] narindergupta: from here it looks like it should work
[17:55] Chipaca, let me send you my charm code as well in case that helps
[17:55] narindergupta: can you send me an email, with the charm code and how to deploy it on microk8s?
[17:55] Chipaca, ok
[17:55] I'll review it as soon as I can (which is in no less than ~2 hours)
[17:55] thanks!
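
The log never shows the internals of the k8s adapter library being called above, so the helper below is only a hedged sketch of what a get_pod_status-style lookup could do with the official `kubernetes` Python client; the `juju-app` label, the namespace-equals-model-name convention, and the return value are assumptions. One thing worth double-checking (also an assumption, not the confirmed fix from this log) is that the charm passes `self.model.unit`, a Unit object, where a helper like this would expect the unit name string (`self.model.unit.name`).

```python
# Hedged sketch of a get_pod_status-style helper using the official
# `kubernetes` client.  Label names and return values are assumptions;
# this is not the library from the log.
from kubernetes import client, config


def get_pod_status(juju_model: str, juju_app: str, juju_unit: str) -> str:
    """Return the k8s phase (e.g. 'Running') of the pod backing a Juju unit."""
    config.load_incluster_config()  # or config.load_kube_config() outside the cluster
    core = client.CoreV1Api()
    # Assumption: Juju-created pods carry a juju-app label and live in a
    # namespace named after the model.
    pods = core.list_namespaced_pod(
        namespace=juju_model,
        label_selector=f"juju-app={juju_app}",
    )
    # e.g. "charm-k8s-cassandra/0" -> pod name ending in "-0"
    unit_number = juju_unit.split("/")[-1]
    for pod in pods.items:
        if pod.metadata.name.endswith(f"-{unit_number}"):
            return pod.status.phase
    return "unknown"
```
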
[17:55] Chipaca, https://paste.ubuntu.com/p/s7FYfVqKKZ/
[17:56] My charm code, and I will send it to you in email as well
[17:57] Let me check it in to GitHub so it would be easy for you to grab the whole charm
[18:13] Chipaca, sent you an email and attached the charm code
[19:55] narindergupta: reproduced the issue, debugging it now
[19:55] Ok cool
[20:33] narindergupta: i think i'm seeing something different though
[20:34] narindergupta: my kubectl shows the cassandra pod in CrashLoopBackOff
[20:35] * Chipaca reinstalls just in case
[20:41] narindergupta: yeah, it's always stuck in CrashLoopBackOff here
[20:42] Chipaca, in meeting with manager
[20:42] Chipaca, I can have a look after that, but run juju debug-log and see
[20:46] narindergupta: i think it might be that i was running an older juju
[20:46] forgot i'd downgraded to test something else :)
[20:46] will fix
[20:46] Chipaca, :)
[20:46] I am using 2.8.1
[21:00] narindergupta: and now i got it to status:active
[21:00] ah, there we go (spoke too soon? was impatient?)
[21:05] Chipaca, still having crash
[21:06] no, now the kube side works
[21:06] Chipaca, ok cool
[21:06] so now i can look at the data and see what the k8s library is getting wrong :)
[21:06] But the k8s api library shows unknown
[21:06] Ok cool
[21:06] juju status
[21:06] Model      Controller  Cloud/Region        Version  SLA          Timestamp
[21:06] cassandra  k8s-cloud   microk8s/localhost  2.8.1    unsupported  21:05:31Z
[21:06] App                  Version                         Status  Scale  Charm                Store  Rev  OS          Address         Notes
[21:06] charm-k8s-cassandra  gcr.io/google-samples/cassa...  active      1  charm-k8s-cassandra  local    0  kubernetes  10.152.183.223
[21:06] Unit                    Workload     Agent  Address      Ports                       Message
[21:06] charm-k8s-cassandra/0*  maintenance  idle   10.1.16.177  7000/TCP,7001/TCP,9042/TCP  Waiting for pod to appear
[21:07] Hopefully you have the same status
[21:20] Chipaca, I am stepping out, please send me an email if I do not respond on this channel
[21:22] narindergupta: found the problem
[21:24] or at least _a_ problem :-) seeing if it fixed it now
[21:25]