[06:57] jamespage: Got some spare time for some administrative work on charm-helpers? https://github.com/juju/charm-helpers/issues/27
=== rogpeppe1 is now known as rogpeppe
[14:30] Hi!
[14:31] Hi! I have a nova-compute charm with the LVM storage backend. There is no option to provide images_volume_group = my-volumes, so I have to put it in the libvirt section of nova.conf myself. How can I stop Juju from rolling back my changes to this file?
[14:31] Please let me know, it is actually a very serious problem.
[15:16] elmaciej: You can use the https://jujucharms.com/nova-compute/#charm-config-config-flags option in the nova-compute charm to set options in nova.conf
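For reference, a minimal sketch of the config-flags approach suggested above, run from the Juju client. The key/value pair is the one from the conversation; whether config-flags lands the key in the [libvirt] section or only in [DEFAULT] should be verified against the charm's documentation.

    # Sketch, assuming Juju 2.x CLI syntax; verify which nova.conf section
    # config-flags writes to before relying on it for libvirt options.
    juju config nova-compute config-flags="images_volume_group=my-volumes"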
[15:26] hello folks
[15:26] what's the Canal support like for CDK if I swap out flannel?
[15:27] tvansteenburgh: might have an idea ^
[15:27] I see a couple of issues on GitHub, are they the edge cases, or the norm? :)
[15:29] rick_h: I haven't forgotten the CLI feedback stuff, just been mega busy recently
[15:29] honest!
[15:29] magicaltrout: all good, sorry for the bugging but wasn't sure you were back from holiday / got lost in the holiday mailbox, etc
[15:29] magicaltrout: and I know of everyone out there you had the biggest eye for it :P
[15:30] yeah, it's pretty epic, I did get a 2-minute demo of it via hangout a few months ago, it looked great. I'm looking forward to getting to tinker
[15:31] magicaltrout: nice. When you get time we're eager to hear if this is solving your issues and doing what you need. Thanks for checking it out
[15:34] no problem, we've got some new stuff coming to Juju soon, our new platform called Anssr, which is a scalable natural language processing platform aimed at discovering personally identifiable information for the new GDPR legislation coming into force in May
[15:35] Juju will be great at dealing with the server components for those companies who use cloud services
[15:37] magicaltrout: very cool, when you're ready I'd love to see a demo and what's up sometime
[15:39] indeed, indeed!
=== Spads_ is now known as Spads
[16:35] hi there
[16:35] anyone from the K8s team around here?
[16:39] SaMnCo: hi there, long time no chat. :-) It looks like kjackal and kwmonroe are both logged in, and they're both doing k8s stuff now. Not sure if either of them is paying attention, though.
[16:43] heeyyy!! Yes, been busy on other stuff
[16:43] I am trying to get the HPA working with custom metrics and I am STRUGGLING big time
[16:44] wanted to discuss a few things
[16:45] Cool. My k8s knowledge is still pretty basic, so I'm probably not useful. Hopefully, one of those two cats will see my ping.
[16:46] petevg: as a technical sales pro... surely you must know everything
[16:46] how else can you sell stuff?
[16:46] oh, openstackers :)
[16:46] I forgive you
[16:46] now learn some Kubernetes and help the man out
[16:46] magicaltrout: I'm working on it :-p
[16:48] what is your question, SaMnCo?
[16:48] knobby: a GitHub issue says a thousand words: https://github.com/DirectXMan12/k8s-prometheus-adapter/issues/12
[16:49] see the last comments I made
[16:50] For some reason I cannot get the controller manager to read the metrics from the Metrics API server or the Custom Metrics API server
[16:50] both are registered correctly and I can see the values by calling the API directly
[16:50] but the HPA cannot leverage them
[16:50] and I cannot begin to figure out what is going on
[16:51] it seems that the HPA in the Controller Manager keeps trying to hit http://heapster for resource metrics despite using --horizontal-pod-autoscaler-use-rest-clients
[16:52] but even at max log level I do not have any error anywhere except in the HPA events
[16:52] So that is issue one
[16:53] the other is that I have weird RBAC errors which do not match the RBAC profile of CDK:
[16:53] https://www.irccloud.com/pastebin/advBNfeG/
[16:53] All these rights are covered by the RBAC for system:node but the errors keep coming
[16:54] have you guys started working on Custom Metrics?
[16:58] so you have a horizontal autoscaler and you're trying to reach into the heapster pod to ask about request counts so it can scale, right?
[16:59] am I reading it correctly that your controller manager is outside the cluster?
[16:59] and it's trying to use an internal cluster IP to hit heapster?
[17:00] yeah, exactly
[17:00] btw for RBAC: https://github.com/kubernetes-incubator/bootkube/issues/483
[17:01] updating the default RBAC manifest for the system:node binding to:
[17:01] https://www.irccloud.com/pastebin/PQrRs7zm/
[17:01] will solve this problem, I am guessing others have it
[17:02] opening a GitHub issue now
[17:05] I always use services to get to things from outside my bare-metal cluster. It's either that or a NodePort really. I think you'll have to expose the pod via a service and then use that service IP.
[17:05] IP routing to your cluster for your service addresses would be required
[17:07] but in theory, according to the docs, using the flag horizontal-pod-autoscaler-use-rest-clients on the controller-manager should tell it to talk to the API server
[17:07] I have an appointment now, but I'll check back in an hour or so. kjackal would be the one to talk to about RBAC. The issue will help track it.
[17:07] SaMnCo, how does it auth?
[17:07] but for some reason it keeps hitting heapster
[17:07] filed https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/475
[17:10] knobby: I am using a specific service account with a custom RBAC profile
[17:23] OK, I finally nailed the issue (it is my 4th day on this)
[17:23] It all goes back to a bug in the scaling mechanism of the masters
[17:23] Will explain in another GH issue
[17:44] for those interested: https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/476
[19:40] thanks SaMnCo, glad you got it figured out and thanks for the bug
=== agprado_ is now known as agprado
=== agprado is now known as agprado|afk
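A small sketch of how to confirm the API registrations SaMnCo describes, from any machine with kubectl access to the cluster. These are the standard aggregated-API paths for the resource and custom metrics APIs; the controller-manager flag itself is set on the master and is not shown here, since how CDK manages that process can vary.

    # Resource metrics API, as registered in the cluster
    kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes
    # Custom metrics API, as served by the k8s-prometheus-adapter
    kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
    # If both return JSON but the HPA events still mention http://heapster,
    # the controller manager is not honouring
    # --horizontal-pod-autoscaler-use-rest-clients, as described above.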
[21:10] I'm using the manual provider and have bootstrapped a controller; the controller is behind a firewall (NAT) and I can connect to the machine
[21:10] however, the client is then trying to download the tools from the controller's internal address instead of its external IP - is there any way to specify this?
=== agprado|afk is now known as agprado
[21:56] admcleod: try adding the public IP first in .local/share/juju/controllers.yaml for the api-endpoint.
[22:00] kwmonroe: hmm yeah, it's there first... maybe I should make it the only one
[22:02] admcleod: yeah, try that, but don't forget whatever was there before you remove it ;)
[22:04] kwmonroe: ha, thanks
[22:05] kwmonroe: has something like this worked for you?
[22:48] admcleod: yeah, forcing an endpoint IP has worked for me in the distant past (like the 2.0 timeframe). It's been months since I've been in that kind of environment though.
[22:55] kwmonroe: k cool
[22:57] kwmonroe: something is adding the other IP back in automatically
[23:13] hrm... admcleod, could you use sshuttle to give your client access to the controller's subnet? sshuttle -r user@firewall a.b.c.d/24
[23:14] s/firewall/nat machine
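A rough sketch of the two workarounds discussed above, with placeholder values (203.0.113.10 for the controller's public address, 10.0.0.0/24 for its internal subnet, my-controller and nat-machine for the controller and NAT host - all assumptions, not taken from the log):

    # Option 1: edit ~/.local/share/juju/controllers.yaml so the public
    # address is the first (or only) entry under api-endpoints, e.g.:
    #   controllers:
    #     my-controller:
    #       api-endpoints: ['203.0.113.10:17070']
    #
    # Option 2: tunnel the controller's internal subnet to the client with
    # sshuttle, routing through the NAT machine:
    sshuttle -r user@nat-machine 10.0.0.0/24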