/srv/irclogs.ubuntu.com/2018/01/24/#juju.txt

00:08 <jose-phillips> question
00:08 <jose-phillips> I have a charm locally and I need to redeploy with the changes I made locally
00:09 <jose-phillips> but upgrade-charm --path ./path and resolved don't upgrade the charm on the destination
02:02 <hloeung> jose-phillips: maybe upgrade-charm --switch ?
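
The workflow under discussion, as a minimal sketch - the application name and charm path here are hypothetical:

    # Deploy once from a local charm directory
    juju deploy ./mycharm
    # After local edits, push the new revision to the deployed application
    juju upgrade-charm mycharm --path ./mycharm
    # If a unit sits in an error state, clear it first so the upgrade hook can run
    juju resolved mycharm/0 --no-retry
    # hloeung's suggestion: --switch crosses to a different charm URL entirely
    juju upgrade-charm mycharm --switch cs:~someuser/mycharm
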
=== Guest34686 is now known as manadart
11:33 <manadart> window
11:33 <manadart> Oops :)
=== rogpeppe1 is now known as rogpeppe
=== frankban is now known as frankban|afk
=== frankban|afk is now known as frankban
15:56 <rick_h> kwmonroe: you feeling up to the Juju Show today?
15:58 <kwmonroe> sure thing rick_h. I've got some conjure-up/k8s news to share.
15:58 <rick_h> kwmonroe: woot
16:03 <tychicus> Hi, I have an issue where my controller is stuck in an error loop. I attempted to destroy kibana, elasticsearch, filebeat, packetbeat, and topbeat from the juju-gui
16:04 <tychicus> the machine-0.log shows an endless stream of the following type of messages
16:04 <tychicus> 2018-01-24 16:01:30 ERROR juju.state unit.go:339 cannot delete history for unit "u#filebeat/16#charm": <nil>
16:04 <tychicus> 2018-01-24 16:01:30 ERROR juju.state unit.go:339 cannot delete history for unit "u#topbeat/7#charm": <nil>
16:04 <tychicus> 2018-01-24 16:01:30 ERROR juju.state unit.go:339 cannot delete history for unit "u#filebeat/13#charm": <nil>
16:04 <tychicus> 2018-01-24 16:01:35 ERROR juju.state unit.go:339 cannot delete history for unit "u#packetbeat/13#charm": <nil>
16:05 <tychicus> followed by
16:05 <tychicus> 2018-01-24 16:04:59 ERROR juju.rpc server.go:510 error writing response: write tcp 10.110.0.111:17070->10.110.0.117:34072: write: broken pipe
16:05 <tychicus> 2018-01-24 16:04:59 ERROR juju.rpc server.go:510 error writing response: write tcp 10.110.0.111:17070->10.110.0.117:34072: write: broken pipe
16:05 <tychicus> 2018-01-24 16:04:59 ERROR juju.rpc server.go:510 error writing response: write tcp 10.110.0.111:17070->10.110.0.117:34072: write: broken pipe
16:07 <tychicus> mongodb is maxing out all available processors. Is there anything I can do to reset my controller to a healthy state?
=== mlb is now known as mlbiam
16:32 <mlbiam> hello, I'm trying to configure a canonical k8s distro with OIDC. I found this medium blog (https://medium.com/@tvansteenburgh/the-canonical-distribution-of-kubernetes-weekly-development-summary-49274b78b5c2) that says it's there, but we can't find where we configure the api server flags
16:51 <kwmonroe> mlbiam: I can't find a reference to that oidc interface in the k8s charms or bundles, so I'm guessing that was more of a foundation that is meant to be exploited later. I do, however, see a config option for api server flags. You'd set them like: juju config kubernetes-master api-extra-args="foo=bar ham=burger"
16:52 <mlbiam> kwmonroe - awesome, that confirms what we were thinking
16:53 <mlbiam> here's my next question: what's the right way to import our OIDC CA cert?
16:58 <kwmonroe> mlbiam: I'm not really sure because I typically let the easyrsa charm handle all my CA/cert needs. Cynerva or ryebot, do you guys know if/how/where you might inject your own CA cert?
16:59 <mlbiam> this is only the CA for our OIDC provider, we're not changing any of kubernetes' internal CA information
17:00 <mlbiam> (we're trying to figure out how to set the --oidc-ca-file parameter on the API server)
17:00 <ryebot> ah
17:01 <mlbiam> or even better, get juju to install our CA cert into the trust store on the API server
17:01 <ryebot> mlbiam: if that's all you need, you can set it with the api-extra-args config option on kubernetes-master
17:01 <mlbiam> right, but where do I put the cert so it's accessible by the API server?
17:01 <ryebot> of course, you'd need to maintain it on each master node yourself, unfortunately
17:02 <ryebot> mlbiam: anywhere accessible by root should do; apiserver is run by root
17:03 <mlbiam> ok, so juju won't overwrite it? I see that the easyrsa files are in /root/cdk. Would that be as good a place as any?
17:03 <Cynerva> it needs to be in a non-hidden folder in /root 'cause it's a confined snap with the home interface
17:04 <mlbiam> got it
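
A hedged sketch of the resulting configuration, assuming the OIDC CA ends up at /root/cdk/oidc-ca.crt (a hypothetical path) and an example issuer URL; the flag names are the upstream kube-apiserver OIDC flags:

    # Copy the OIDC provider's CA onto each master first (e.g. via juju scp plus a sudo mv),
    # then pass the apiserver flags through the charm's api-extra-args option:
    juju config kubernetes-master api-extra-args="oidc-issuer-url=https://login.example.com oidc-client-id=kubernetes oidc-ca-file=/root/cdk/oidc-ca.crt"
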
17:08 <mlbiam> does juju have any way of updating the OS cert store?
17:10 <mlbiam> (i.e. if I want to distribute a cert to be trusted by multiple servers)
17:11 <mlbiam> that way I could skip that parameter entirely
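
No dedicated trust-store knob comes up in this conversation, but one manual approach - assuming Ubuntu machines and a hypothetical oidc-ca.crt in the current directory - is to push the cert everywhere and refresh the system store by hand:

    # Copy the cert to each machine (machine IDs come from 'juju machines'),
    # then refresh the trust store on all of them in one shot.
    juju scp ./oidc-ca.crt 0:/tmp/oidc-ca.crt   # repeat per machine ID
    juju run --all 'sudo cp /tmp/oidc-ca.crt /usr/local/share/ca-certificates/oidc-ca.crt && sudo update-ca-certificates'
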
17:21 <ybaumy> hi. is there some good documentation on how to use juju and kubernetes persistent storage somewhere?
17:22 <ybaumy> I want to set up nfs shares
17:22 <ybaumy> to use
17:42 <rick_h> kwmonroe: do you have something for ybaumy ? ^
17:46 <ybaumy> rick_h: I'm still trying to convince people that redhat's openshift platform, well, let's put it that way, falls short of expectations
17:46 <ybaumy> rick_h: but it's a long and hard way
17:47 <rick_h> ybaumy: heh, it's always more work to change minds than to do the thing
17:48 <ybaumy> rick_h: you won't believe it .. I already did tech demos and presentations and stuff ... compared both ways to roll out clusters across multicloud providers
17:48 <ybaumy> rick_h: they still think because it's redhat it can't be that bad
17:48 <ybaumy> ;)
17:51 * rick_h whistles to the wind
17:51 <ybaumy> same goes for the SuSE CaaS platform
17:53 <ybaumy> if you guys have a piece of documentation for me I can use, it would be nice if you sent it as a PM. I have to eat now and drink beer
17:53 <knobby> ybaumy: once you get cdk installed you can just use nfs persistent volumes and persistent volume claims. Are you trying to get juju to manage your nfs or something?
17:54 <ybaumy> knobby: yes, that's what I'm trying to accomplish
17:54 <ybaumy> knobby: juju should handle the nfs shares, if that is even possible
17:54 <ybaumy> else show me a way around
17:55 <ybaumy> to scale out
17:55 <ybaumy> for workers
17:55 <ybaumy> I'm talking kubernetes
17:56 <knobby> I'm not sure I follow. Once you set up a pv and pvc you just run pods. There isn't any work outside of getting the nfs server up and reachable by the cluster.
17:56 <ybaumy> but how do I do that initially when setting up the cluster?
17:56 <knobby> are you trying to put the nfs server on the cluster?
17:56 <ybaumy> maybe I missed the options
17:57 <knobby> I just set the pvc in the deployment description and specify the mount point inside the pod, and magic happens. There isn't anything to do.
17:57 <knobby> it's all in k8s land
17:58 <ybaumy> ok, maybe I'm overcomplicating it
17:59 <ybaumy> let me try that tonight
18:00 <ybaumy> I have an application that needs an nfs volume across all pods, so I thought
18:00 <ybaumy> I could set up the cluster with that nfs volume from the start
18:01 <ybaumy> but then again, I don't have the pods yet when adding a unit
18:01 <ybaumy> though I'm stupid
18:01 <knobby> once you get kubectl you just add the pv with some yaml, then make a pvc for it, then specify the pvc in the deployment. I can try to find a tutorial for that if you want.
18:01 <knobby> it's not related to juju at all. Juju doesn't know about your nfs or the workloads on k8s
18:02 <ybaumy> knobby: that's what I understand now
18:02 <ybaumy> knobby: sorry for asking stupid questions
18:03 <knobby> if you have trouble with it, ybaumy, I have a bare metal k8s with nfs beside me and I'm happy to answer questions.
18:03 <knobby> ybaumy: not at all. This stuff is complex and it is so easy to get lost in the trees.
18:03 <ybaumy> knobby: if I can't get it to work I'd be happy to get back to you
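
A minimal sketch of the pv/pvc flow knobby describes, written as a kubectl heredoc; the NFS server address, export path, and object names are all assumptions:

    kubectl apply -f - <<'EOF'
    # PersistentVolume backed by an existing NFS export (address and path assumed)
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: nfs-pv
    spec:
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteMany              # many pods may mount the same share
      persistentVolumeReclaimPolicy: Retain
      nfs:
        server: 10.0.0.5
        path: /exports/data
    ---
    # Claim that deployments reference by name under volumes
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: nfs-pvc
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 10Gi
    EOF

A deployment then mounts the share by listing a volume with persistentVolumeClaim claimName nfs-pvc and a mountPath in each container.
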
18:04 <ybaumy> let me ask you guys one last question about scaling pods across vsphere. one colleague of mine says we should put one pod on one worker on one VM
18:05 <ybaumy> for performance reasons in vsphere
18:05 <ybaumy> what do you think of that?
18:05 <ybaumy> small VMs with only one pod?
18:05 <ybaumy> why do I need kubernetes then? I don't get it
18:06 <ybaumy> he says the vmware guys recommend it that way
18:06 <knobby> I agree with you. The point of k8s is to help utilize those machines without having to pick your VM size to match your workload. K8s will move things around and make sure things fit while giving guaranteed resources to a pod.
18:07 <ybaumy> knobby: that's the picture I have, too
18:07 <ybaumy> I'm trying to understand .. and I'm falling short
18:08 <ybaumy> on friday some vmware guy will be with us in a meeting. I hope he can clarify this. I would like to understand it beforehand, but ...
18:09 <knobby> I can't help you there. You could certainly put a single pod per node, but it seems like an odd choice. You'd spend your life adjusting nodes to scale out.
18:10 <ybaumy> and down... it's like a never-ending stream of creating/deleting VMs
18:10 <ybaumy> well, I hope he will shed light on this.
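
The "guaranteed resources" knobby mentions are expressed per container as requests and limits; a sketch with hypothetical names and sizes:

    kubectl apply -f - <<'EOF'
    # The scheduler bin-packs pods onto nodes using requests, so several pods
    # can share one large VM while each keeps its guaranteed slice.
    apiVersion: v1
    kind: Pod
    metadata:
      name: sized-pod
    spec:
      containers:
        - name: app
          image: nginx:1.13
          resources:
            requests:        # what the scheduler reserves for this container
              cpu: 500m
              memory: 256Mi
            limits:          # hard cap enforced at runtime
              cpu: "1"
              memory: 512Mi
    EOF
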
18:12 <ybaumy> let me check my beer status and eat something ... then I'll try the nfs stuff
18:12 <ybaumy> bbl
18:26 <rick_h> 34 minutes until the Juju Show!!!!! get ready.
18:27 <rick_h> kwmonroe: hml agprado magicaltrout bdx and anyone else that's been holding their breath ^
18:27 <hml> w00t!
18:28 <bdx> oh yeaaa
18:28 * rick_h needs to refill this glass pre-show
18:41 <agprado> rick_h: yep! holding my breath! for my first jujushow!
18:51 <agprado> rick_h, I can't see the hangout link in the calendar for the juju show
18:53 <rick_h> https://hangouts.google.com/hangouts/_/ygz6alasjvg4lf3fuurxu4hprye for joining the call and https://www.youtube.com/watch?v=K9x3Xhlnl1s for watching the stream
18:59 <rick_h> bdx: you coming this week?
19:06 <kwmonroe> the cloudinit bug rick_h is referring to is https://bugs.launchpad.net/juju/+bug/1535891
19:06 <mup> Bug #1535891: Feature request: Custom/user definable cloud-init user-data <cpe-onsite> <juju:Fix Released by hmlanigan> <https://launchpad.net/bugs/1535891>
19:12 <EdS> hi Juju people :)
19:12 <EdS> I have just run into a failing update charm operation
19:13 <EdS> I have connected to the affected unit, and tailing the log of the juju unit shows some python stuff about "INFO upgrade-charm TypeError: 'NoneType' object is not subscriptable"
19:14 <EdS> I'm trying to update a production k8s cluster and this is a bit of a surprise
19:15 <EdS> the units affected seem to be sitting in a loop, trying to update and repeatedly failing at the same step.
19:15 <EdS> any ideas what I should do?
19:16 <bdx> `juju model-config | grep proxy` - why are these proxy configs not applied to the containers?
19:16 <bdx> per conversation in juju show
19:17 <bdx> shouldn't a container and a machine get the same proxy config if defined at the model level?
19:17 <rick_h> bdx: so they should be on containers, but apt caches and such aren't in there I believe
19:17 <kwmonroe> monitoring a k8s cluster: https://medium.com/@kwmonroe/monitor-your-kubernetes-cluster-a856d2603ec3
19:17 <rick_h> bdx: http/https proxies are
19:17 <bdx> the apt proxies aren't then?
19:18 <rick_h> bdx: so I think it's less proxy and more cache
19:18 <rick_h> bdx: or custom repos
19:18 <bdx> ahh
19:18 <bdx> I see
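
For reference, the settings being discussed are model-config keys; a sketch with placeholder proxy addresses:

    # http/https proxies propagate to containers; the apt proxy is the piece
    # bdx found missing there (all addresses below are placeholders).
    juju model-config http-proxy=http://proxy.example.com:3128 \
                      https-proxy=http://proxy.example.com:3128 \
                      apt-http-proxy=http://aptcache.example.com:3128
    juju model-config | grep proxy    # verify what the model currently holds
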
19:19 <EdS> any ideas how I kick it to try a new version and forget about failing, please?
19:25 <kwmonroe> conjure-up spell addon repo: https://github.com/conjure-up/spells/tree/master/canonical-kubernetes/addons
19:37 <rick_h> EdS: this is for a charm you're updating?
19:38 <rick_h> EdS: just mark it "juju resolved xxxx --no-retry" so that it will go green but not rerun anything
19:38 <EdS> not as such - I'm updating our canonical kubernetes setup
19:38 <rick_h> EdS: and then run your upgrade-charm command (if it's a charm you mean)
19:39 <EdS> all I did was (some time ago) deploy https://api.jujucharms.com/charmstore/v5/canonical-kubernetes-117/archive/bundle.yaml
19:39 <EdS> after editing the charm versions mentioned in my local copy, I have updated the deployment
19:39 <EdS> I did several minor versions with no trouble
19:40 <EdS> now all my kubernetes worker and master nodes are stuck in a loop with the message: hook failed: "upgrade-charm"
19:41 <rick_h> hml: do you have that bug link again please for the cloud-init stuff for the show notes?
19:41 * rick_h hangs head in shame he didn't copy it from the hangout chat
19:42 * hml looking
19:43 <hml> rick_h: https://bugs.launchpad.net/bugs/1535891
19:43 <mup> Bug #1535891: Feature request: Custom/user definable cloud-init user-data <cpe-onsite> <juju:Fix Released by hmlanigan> <https://launchpad.net/bugs/1535891>
19:43 <EdS> ok thanks...
19:43 <rick_h> hml: ty!
19:44 <hml> rick_h: it only defines the first set of work, which is already out there
19:44 <rick_h> hml: right, bdx was wondering about that so I wanted to have that link in the notes
19:44 <hml> not the replication from machine to container piece
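
The first set of work hml mentions is exposed as a model-config key; a sketch, assuming the cloudinit-userdata key from that feature and a placeholder cloud-init snippet:

    # userdata.yaml holds extra cloud-init directives merged into new machines
    # (the contents here are only an example)
    cat > userdata.yaml <<'EOF'
    packages:
      - htop
    EOF
    juju model-config cloudinit-userdata="$(cat userdata.yaml)"
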
19:45 <rick_h> EdS: sorry, I'm not sure to be honest. You upgrade the charms by running an upgrade-charm command; editing the yaml file doesn't do anything. So I'm assuming you did an upgrade-charm on each application name and it errored in some way?
19:45 <rick_h> EdS: if so, the folks that work on those charms would want to see what the juju debug-log looks like for those errors, and if you're wanting to try again you'd run the "juju resolved xxxx --no-retry" and then the "juju upgrade-charm xxx" as I noted
19:46 <EdS> ok, thanks very much. I'm trying to work out what happened and agree I'll try to gather some info.
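
rick_h's recipe spelled out for one of the stuck applications (unit and application names assumed from the bundle):

    # Clear the failed hook on each errored unit without re-running it...
    juju resolved kubernetes-worker/0 --no-retry
    # ...then retry the upgrade for the whole application
    juju upgrade-charm kubernetes-worker
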
21:04 <knobby> I'd think the first step is to look at the debug-log and figure out why it doesn't want to upgrade
21:04 <knobby> EdS: `juju debug-log --replay | pastebinit`
21:05 <EdS> knobby: ok :)
21:06 <EdS> I've carried on working on it. I'm assuming the log will be massive by now
21:06 <knobby> no doubt
21:06 <knobby> if it is too big for pastebinit, we might have to get to it another way
21:11 <EdS> *Blammo* https://api.jujucharms.com/charmstore/v5/canonical-kubernetes-117/archive/bundle.yaml
21:11 <EdS> ah crud
21:11 <EdS> Failed to contact the server: [Errno socket error] [Errno socket error] The read operation timed out
21:11 <EdS> please ignore link
21:11 <EdS> ;)
21:11 <EdS> yeah, pastebinit died
21:11 <knobby> timeout smells like too large a file to me
21:12 <EdS> yes
21:12 <knobby> maybe like `juju debug-log --replay | tail -n 10000 | pastebinit`
21:12 <kwmonroe> EdS: you can limit the log to just one failing instance with "juju debug-log --include kubernetes-worker/0 --replay" or "--lines 1000" as an alt to --replay.
21:13 <knobby> or that is even better
21:13 <kwmonroe> knobby: you rascal.. --lines (-n) makes the tail unnecessary.
21:13 <EdS> haha, just dumped to a file
21:13 <EdS> 36MB
21:13 <EdS> :/
21:14 <knobby> just do the --lines 10000 and pastebinit that
21:14 <knobby> or tail that file to it or something
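
Combining those suggestions into one line (unit name assumed):

    # last 10000 lines for just the failing unit, straight to a pastebin
    juju debug-log --include kubernetes-worker/0 --lines 10000 | pastebinit
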
21:15 <EdS> http://paste.ubuntu.com/26454287/
21:16 <EdS> lines 127-142
21:16 <EdS> I was seeing a lot of that sort of "nonetype is not subscriptable" error.
21:17 <EdS> does "watch juju status" being left in a terminal make a load of log noise anywhere?
21:23 <kwmonroe> nah EdS, I'm not aware of juju status increasing log noise anywhere.
21:23 <EdS> phew.
21:26 <knobby> so there's supposed to be a dictionary in the unit with information like the cluster credentials. Your credentials don't exist.
21:27 <knobby> investigating
21:31 <EdS> interesting. that may be one stage of the brokenness, but I think that was all fine at several points (it was a running cluster, working fine until the update failed)
21:34 <EdS> \o/ happy cluster again
21:34 <EdS> I ended up killing the workers but keeping headroom.
21:34 <EdS> now I have "updated" workers ;)
21:35 <ryebot> \o/
21:35 <EdS> not pretty, but it's late and that was weirdness I would like to be without
21:51 <EdS> thanks for taking a look :) please buzz if you want any more info (but I'm going away soon and will be back tomorrow)
21:59 <kwmonroe> EdS: I opened this to make sure we have required creds in place before doing things like "start_worker": https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/474
22:00 <EdS> thanks :)
22:00 <kwmonroe> np EdS - thanks for breaking stuff to make it better ;)
22:00 <EdS> I'm still here trying to work out how to upgrade nginx too ;)
22:00 <EdS> I want the new ingress controller, but cannot figure that out -_-
22:01 <EdS> haha, no worries. any time.
22:09 <EdS> still trying to get this bit straight, and it's juju related, I'm sure - I wonder if you could help shed some light on it?
22:10 <EdS> using juju and maas, I set up canonical kubernetes
22:10 <EdS> I have nginx as the ingress controller in kubernetes
22:10 <EdS> it's on the workers
22:10 <EdS> it all got updated
22:10 <EdS> but the nginx version is the same
22:10 <EdS> I don't quite know where it comes from when setting it up with juju, since I'm so far removed from the setup
22:20 <knobby> is it beta.13?
22:21 <knobby> EdS: I just added something to the charm to allow changing that. It was hard-coded in the charm to a specific version and we weren't tracking releases well. I updated the version and made it a config option to specify the image to use, but that isn't in stable yet.
22:22 <EdS> :) yep, it is NGINX 0.9.0-beta.13 git-949b83e
22:23 <EdS> this has been driving me nuts, I just didn't know where it was from ;)
22:23 <knobby> I bumped it to beta.15. Unfortunately it is hard-coded in the charm, so unless you want to fork the charm to update it you'll just have to live with it for a bit longer.
22:24 <knobby> EdS: the code is around here: https://github.com/kubernetes/kubernetes/blob/cab439b20fbb02cc086bf63b6dd7d51f1908067c/cluster/juju/layers/kubernetes-worker/reactive/kubernetes_worker.py#L658
22:28 <EdS> not to worry - I now know I'm not going crazy...
22:29 <EdS> when do you expect that to be available?
22:29 <EdS> thank you
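
Once knobby's change reaches a stable charm release, the ingress image should be switchable through charm config; a sketch with an assumed option name and a placeholder image reference, since neither is confirmed in this conversation:

    # pick up the newer charm first, then point it at the desired image
    juju upgrade-charm kubernetes-worker
    # option name and registry path below are hypothetical - check
    # 'juju config kubernetes-worker' for the real knob once it lands
    juju config kubernetes-worker nginx-image=<registry>/nginx-ingress-controller:0.9.0-beta.15
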
22:37 <EdS> thanks all :) I gotta get some sleep

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!