=== petevg_ is now known as petevg
=== verterok` is now known as verterok
=== zeus is now known as Guest88902
[08:26] kubernetes is running on ipv6... but flannel uses ipv4, so i have problems with routing
[10:27] Nice SHOW!
[11:58] Hi here, one of my 3 kubernetes-master units has been marked "error" since this morning, with no special action on my side. I found this in `juju debug-log`; do you have hints? https://paste.ubuntu.com/p/Kg4pSmGM2p/
[12:03] and in /var/log/syslog of this kubernetes-master, I found multiple API requests which result in 404...
[13:41] just did remove-machine and reinstalled it... seems to be fine at least
[15:09] Zic: that's super bizarre. when the k8s-master restarts the apiserver, it first gets the current status, then sets a restart message and does the restart, and then sets the status back to what it was (https://github.com/kubernetes/kubernetes/blob/master/cluster/juju/layers/kubernetes-master/reactive/kubernetes_master.py#L920). the error from your paste suggests the "status_get" returned the string "error", which is not a
[15:09] valid workload state. my only guess is that the juju agent on the machine in question is old and maybe had a buggy version of the 'status-get' tool.
[15:11] you could run this to find which status-get was getting called on the broken master: juju run --unit kubernetes-master/X 'locate status-get'
[15:11] or ignore it since an add-unit worked for now :)
[15:33] kwmonroe: thanks anyway for the explanation ;)
[15:35] np
=== frankban is now known as frankban|afk
[21:26] so I was trying to use the officially blessed nfs charm and it is old and broken. Is there someone working on that? I see a bzr branch that is supposed to fix the broken part, but no movement since January of 2016.
[21:39] knobby: the nfs charm should be unblessed. it was marked unmaintained a million bazillions ago, but it doesn't look like it was ever unpromulgated.
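The save/restart/restore flow described above can be sketched roughly like this. The helper names below are injected stand-ins for the charmhelpers hookenv calls, not the actual kubernetes-master code; the guard reflects the failure mode in the paste, where the saved state ("error") is not one a charm is allowed to set back:

```python
# Rough sketch of the status save/restart/restore pattern, with a guard
# for a saved state (like "error") that status-set would reject.
# status_get/status_set/do_restart are hypothetical stand-ins, injected
# so the logic is testable without a juju agent.

# The four workload states a charm may set; juju itself can report
# others (e.g. "error") that are not valid arguments to status-set.
SETTABLE_STATES = {"maintenance", "blocked", "waiting", "active"}

def restart_apiserver(status_get, status_set, do_restart):
    """Save the current status, restart, then restore the saved status."""
    prev_state, prev_msg = status_get()
    status_set("maintenance", "Restarting kube-apiserver")
    do_restart()
    if prev_state in SETTABLE_STATES:
        # Normal path: put the status back exactly as it was.
        status_set(prev_state, prev_msg)
    else:
        # Saved state is not settable; fall back instead of crashing.
        status_set("active", "")
```

With a guard like this, a bogus "error" from an old status-get tool would degrade to "active" rather than blowing up the hook.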
[21:42] kwmonroe: so I can just fork it into github and go through the pain of trying to get something useful up? :)
[21:42] if by fork, you mean totally rewrite, then yeah
[21:43] that thang isn't reactive, doesn't support xenial, and has the letter 'n' as its icon. we can do a little better ;)
[21:53] I get that reactive is the latest craze, but... so what if it isn't reactive? :) the fix for it can be 4 loc, or it can be a complete rewrite in reactive plus bug fixes. Unless there's a compelling reason to rewrite the entire thing, why bother?
[21:57] Hi. I created two kubernetes clusters, each with its own controller. I then deleted all the models from one of the controllers. `juju models` says "No selected controller". How do I access the controller again?
[21:57] `juju controllers` will list them and `juju switch my_new_controller` will select one
[21:59] @knobby - that worked!! thanks. Do you know how I would then get my kubectl KUBECONFIG working again with the selected cluster?
[22:00] knobby: whatever works for you
[22:01] cynthiaoneill: you can use `juju scp kubernetes-master/0:config ~/.kube/config`
[22:07] rick_h_: Is there a compelling reason to write it in reactive? I honestly don't know and just don't want to do it just because it's the soup du jour.
[22:07] hmmm kubectl is hanging
[22:07] juju status looks good?
[22:10] knobby: well the big reason is to make things more able to be processed in steps when things are ready (like relations) vs the raw hook execs.
[22:11] knobby: so I know I find it a bit easier to think of my charm in terms of "installed, configured, db ready, proxy ready"
[22:11] knobby: but as I say, if you need a 4 line fix, don't rewrite it if you're not interested in doing it
[22:11] it's not a hard rule
[22:12] cynthiaoneill: are you sure your kubeconfig is pointing to the right place?
[22:14] cynthiaoneill: also check ls -la `which kubectl` and make sure it's what you expect
[22:23] it's strange, kubectl -h works.
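The "installed, configured, db ready" framing can be illustrated with a toy dispatcher. This mimics the shape of charms.reactive's `@when` decorator but is deliberately not its API; handlers declare the flags they need and run only once those flags are set, instead of re-deriving state inside every raw hook:

```python
# Toy illustration of the reactive idea: handlers declare required
# flags and fire only when all of them are set. NOT the charms.reactive
# API; names here are made up for illustration.

def when(*flags):
    """Mark a handler with the flags it requires."""
    def wrap(fn):
        fn.needs = set(flags)
        return fn
    return wrap

def dispatch(handlers, flags):
    """Run every handler whose required flags are all present; return names run."""
    ran = []
    for handler in handlers:
        if handler.needs <= flags:
            handler()
            ran.append(handler.__name__)
    return ran

@when("installed")
def configure():
    pass  # e.g. write config files

@when("installed", "db.ready")
def start_service():
    pass  # e.g. start the daemon once the db relation is up
```

The appeal is that `start_service` cannot fire before the db relation is ready, without any hook having to check for that itself; whether that's worth a rewrite of a working 4-loc fix is, as the channel says, a judgment call.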
I checked and the config file looks right for the master. It just hangs trying to connect
[22:26] cynthiaoneill: did you deploy with conjure-up?
[22:27] yes, conjure-up. Then I created a second cluster (and controller). After deleting one of the controllers (and all its models) I'm trying to get back to using the first one
[22:28] cynthiaoneill: ls -la `which kubectl`
[22:28] what does that show?
[22:28] cindy@go-node:~$ ls -la `which kubectl`
[22:28] -rwxrwxr-x 1 cindy cindy 86 Mar 29 21:04 /home/cindy/bin/kubectl
[22:29] I think her default config is wrong
[22:29] cynthiaoneill: cat that file
[22:29] it'll show you what kubeconfig is being used
[22:30] YEP!! wrong one. KUBECONFIG=/home/cindy/.kube/config.conjure-canonical-kubern-15d /snap/bin/kubectl $@
[22:30] weird, i exported KUBECONFIG myself to try to point to the right one
[22:30] How do I change that?
[22:31] you can change that file, or just delete it
[22:35] Editing the file worked! Thanks, I thought that was the binary at first!
[22:35] :)
[22:38] Do you know if it is possible to add arguments to the canonical-kubernetes charm, so that you can do things like change the machine names or add ssh keys?
[22:42] Your answer might be: it's a code change. I haven't looked at the code yet
[22:44] cynthiaoneill: you might be interested in `juju ssh -h`
[22:45] so for example, `juju ssh kubernetes-master/0`
[22:46] I got that working... but when provisioning a cluster we would like to add private keys to the nodes
[22:46] also would like names of nodes to be less generic
[22:47] ability to add a cluster id or name to the node names that appear in AWS, for example
[22:50] you could use `juju run --all` to add keys
[22:51] not sure if renaming nodes will mess anything up in juju
[22:51] balloons, do you know? ^
[22:52] for key management there are other options too, see `juju help commands | grep ssh`
[22:53] so you can't change the names for initial creation? (by passing in an arg, or an environment or yaml change?)
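The hang above came down to which kubeconfig kubectl actually resolved: the conjure-up wrapper at `/home/cindy/bin/kubectl` hard-coded a stale KUBECONFIG, shadowing the exported one. A simplified sketch of the lookup order (real kubectl additionally merges colon-separated paths listed in `$KUBECONFIG`):

```python
import os

def kubeconfig_path(environ, home):
    """Simplified kubectl config resolution: $KUBECONFIG wins,
    otherwise fall back to ~/.kube/config. (Real kubectl also
    merges colon-separated paths in $KUBECONFIG.)"""
    return environ.get("KUBECONFIG") or os.path.join(home, ".kube", "config")
```

Because a wrapper script sets the environment *inside* its own invocation, whatever it assigns to KUBECONFIG wins over anything exported in the parent shell, which is why editing (or deleting) the wrapper fixed it.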
[22:53] The juju run command is pretty cool!
[22:59] cynthiaoneill: see `juju model-config -h` - you can pass custom cloud-init user data - maybe you could use that to set your hostnames?
[23:01] although to be honest i think there's probably a better way to achieve what you need, rather than changing the host names
[23:01] Cool!! I will check that out. Thank you :)
[23:02] you shouldn't really care what the machine names are - machines are cattle, not pets! ;)
[23:02] True! Just really want the visible AWS instance names to be more readable - we'll have multiple clusters in a region
[23:03] We are already getting eye strain with just 2 clusters in the same region
[23:04] I suppose we can use the filter option
[23:08] cynthiaoneill: you could set the Name that appears in the aws console with `ec2addtag --tag Name=my-fancy-name`
[23:09] maybe that makes things more readable w/o needing to change the hostnames
[23:09] it is aws-specific though
[23:09] Where do you run that command?
[23:10] ec2addtag is part of the aws command-line tools
[23:11] you could run it from your laptop, or wherever you have your aws creds
[23:14] currently kubelet won't let you change the hostname on aws. There is an issue upstream for it
[23:15] wait, I might be thinking of node names in k8s
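The tag-instead-of-rename suggestion can be sketched as a small helper that composes a readable EC2 "Name" tag embedding a cluster id, so several clusters in one region are distinguishable at a glance. The naming scheme here is invented for illustration; the resulting tag would be applied with ec2addtag as above (or its modern equivalent, `aws ec2 create-tags`):

```python
# Hypothetical helper for readable AWS console names: build an EC2
# "Name" tag that embeds a cluster id. The "<cluster>-<charm>-<unit>"
# scheme is made up for illustration, not something juju produces.

def name_tag(cluster_id, charm_name, unit_number):
    """Return an EC2-shaped tag dict, e.g. for unit kubernetes-worker/2
    in a hypothetical 'prod-east' cluster."""
    return {"Key": "Name",
            "Value": "{}-{}-{}".format(cluster_id, charm_name, unit_number)}
```

Since this only touches the AWS Name tag and not the machine hostname, it sidesteps the kubelet hostname restriction mentioned at [23:14].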