[08:26] <ybaumy> kubernetes is running on ipv6... but flannel uses ipv4... so i have problems with routing
[10:27] <lonroth_scania> Nice SHOW!
[11:58] <Zic> Hi here, one of my 3 kubernetes-master units has been marked "error" since this morning, with no special action from my side. I found this in `juju debug-log`, do you have hints? https://paste.ubuntu.com/p/Kg4pSmGM2p/
[12:03] <Zic> and in /var/log/syslog of this kubernetes-master, I found multiple API requests which result in 404...
[13:41] <Zic> just ran remove-machine and reinstalled it... seems to be fine at least
[15:09] <kwmonroe> Zic: that's super bizarre.  when the k8s-master restarts the apiserver, it first gets the current status, then sets a restart message, does the restart, and then sets the status back to what it was (https://github.com/kubernetes/kubernetes/blob/master/cluster/juju/layers/kubernetes-master/reactive/kubernetes_master.py#L920).  the error from your paste suggests that "status_get" returned the string "error", which is not a valid workload state.
[15:09] <kwmonroe> my only guess is that the juju agent on the machine in question is old and maybe had a buggy version of the 'status-get' tool.
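(A rough bash sketch of the save-and-restore pattern kwmonroe describes, using Juju's real status-get/status-set hook tools; the service name and surrounding logic are illustrative, not the charm's actual code:)

    # capture the current workload status before restarting
    previous=$(status-get)
    status-set maintenance 'restarting kube-apiserver'
    systemctl restart kube-apiserver    # hypothetical service name
    # restore the saved status; this fails if $previous is "error",
    # because "error" is not a settable workload state
    status-set "$previous" ''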
[15:11] <kwmonroe> you could run this to find which status-get was getting called on the broken master:  juju run --unit kubernetes-master/X 'locate status-get'
[15:11] <kwmonroe> or ignore it since an add-unit worked for now :)
[15:33] <Zic> kwmonroe: thanks anyway for the explanation ;)
[15:35] <kwmonroe> np
[21:26] <knobby> so I was trying to use the officially blessed nfs charm and it is old and broken. Is there someone working on that? I see a bzr branch that is supposed to fix the broken part, but no movement since January of 2016.
[21:39] <kwmonroe> knobby: the nfs charm should be unblessed.  it was marked unmaintained a million bazillions ago, but it doesn't look like it was ever unpromulgated.
[21:42] <knobby> kwmonroe: so I can just fork it into github and go through the pain of trying to get something useful up? :)
[21:42] <kwmonroe> if by fork, you mean totally rewrite, then yeah
[21:43] <kwmonroe> that thang isn't reactive, doesn't support xenial, and has the letter 'n' as its icon.  we can do a little better ;)
[21:53] <knobby> I get that reactive is the latest craze, but... so what if it isn't reactive? :) the fix can be 4 LOC, or it can be a complete rewrite in reactive plus the bug fixes. Unless there's a compelling reason to rewrite the entire thing, why bother?
[21:57] <cynthiaoneill> Hi.  I created two kubernetes clusters, each with its own controller.  I then deleted all the models from one of the controllers.  Now `juju models` says "No selected controller".  How do I access the controller again?
[21:57] <knobby> `juju controllers` will list them and `juju switch my_new_controller`
[21:59] <cynthiaoneill> @knobby - that worked!! thanks.  Do you know how I would then get my kubectl KUBECONFIG working again with the selected cluster?
[22:00] <rick_h_> knobby: whatever works for you
[22:01] <knobby> cynthiaoneill: you can use `juju scp kubernetes-master/0:config ~/.kube/config`
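(Tying the two answers together, a minimal sketch; the controller name and unit number are examples from this conversation, not fixed values:)

    # pick the surviving controller, then pull its kubeconfig down
    juju controllers
    juju switch my_new_controller
    juju scp kubernetes-master/0:config ~/.kube/config
    kubectl cluster-info    # quick sanity check against the right cluster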
[22:07] <knobby> rick_h_: Is there a compelling reason to write it in reactive? I honestly don't know and don't want to do it just because it's the soupe du jour.
[22:07] <cynthiaoneill> hmmm kubectl is hanging
[22:07] <knobby> juju status looks good?
[22:10] <rick_h_> knobby: well the big reason is that it lets you process things in steps as they become ready (like relations), instead of reacting to raw hook execs.
[22:11] <rick_h_> knobby: so I know I find it a bit easier to think of my charm in terms of "installed, configured, db ready, proxy ready"
[22:11] <rick_h_> knobby: but as I say, if you need a 4 line fix don't rewrite it if you're not interested in doing it
[22:11] <rick_h_> it's not a hard rule
[22:12] <tvansteenburgh> cynthiaoneill: are you sure your kubeconfig is pointing to the right place?
[22:14] <tvansteenburgh> cynthiaoneill: also check ls -la `which kubectl` and make sure it's what you expect
[22:23] <cynthiaoneill> it’s strange, kubectl -h works.  I checked and the config file looks right for the master.  It just hangs trying to connect
[22:26] <tvansteenburgh> cynthiaoneill: did you deploy with conjure-up?
[22:27] <cynthiaoneill> yes, conjure-up.  Then I created a second cluster (and controller).  After deleting one of the controllers (and all models) I’m trying to get back to using the first one
[22:28] <tvansteenburgh> cynthiaoneill: ls -la `which kubectl`
[22:28] <tvansteenburgh> what does that show?
[22:28] <cynthiaoneill> cindy@go-node:~$ ls -la `which kubectl`
[22:28] <cynthiaoneill> -rwxrwxr-x 1 cindy cindy 86 Mar 29 21:04 /home/cindy/bin/kubectl
[22:29] <knobby> I think her default config is wrong
[22:29] <tvansteenburgh> cynthiaoneill: cat that file
[22:29] <tvansteenburgh> it'll show you what kubeconfig is being used
[22:30] <cynthiaoneill> YEP!! wrong one. KUBECONFIG=/home/cindy/.kube/config.conjure-canonical-kubern-15d /snap/bin/kubectl $@
[22:30] <cynthiaoneill> weird, i exported KUBECONFIG myself to try and point to the right one
[22:30] <cynthiaoneill> How do I change that?
[22:31] <tvansteenburgh> you can change that file, or just delete it
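(What the edit amounts to: ~/bin/kubectl is a one-line wrapper script, so the fix is pointing its hard-coded KUBECONFIG at the config that `juju scp` fetched. Paths are from cynthiaoneill's setup; quoting "$@" is also a small correctness fix over the original:)

    # ~/bin/kubectl wrapper, edited to use the freshly fetched config
    KUBECONFIG=/home/cindy/.kube/config /snap/bin/kubectl "$@"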
[22:35] <cynthiaoneill> Editing the file worked!  Thanks, I thought that was the binary at first!
[22:35] <cynthiaoneill> :)
[22:38] <cynthiaoneill> Do you know if it is possible to pass arguments to the canonical-kubernetes charm, so that you can do things like change the machine names or add ssh keys?
[22:42] <cynthiaoneill> Your answer might be, it’s a code change.  I haven’t looked at the code yet
[22:44] <tvansteenburgh> cynthiaoneill: you might be interested in `juju ssh -h`
[22:45] <tvansteenburgh> so for example, `juju ssh kubernetes-master/0`
[22:46] <cynthiaoneill> I got that working… but when provisioning a cluster, we would like to add private keys to the nodes
[22:46] <cynthiaoneill> also would like names of nodes to be less generic
[22:47] <cynthiaoneill> ability to add a cluster id or name to the node names that appear in AWS for example
[22:50] <tvansteenburgh> you could use `juju run --all` to add keys
[22:51] <tvansteenburgh> not sure if renaming nodes will mess anything up in juju
[22:51] <tvansteenburgh> balloons, do you know? ^
[22:52] <tvansteenburgh> for key management there are other options too, see `juju help commands | grep ssh`
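(A couple of concrete options, sketched; the key material and GitHub user are placeholders. add-ssh-key and import-ssh-key let juju manage the keys, while juju run pushes to every machine directly:)

    # let juju manage the key across all machines in the model
    juju add-ssh-key "ssh-rsa AAAA... ops@example.com"
    juju import-ssh-key gh:some-github-user

    # or append it yourself on every machine
    juju run --all 'echo "ssh-rsa AAAA... ops@example.com" >> /home/ubuntu/.ssh/authorized_keys'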
[22:53] <cynthiaoneill> so you can’t change the names at initial creation? (by passing in an arg, an environment variable, or a yaml change?)
[22:53] <cynthiaoneill> The juju run command is pretty cool!
[22:59] <tvansteenburgh> cynthiaoneill: see `juju model-config -h` - you can pass custom cloud-init user data - maybe you could use that to set your hostnames?
[23:01] <tvansteenburgh> although to be honest i think there's probably a better way to achieve what you need, rather than changing the host names
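(A hedged sketch of that suggestion, assuming a Juju version recent enough to support the cloudinit-userdata model-config key; the hostname is a placeholder:)

    # extra cloud-init directives that juju merges into newly provisioned machines
    cat > cloudinit-userdata.yaml <<'EOF'
    preruncmd:
      - hostnamectl set-hostname my-cluster-node-1
    EOF
    juju model-config cloudinit-userdata="$(cat cloudinit-userdata.yaml)"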
[23:01] <cynthiaoneill> Cool!!  I will check that out.  Thank you :)
[23:02] <tvansteenburgh> shouldn't really care what the machine names are - machines are cattle, not pets! ;)
[23:02] <cynthiaoneill> True!  Just really want the visible AWS instance names to be more readable—we’ll have multiple clusters in a region
[23:03] <cynthiaoneill> We are already having eye strain with just 2 clusters in the same region
[23:04] <cynthiaoneill> I suppose we can use the filter option
[23:08] <tvansteenburgh> cynthiaoneill: you could set the Name that appears in the aws console with `ec2addtag <instance-id> --tag Name=my-fancy-name`
[23:09] <tvansteenburgh> maybe that makes things more readable w/o needing to change the hostnames
[23:09] <tvansteenburgh> it is aws-specific though
[23:09] <cynthiaoneill> Where do you run that command?
[23:10] <tvansteenburgh> ec2addtag is part of the aws commandline tools
[23:11] <tvansteenburgh> you could run it from your laptop, or wherever you have your aws creds
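(A sketch with a placeholder instance id; ec2addtag comes from the legacy EC2 API tools, and the same Name tag can be set with the modern aws cli:)

    # legacy EC2 API tools, as suggested above
    ec2addtag i-0123456789abcdef0 --tag Name=k8s-cluster1-worker-0

    # equivalent with the modern aws cli
    aws ec2 create-tags --resources i-0123456789abcdef0 \
        --tags Key=Name,Value=k8s-cluster1-worker-0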
[23:14] <knobby> currently kubelet won't let you change the hostname on aws. There is an issue upstream for it
[23:15] <knobby> wait, I might be thinking about node names in k8s