/srv/irclogs.ubuntu.com/2018/03/29/#juju.txt

=== petevg_ is now known as petevg
=== verterok` is now known as verterok
=== zeus is now known as Guest88902
[08:26] <ybaumy> kubernetes is running on ipv6... but flannel uses ipv4... so i have problems with routing
[10:27] <lonroth_scania> Nice SHOW!
[11:58] <Zic> Hi here, one of my 3 kubernetes-masters has been marked "error" since this morning, with no special action on my side. I found this in `juju debug-log`; do you have hints? https://paste.ubuntu.com/p/Kg4pSmGM2p/
[12:03] <Zic> and in /var/log/syslog of this kubernetes-master, I found multiple API requests which result in 404...
[13:41] <Zic> just did remove-machine and reinstalled it... seems to be fine, at least
[15:09] <kwmonroe> Zic: that's super bizarre. when the k8s-master restarts the apiserver, it first gets the current status, then sets a restart message and does the restart, and then sets the status back to what it was (https://github.com/kubernetes/kubernetes/blob/master/cluster/juju/layers/kubernetes-master/reactive/kubernetes_master.py#L920). the error from your paste suggests that "status_get" returned the string "error", which is not a valid workload state. my only guess is that the juju agent on the machine in question is old and maybe had a buggy version of the 'status-get' tool.
[15:11] <kwmonroe> you could run this to find which status-get was getting called on the broken master: juju run --unit kubernetes-master/X 'locate status-get'
[15:11] <kwmonroe> or ignore it, since an add-unit worked for now :)
[15:33] <Zic> kwmonroe: thanks anyway for the explanation ;)
[15:35] <kwmonroe> np
=== frankban is now known as frankban|afk
[21:26] <knobby> so I was trying to use the officially blessed nfs charm and it is old and broken. Is someone working on that? I see a bzr branch that is supposed to fix the broken part, but no movement since January of 2016.
[21:39] <kwmonroe> knobby: the nfs charm should be unblessed. it was marked unmaintained a million bazillions ago, but it doesn't look like it was ever unpromulgated.
[21:42] <knobby> kwmonroe: so I can just fork it into github and go through the pain of trying to get something useful up? :)
[21:42] <kwmonroe> if by fork you mean totally rewrite, then yeah
[21:43] <kwmonroe> that thang isn't reactive, doesn't support xenial, and has the letter 'n' as its icon. we can do a little better ;)
[21:53] <knobby> I get that reactive is the latest craze, but... so what if it isn't reactive? :) the fix for it can be 4 LOC, or it can be a complete rewrite in reactive plus bug fixes. Unless there's a compelling reason to rewrite the entire thing, why bother?
[21:57] <cynthiaoneill> Hi. I created two kubernetes clusters, each with its own controller. I then deleted all the models from one of the controllers. `juju models` says "No selected controller". How do I access the controller again?
[21:57] <knobby> `juju controllers` will list them, and `juju switch my_new_controller` will select one
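The two commands above as a quick sketch (these need a bootstrapped juju client; the controller name is a made-up example, use whatever `juju controllers` actually lists):

```shell
# List the controllers this client knows about; the current one
# is marked with an asterisk.
juju controllers

# Select the surviving controller (name is a made-up example).
juju switch my_new_controller

# Confirm which controller/model is now active.
juju whoami
```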
[21:59] <cynthiaoneill> @knobby - that worked!! thanks. Do you know how I would then get my kubectl KUBECONFIG working again with the selected cluster?
[22:00] <rick_h_> knobby: whatever works for you
[22:01] <knobby> cynthiaoneill: you can use `juju scp kubernetes-master/0:config ~/.kube/config`
[22:07] <knobby> rick_h_: Is there a compelling reason to write it in reactive? I honestly don't know, and I don't want to do it just because it's the soupe du jour.
[22:07] <cynthiaoneill> hmmm, kubectl is hanging
[22:07] <knobby> juju status looks good?
[22:10] <rick_h_> knobby: well, the big reason is that it makes things easier to process in steps as things become ready (like relations), vs. the hook execs.
[22:11] <rick_h_> knobby: so I know I find it a bit easier to think of my charm in terms of "installed, configured, db ready, proxy ready"
[22:11] <rick_h_> knobby: but as I say, if you need a 4-line fix, don't rewrite it if you're not interested in doing it
[22:11] <rick_h_> it's not a hard rule
[22:12] <tvansteenburgh> cynthiaoneill: are you sure your kubeconfig is pointing to the right place?
[22:14] <tvansteenburgh> cynthiaoneill: also check `ls -la $(which kubectl)` and make sure it's what you expect
[22:23] <cynthiaoneill> it's strange, kubectl -h works. I checked, and the config file looks right for the master. It just hangs trying to connect
[22:26] <tvansteenburgh> cynthiaoneill: did you deploy with conjure-up?
[22:27] <cynthiaoneill> yes, conjure-up. Then I created a second cluster (and controller). After deleting one of the controllers (and all its models), I'm trying to get back to using the first one
[22:28] <tvansteenburgh> cynthiaoneill: ls -la `which kubectl`
[22:28] <tvansteenburgh> what does that show?
[22:28] <cynthiaoneill> cindy@go-node:~$ ls -la `which kubectl`
[22:28] <cynthiaoneill> -rwxrwxr-x 1 cindy cindy 86 Mar 29 21:04 /home/cindy/bin/kubectl
[22:29] <knobby> I think her default config is wrong
[22:29] <tvansteenburgh> cynthiaoneill: cat that file
[22:29] <tvansteenburgh> it'll show you which kubeconfig is being used
[22:30] <cynthiaoneill> YEP!! wrong one. KUBECONFIG=/home/cindy/.kube/config.conjure-canonical-kubern-15d /snap/bin/kubectl $@
[22:30] <cynthiaoneill> weird, i exported KUBECONFIG myself to try to point to the right one
[22:30] <cynthiaoneill> How do I change that?
[22:31] <tvansteenburgh> you can change that file, or just delete it
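Since the conjure-up kubectl wrapper is just a one-line shell script that hardcodes KUBECONFIG (as the paste above shows), the edit can also be scripted. A sketch using the paths from this conversation (adjust to your own):

```shell
# The wrapper found above is a one-line script of the form:
#   KUBECONFIG=<old path> /snap/bin/kubectl $@
# Point its KUBECONFIG at the config fetched from the surviving master
# (e.g. with `juju scp kubernetes-master/0:config ~/.kube/config`).
WRAPPER="$HOME/bin/kubectl"      # path shown by `ls -la $(which kubectl)`
NEWCONF="$HOME/.kube/config"     # config copied from the master
sed -i "s|KUBECONFIG=[^ ]*|KUBECONFIG=$NEWCONF|" "$WRAPPER"
```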
[22:35] <cynthiaoneill> Editing the file worked! Thanks, I thought that was the binary at first!
[22:35] <cynthiaoneill> :)
[22:38] <cynthiaoneill> Do you know if it is possible to pass arguments to the canonical-kubernetes charm, so that you can do things like change the machine names or add ssh keys?
[22:42] <cynthiaoneill> Your answer might be "it's a code change". I haven't looked at the code yet
[22:44] <tvansteenburgh> cynthiaoneill: you might be interested in `juju ssh -h`
[22:45] <tvansteenburgh> so for example, `juju ssh kubernetes-master/0`
[22:46] <cynthiaoneill> I got that working... but when provisioning a cluster, we would like to add private keys to the nodes
[22:46] <cynthiaoneill> also would like the names of nodes to be less generic
[22:47] <cynthiaoneill> the ability to add a cluster id or name to the node names that appear in AWS, for example
[22:50] <tvansteenburgh> you could use `juju run --all` to add keys
[22:51] <tvansteenburgh> not sure if renaming nodes will mess anything up in juju
[22:51] <tvansteenburgh> balloons, do you know? ^
[22:52] <tvansteenburgh> for key management there are other options too; see `juju help commands | grep ssh`
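Among those ssh-related commands, `juju add-ssh-key` is probably the most direct fit for distributing keys; a hedged sketch (key file path and key material are examples):

```shell
# Register a public key with the current model; juju propagates it
# to every machine in the model.
juju add-ssh-key "$(cat ~/.ssh/id_ed25519.pub)"

# List the keys juju is managing for the model.
juju ssh-keys --full

# Alternatively, append a key on every machine directly via juju run
# (the key string here is a placeholder).
juju run --all 'echo "ssh-ed25519 AAAA... user@host" >> /home/ubuntu/.ssh/authorized_keys'
```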
[22:53] <cynthiaoneill> so you can't change the names at initial creation? (by passing in an arg, an environment variable, or a yaml change?)
[22:53] <cynthiaoneill> The juju run command is pretty cool!
[22:59] <tvansteenburgh> cynthiaoneill: see `juju model-config -h` - you can pass custom cloud-init user data - maybe you could use that to set your hostnames?
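A sketch of that cloud-init route, assuming the `cloudinit-userdata` model-config key available in newer juju 2.x releases; the runcmd content is purely illustrative, and renaming hosts may have side effects in juju or kubelet:

```shell
# cloudinit-userdata merges extra YAML into the cloud-init config of
# every NEW machine provisioned in the model.  The file content below
# is illustrative only (it tags /etc/motd rather than renaming hosts).
cat > cloudinit.yaml <<'EOF'
runcmd:
  - echo "managed by juju model cluster-one" >> /etc/motd
EOF
juju model-config cloudinit-userdata="$(cat cloudinit.yaml)"
```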
[23:01] <tvansteenburgh> although, to be honest, I think there's probably a better way to achieve what you need than changing the host names
[23:01] <cynthiaoneill> Cool!! I will check that out. Thank you :)
[23:02] <tvansteenburgh> you shouldn't really care what the machine names are - machines are cattle, not pets! ;)
[23:02] <cynthiaoneill> True! I just really want the visible AWS instance names to be more readable - we'll have multiple clusters in a region
[23:03] <cynthiaoneill> We are already getting eye strain with just 2 clusters in the same region
[23:04] <cynthiaoneill> I suppose we can use the filter option
[23:08] <tvansteenburgh> cynthiaoneill: you could set the Name that appears in the AWS console with `ec2addtag <instance-id> --tag Name=my-fancy-name`
[23:09] <tvansteenburgh> maybe that makes things more readable w/o needing to change the hostnames
[23:09] <tvansteenburgh> it is aws-specific though
[23:09] <cynthiaoneill> Where do you run that command?
[23:10] <tvansteenburgh> ec2addtag is part of the aws commandline tools
[23:11] <tvansteenburgh> you could run it from your laptop, or wherever you have your aws creds
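(`ec2addtag` belongs to the legacy EC2 API tools; the same tag can be set with the current `aws` CLI. The instance ID and Name value below are placeholders.)

```shell
# Set the Name tag that the AWS console displays for an instance.
# Requires configured AWS credentials; the instance ID and tag
# value are placeholders.
aws ec2 create-tags \
    --resources i-0123456789abcdef0 \
    --tags Key=Name,Value=cluster1-k8s-worker-0
```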
[23:14] <knobby> currently kubelet won't let you change the hostname on aws. There is an issue upstream for it
[23:15] <knobby> wait, I might be thinking of node names in k8s

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!