[02:57] <[Kid]> if i am using MAAS as a cloud provider and doing a juju deploy with a bundle YAML, why would it not spin up all the machines it needs? i.e. i have 8 machines available and "Ready" state in MAAS, but it only allocates 4 and then says pending on the others.
[02:57] <[Kid]> no constraints either
[02:59] <bdx> [Kid]: if you are deploying > 1 unit of an application, juju will try to spread the units across zones
[03:00] <bdx> if all your nodes are in the same zone
[03:00] <bdx> it can cause issues
[03:03] <bdx> [Kid]: http://paste.ubuntu.com/25905242/ <- what I do to get multiple units of something in the same zone
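The paste contents aren't preserved in the log, but a sketch of one way to pin multiple units into a single MAAS zone might look like this (the machine numbers, zone name, and application name are all assumptions, not bdx's actual paste):

```yaml
# Hypothetical bundle fragment -- not bdx's actual paste.
# Pre-declare machines with a zone constraint, then target them explicitly,
# so juju's spread-across-zones behaviour doesn't leave units pending.
machines:
  "0":
    constraints: zones=zone1   # zone name is an example
  "1":
    constraints: zones=zone1
applications:
  myapp:                       # placeholder application name
    charm: cs:myapp
    num_units: 2
    to: ["0", "1"]
```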
[03:03] <bdx> I feel like it's kind of a hack though
[03:03] <bdx> I think others do too
[05:21] <siraj> unable to remove a service using juju remove-service <service-name>, or a unit with juju remove-unit, or even a machine with juju remove-machine. Even after I deleted the container, juju show-status still displays the associated machine and application
[12:44] <Dwellr> morning all =)
[13:23] <Dwellr> anyone know a little more about "snap set kube-apiserver authorization-mode=RBAC" ?
[14:25] <Dwellr> aha.. figured out I needed to juju ssh kubernetes-master/0 -- 'sudo service snap.kube-apiserver.daemon restart' before the RBAC roles would show up
[14:35] <rick_h> Dwellr: ah cool stuff
[14:37] <Dwellr> aye.. are there any docs for the options for set kube-apiserver ? I'm wondering next how to enable initializers
[14:37] <tvansteenburgh> Dwellr: fyi, the bundle on the candidate channel allows you to enable RBAC via charm config, ala: juju config kubernetes-master authorization-mode=Node,RBAC
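In bundle form, that same charm-config setting would look something like this (a sketch assuming the candidate-channel bundle exposes the option exactly as named in the command above):

```yaml
# Hypothetical bundle fragment mirroring:
#   juju config kubernetes-master authorization-mode=Node,RBAC
applications:
  kubernetes-master:
    charm: cs:~containers/kubernetes-master
    options:
      authorization-mode: "Node,RBAC"
```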
[14:38] <Dwellr> Yeah.. I spotted that =) I look forward to it when it's released =)
[14:38] <tvansteenburgh> cool
[14:38] <tvansteenburgh> Dwellr: are you asking which initializers are available, or how to turn them on?
[14:38] <Dwellr> I couldn't figure out last night how to tell my conjure up to use different bundles tho when installing
[14:38] <Dwellr> asking how to turn on initializers for kubernetes.. I'm trying to setup Istio, and it needs them apparently.. (they may already be on for all I know atm)
[14:39] <Dwellr> https://istio.io/docs/setup/kubernetes/sidecar-injection.html#automatic-sidecar-injection
[14:40] <tvansteenburgh> Dwellr: i think they are on already...
[14:40] <Dwellr> oooh.. now that would be awesome =) I'll try when my vm comes back up =) just wiped it clean again
[15:03] <[Kid]> bdx, you around?
[15:04] <[Kid]> follow up on a question i had yesterday about juju not deploying across all MAAS nodes.
[15:05] <[Kid]> if it is a kubernetes cluster with etcd, flannel, etc., would it be good to put etcd, flannel and the others into separate zones?
[15:42] <bdx> [Kid]: you can do whatever you want, I don't see any advantage to using zones in maas unless you are going to use them like availability zones and not logical groupings
[16:31] <Dwellr> is it just me.. or is `conjure-up kube` not completing anymore? I'm seeing [info] Waiting for deployment to settle (this is normal) but then [error] Some applications failed to start successfully and then [warning] Shutting down ... but kube IS running, although it fails to create the .kube dir with the config file etc
[20:23] <[Kid]> bdx, i thought that's what you were referring to yesterday when it wasn't using all of the nodes that were available
[20:44] <bdx> totally, I was
[20:45] <[Kid]> oh, so i do need to do separate zones?
[20:45] <[Kid]> for each role
[20:45] <[Kid]> then do a juju deploy for each role?
[20:45] <bdx> [Kid]: I've just experienced the same learning curve there, yeah, that should work
[20:45] <[Kid]> won't that cause the k8s cluster not to communicate correctly?
[20:46] <bdx> the zones are just metadata representation of physical groupings
[20:46] <bdx> [Kid]: no
[20:58] <[Kid]> ahh ok
[20:58] <[Kid]> i will try that
[20:58] <[Kid]> is there a way to put constraints for tags in a bundle YAML?
[21:00] <cory_fu> Dwellr: Sorry for the late reply, but we're looking into that issue here: https://github.com/conjure-up/conjure-up/issues/1218  Can you let us know what version of conjure-up you're using? (Are you using the edge channel of the snap?  Which revision?)
[21:08] <Dwellr> snap install conjure-up --classic --edge
[21:09] <zeestrat> [Kid]: For MAAS? Just use tags in the constraints
[21:09] <[Kid]> zeestrat, the tags constraint doesn't seem to work in a bundle YAML
[21:09] <[Kid]> the cpu, mem, etc. do though
[21:09] <Dwellr> $conjure-up --version
[21:10] <Dwellr> conjure-up 2.5-alpha1
[21:10] <Dwellr> cory_fu: ^^
[21:10] <[Kid]> not sure if it is related to this: https://bugs.launchpad.net/juju/+bug/1554120
[21:10] <mup> Bug #1554120: juju 2.0 bundle support: Missing constraint support for maas names to support bundle placement <conjure-up> <juju-release-support> <oil> <oil-2.0> <uosci> <juju:Triaged> <https://launchpad.net/bugs/1554120>
[21:11] <[Kid]> the gist i get from that is that maas tags aren't supported so that bundle YAMLs stay community-driven, i.e. tags would make them less shareable?
[21:14] <cory_fu> Dwellr: Can you check the file ~/.cache/conjure-up/canonical-kubernetes/deploy-wait.err and see if it has any contents?
[21:15] <zeestrat> [Kid]: That bug is for maas names. We're using tags fine in 2.1.3 at least. Could I see the bundle?
[21:15] <[Kid]> zeestrat, sure
[21:16] <Dwellr> cory_fu: I'm using kubernetes-core, not canonical-kubernetes, and my ~/.cache/conjure-up/kubernetes-core/deploy-wait.err is empty
[21:17] <[Kid]> zeestrat: https://pastebin.com/HPHnfhdp
[21:17] <cory_fu> Dwellr: Hrm.  That matches the issue report, but I'm really unsure why that file would be empty.
[21:17] <[Kid]> also, the other weird thing is that it doesn't release 2 of the nodes in MAAS when i tear down the model
[21:18] <Dwellr> cory_fu: well.. deploy-wait.out is 0 bytes too
[21:18] <Dwellr> (as are conjure-up*bootstrap.(err|out))
[21:18] <cory_fu> Dwellr: Did you do a fresh bootstrap for that deployment?
[21:19] <cory_fu> Dwellr: Well, there's definitely something wrong with the writing of those log files.
[21:20] <Dwellr> It was a brand new vm.. I basically do    apt-get remove -qyf lxd lxd-client && snap install lxd && snap install conjure-up --classic --edge && snap install kubectl --classic && conjure-up kubernetes-core localhost
[21:21] <Dwellr> (a few other commands omitted for brevity, but basically that's it: just ditch the supplied lxd, install from snap, then run conjure-up to create the k8s)
[21:21] <cory_fu> Dwellr: Ok, thanks.  stokachu ^  Looks like there's an issue with the sub-log files across the board.  Maybe something with arun?
[21:21] <Dwellr> fwiw, it used to work ok..
[21:22] <stokachu> cory_fu: Yea I just looked at another deploy and the bootstrap log is empty
[21:22] <kwmonroe> cory_fu: is  ~/.cache/conjure-up/kubernetes-core/deploy-wait.err still the right location?  i thought you guys wrote logs to $SNAP_USER_DATA.
[21:22] <Dwellr> 25 days ago I know it worked fine, and 13 days ago it probably worked..
[21:25] <Dwellr> and it appears initializers aren't enabled in the k8s from kubernetes-core.. I guess I need to go figure out how to enable them =) "kubectl api-versions | grep admissionregistration" is empty
[21:25] <[Kid]> zeestrat: does that look wrong?
[21:25] <Dwellr> https://kubernetes.io/docs/admin/extensible-admission-controllers/#external-admission-webhooks
[21:26] <zeestrat> [Kid]: Thanks. Initial guess is that juju is getting a bit confused because of the mixing of machine placements (i.e. to:) and constraints for the same charm. We use constraints/tags in the machines section for charms that we need to colocate. For other charms we just use the constraints section in the charm. Let me see if I can find an example
[21:28] <[Kid]> hmmm that makes sense
[21:30] <kwmonroe> nm cory_fu, i see that .cache/c-u is the right location.  i thought for some reason that switched recently.
[21:31] <zeestrat> [Kid]: rick_h and the devs would be the judge to determine if it's correct behavior from juju, but here's an excerpt from our OpenStack bundle: https://pastebin.com/UZPRLp9z
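The linked pastebin isn't reproduced here, but the pattern zeestrat describes might look like this (the charm names and tag values are illustrative, not from the actual OpenStack bundle):

```yaml
# Illustrative sketch of the two styles of tag constraints:
machines:
  "0":
    constraints: tags=converged   # MAAS tag on the machine, for colocated charms
applications:
  mysql:                          # example charm, colocated on the tagged machine
    charm: cs:mysql
    num_units: 1
    to: ["lxd:0"]
  rabbitmq-server:                # example standalone charm
    charm: cs:rabbitmq-server
    num_units: 1
    constraints: tags=messaging   # tag constraint directly on the charm
```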
[21:31] <tvansteenburgh> Dwellr: these are the admission controls we enable by default: https://github.com/kubernetes/kubernetes/blob/master/cluster/juju/layers/kubernetes-master/reactive/kubernetes_master.py#L1081
[21:32] <tvansteenburgh> Dwellr: to modify api-server args, use the api-extra-args charm config on kubernetes-master
[21:33] <zeestrat> [Kid]: Note, I see we also have a constraints section for the ceph-radosgw charm in my example, but I'm pretty sure that's unnecessary/a typo
[21:34] <Dwellr> like... juju config kubernetes-master api-extra-args="--admission-control --runtime-config=admissionregistration.k8s.io/v1alpha1"
[21:36] <Dwellr> ah.. figuring it out slowly
[21:37] <tvansteenburgh> Dwellr: do `juju config kubernetes-master` - there's some help text for each option
[21:37] <tvansteenburgh> explains the format for api-extra-args
[21:37] <Dwellr> aye.. is it empty by default ?
[21:38] <tvansteenburgh> yes
[21:38] <Dwellr> awesome.. no need to worry about merging then =)
[21:38] <tvansteenburgh> Dwellr: whatever you specify will be added to the defaults
[21:40] <[Kid]> zeestrat. thank you. that helps a lot
[21:41] <[Kid]> if i do the containers/charmname, that should create a lxd container on the node, right?
[21:41] <[Kid]> i see you specified lxd in your config
[21:42] <Dwellr> Cool.. I _think_ I just got Istio with initializers to install into my k8s =)
[21:46] <zeestrat> Yeah, using the lxd:<machine-number> syntax will create and place the application in a lxd container on a machine. See also https://jujucharms.com/docs/devel/charms-bundles
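Putting those two answers together, a minimal placement sketch (the machine number and charm choice are examples):

```yaml
# Example of the lxd:<machine-number> placement syntax:
machines:
  "1": {}
applications:
  easyrsa:
    charm: cs:~containers/easyrsa   # no revision suffix -> latest published revision
    num_units: 1
    to: ["lxd:1"]                   # new lxd container on machine 1
```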
[21:47] <[Kid]> zeestrat, also, can i do a cs:~containers/easyrsa and just get the latest charm?
[21:47] <[Kid]> or do i have to specify the revision?
[21:47] <zeestrat> Yup
[21:48] <[Kid]> yeah i have read that page, but it didn't show tags as constraints
[21:49] <zeestrat> Yeah, you're quite right. Feel free to put in a bug, otherwise I'll do it tomorrow. You are correct that it probably didn't get highlighted as maas is the only cloud supporting that.
[21:50] <[Kid]> yes, it does say that on the constraints page
[21:50] <[Kid]> i was just saying in the bundle YAML it didn't seem like it supported that, but i will try it again
[21:52] <zeestrat> I'm off for tonight, but make some noise if you don't get anywhere with the bundle syntax and I'm sure someone can sort you out.
[21:52] <[Kid]> real quick
[21:52] <[Kid]> one more ?
[21:53] <zeestrat> Fire away
[21:53] <[Kid]> i don't need the to: anymore if i use the tags, right?
[21:53] <[Kid]> and you said yup to both my questions earlier, not just the last one, right?
[21:54] <[Kid]> about the cs:~container and the one asking about without the revision number
[21:54] <[Kid]> sorry for all the questions, i have been messing with this for a few days now
[21:56] <zeestrat> Sorry, yup to dropping the rev for latest. Just keep in mind latest and greatest, bleeding edge and all. Some folks prefer to follow published and tested bundles, but that's wholly up to you.
[21:57] <[Kid]> oh yeah
[21:57] <rick_h> zeestrat: [Kid] so juju won't listen to constraints if you target with --to
[21:57] <[Kid]> rick, well that is VERY good information
[21:57] <[Kid]> thank you
[21:57] <[Kid]> zeestrat, thats what you said earlier
[21:57] <rick_h> yea, because if they conflict who wins? If you say "put it there" then you know what you're doing :)
[21:57] <[Kid]> ahhh
[21:58] <[Kid]> makes sense
[21:58] <zeestrat> You need to: if you want to colocate applications on the same host, e.g. multiple controller applications in lxd containers on the same host
[21:58] <[Kid]> rick, so cs:~container/charmname will put the charm in a container on the node, right?
[21:59] <[Kid]> yeah, zeestrat, that's what i want
[21:59] <[Kid]> so i guess i constrain in the machine section
[21:59] <[Kid]> then use to: in the application section
[21:59] <rick_h> [Kid]: no, if it's container or not has nothing to do with charm url. That's actually a username/team name
[21:59] <[Kid]> ahhh
[21:59] <[Kid]> ok
[21:59] <rick_h> [Kid]: exactly
[21:59] <zeestrat> Very misleading namespace :p
[21:59] <[Kid]> hahah yeah it is
[21:59] <rick_h> target in the application --to lxd:0 or the like and then put constraints on machine 0 to get the right sized vm
[22:00] <[Kid]> that's where i have to do to: lxd:1
[22:00] <[Kid]> yeah
[22:00] <[Kid]> yes, thank you!! i have some things to try now
[22:00] <[Kid]> this makes it much more enjoyable to get past a wall
[22:00] <[Kid]> and actually start building an application
[22:00] <[Kid]> haha
[22:02] <zeestrat> Finally, a word of advice in case you have different networks set up in maas. You'll probably have to use bindings (from the bundle docs) to make sure the lxd containers get a bridge interface to the correct network.
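A hedged sketch of what such a binding might look like in a bundle (the space name is made up; check the bundle docs for the exact syntax on your Juju version):

```yaml
# Hypothetical bindings fragment -- 'internal-space' is an invented MAAS space name.
applications:
  kubernetes-master:
    charm: cs:~containers/kubernetes-master
    num_units: 1
    to: ["lxd:0"]
    bindings:
      "": internal-space   # default binding for all endpoints of the application
```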