[00:26] <wallyworld> veebers: i left a few comments, let me know if you need further clarification
[00:36] <kelvin__> wallyworld, LGTM, it's much nicer now ensuring our operator app uses a StatefulSet. Thanks.
[00:37] <wallyworld> kelvin__: awesome, thank you. we still use deployment controllers for stateless pods though
[00:37] <wallyworld> that's my understanding of best practice
[00:37] <wallyworld> also, the operator pod is still only a single pod, not yet managed
[00:38] <wallyworld> the workload pods using storage will be stateful sets
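The rule of thumb in the lines above (stateless pods get a Deployment controller, storage-backed workload pods get a StatefulSet) can be sketched as a tiny Go helper. Note this is purely illustrative: `PodSpec` and `controllerKindFor` are hypothetical names, not juju or Kubernetes API types.

```go
package main

import "fmt"

// PodSpec is a hypothetical, minimal description of a workload pod.
type PodSpec struct {
	Name         string
	NeedsStorage bool
}

// controllerKindFor applies the rule of thumb from the discussion:
// pods that use storage get a StatefulSet, stateless pods a Deployment.
func controllerKindFor(p PodSpec) string {
	if p.NeedsStorage {
		return "StatefulSet"
	}
	return "Deployment"
}

func main() {
	fmt.Println(controllerKindFor(PodSpec{Name: "mariadb", NeedsStorage: true})) // StatefulSet
	fmt.Println(controllerKindFor(PodSpec{Name: "web"}))                         // Deployment
}
```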
[00:44] <kelvin__> wallyworld, ic, we are using operatorPod() to ensure operator.
[00:56] <wallyworld> kelvin__: yeah, for now it's just a single pod. but we need to change that
[00:59] <kelvin__> yeah, i saw the TODO
[01:37] <veebers> wallyworld: sweet will do
[02:08] <veebers> wallyworld: I'm inclined to move a receiver method (member method, unsure of name) that doesn't need any state into a standalone (i.e. checkFiles(files map[string]string)). Any reason I shouldn't?
[02:09] <veebers> wallyworld: note, I'm changing that method from checkFiles(files map[string]string) -> checkFile(files string) as we iterate at a higher level now
[02:09] <babbageclunk> veebers: channelling wallyworld I reckon that sounds great
[02:10] <veebers> babbageclunk: sweet thanks babbageworld (or would that be wallyclunk?)
[02:10] <babbageclunk> ha, I like either of those
[02:10] <veebers> ^_^
[02:11] <veebers> Mr Babbageworld Wallyclunk.
[02:12] <babbageclunk> Almost as good as Engelbert Humperdinck
[02:12] <babbageclunk> Or Benedict Cumberbatch
[02:13] <veebers> hah, indeed!
[02:18] <thumper> veebers: the only time I don't is when I use the receiver to effectively namespace the function
[02:18] <veebers> thumper: ack, this is unexported helper function.
[02:19] <thumper> then I think it is fine
[02:19] <veebers> which , actually, does use stored state so I'll leave this one, but will do the other helper that doesn't ^_^
[02:29] <wallyworld> veebers: sounds ok, i'll look once pushed
[02:44] <veebers> wallyworld: I thought the use of params.DockerInfoResponse in state might be icky. Where is best to create a core struct that will be used internally?
[02:44] <wallyworld> veebers: in the top level core package
[02:44] <wallyworld> a subpackage under there
[02:45] <wallyworld> maybe resources
[05:34] <veebers> wallyworld, kelvin__ : develop doesn't build with the latest commit of charm.v6, are we still waiting for something to land in develop so it will?
[05:36] <wallyworld> veebers: you sure you ran godeps?
[05:36] <wallyworld> there were several upstream changes
[05:36]  * veebers makes sure
[05:37] <wallyworld> builds for me
[05:38] <veebers> wallyworld: godeps kicks charm.v6 to 0f7685c8 (the commit before kelvin__ merged his bits), but tip for charm.v6 is 6e010c5e0
[05:39] <wallyworld> ah, we may be waiting on a juju deps update
[05:39] <wallyworld> to bring in tip of v6
[05:39] <kelvin__> i will have to update a few deps later after bundlechanges done
[05:39] <wallyworld> veebers:  so your branch should leave deps untouched
[05:40] <veebers> kelvin__, wallyworld: ah ack makes sense, all good.
[05:40] <wallyworld> kelvin__: if you have landed upstream, you can land anytime
[05:42] <kelvin__> wallyworld, yeah, i will update them all together with the bundlechanges upgrade in juju.
[05:42] <wallyworld> sounds great ty
[05:43] <ybaumy> hmm i tried .. conjure-up stable/beta/edge and get the same behavior http://pastebin.centos.org/878811/
[05:43] <ybaumy> can anyone help me?
[05:44] <ybaumy> ah yes kubernetes canonical install going on
[05:45] <ybaumy> only the master  is not working
[05:45] <wallyworld> ybaumy: we'll need  a lot more info - k8s logs, juju logs, what substrate etc
[05:45] <ybaumy> im deploying on localhost
[05:45] <wallyworld> using lxd cloud?
[05:45] <ybaumy> wallyworld: localhost with lxd
[05:46] <wallyworld> which bundle? cdk or kubernetes-core?
[05:46] <ybaumy> canonical kubernetes
[05:47] <ybaumy> i enabled helm and prometheus
[05:47] <wallyworld> that's a fairly heavyweight bundle for use on localhost - you may be out of resources or something. but hard to say without more info
[05:47] <ybaumy> my localhost has 8 cores and 24GB ram
[05:48] <wallyworld> you'll need to look at k8s logs to see why it can't schedule that pod
[05:48] <ybaumy> k mom phone
[05:54] <thumper> wallyworld: fyi https://github.com/juju/juju/pull/8875
[05:54] <wallyworld> looking
[05:55] <thumper> wallyworld: I have a follow up one that adds the values to the model config and updates api server call
[05:55] <wallyworld> righto
[05:56] <thumper> ybaumy: what does juju status say?
[05:59] <kelvin__> wallyworld, would u mind having a quick look at this fix? https://github.com/juju/charm/pull/250/files  thanks
[05:59] <wallyworld> sure
[06:00] <ybaumy> thumper: juju status says waiting for kube-system pods to start
[06:01] <ybaumy> and as you can see in the pastebin .. the scheduler cannot start the pods
[06:01] <thumper> ybaumy: can you pastebin the output of juju status?
[06:02] <ybaumy> http://pastebin.centos.org/878896/
[06:03] <ybaumy> on beta there was 1.10.4 i think
[06:03] <ybaumy> and i had the same problem
[06:03] <ybaumy> cpu and memory i think are not the problem. the vm is pretty much idle right now
[06:05] <thumper> ybaumy: it looked like the kubernetes-master unit was still setting up
[06:05] <thumper> kubernetes-master/0*      waiting   idle   9        10.48.163.224   6443/tcp                                 Waiting for kube-system pods to start
[06:05] <thumper> everything else was up and running
[06:05] <wallyworld> thumper: lgtm
[06:05] <thumper> until that is marked as "active" you probably won't have much luck
[06:05] <thumper> if it has been a long time
[06:05] <thumper> there may be other issues
[06:05] <thumper> wallyworld: any ideas?
[06:05] <ybaumy> thumper: http://pastebin.centos.org/878811/ this goes on forever
[06:05] <wallyworld> kelvin__: lgtm, i wondered about that at the time but forgot to ask
[06:06] <kelvin__> wallyworld, ah. missed that place.. thanks.
[06:06] <wallyworld> thumper: ybaumy: i have seen the master node take a while to start usually. sometimes a few minutes
[06:06] <wallyworld> but if it doesn't start you need to look at k8s logs
[06:07] <thumper> Jun 29 05:42:28 juju-161da2-9 systemd[1]: Failed to reset devices.list on /system.slice/calico-node.service: Operation not permitted
[06:07] <thumper> that line looks a little suspicious
[06:07] <ybaumy> wallyworld: where can i find the k8 logs
[06:07] <wallyworld> i would try just the kubernetes bundle not the CDK one
[06:09] <wallyworld> ybaumy: you need to use kubectl to look at the particular artifact that is not working
[06:09] <ybaumy> ah ok
[06:09] <wallyworld> kubectl logs -f ....
[06:09] <ybaumy> does the kubernetes bundle also have those features like integrated helm and elasticsearch?
[06:09] <ybaumy> and prometheus
[06:11] <wallyworld> i would start simple by trying the smaller kubernetes-core bundle
[06:11] <wallyworld> just to ensure that works
[06:11] <ybaumy> ok
[06:13] <kelvin__> wallyworld, got another one for bundlechanges, https://github.com/juju/bundlechanges/pull/41/files thanks.
[06:13] <ybaumy> i wanted to give our team a quick way to deploy a cluster on one vm with a lot of features so i thought that canonical bundle would be nice
[06:13] <wallyworld> kelvin__: looking
[06:14] <wallyworld> ybaumy: it is nice, but we need to step back to try and see the source of the error
[06:14] <wallyworld> starting with a smaller bundle can help verify that the basics are all ok
[06:15] <ybaumy> installing.. will take a few minutes.. bbl. need to go to a meeting
[06:16] <wallyworld> and you can still add prometheus etc if you need to. but you may not need HA
[06:16] <wallyworld> ok, i'll be offline soon but other folks will be here
[06:16] <ybaumy> thanks wallyworld
[06:16] <wallyworld> we'll get it fixed for you, just may take a bit of debugging :-)
[06:17] <wallyworld> kelvin__: lgtm!
[06:18] <kelvin__> thanks, wallyworld
[07:07] <w0jtas> hello, anyone could help fixing keystone error? "ERROR Signing error: Unable to load certificate - ensure you have configured PKI with "keystone-manage pki_setup""
[07:08] <manadart> stickupkid: Hop on the guild HO when you are set up this morning.
[08:09] <manadart> stickupkid: Review? https://github.com/juju/juju/pull/8877
[08:22] <stickupkid> manadart: done
[08:22] <manadart> stickupkid: Ta.
[08:23] <stickupkid> manadart: on da hangout
[08:33] <manadart> stickupkid: Added that test.
[08:33] <stickupkid> manadart: approved the changes
[08:33] <manadart> stickupkid: Ta.
[09:22] <stickupkid> manadart: i've removed the remote from the lxd provider, but i'm not sure it's worth the effort removing it from the tools/lxdclient - there is some logic there that i'm unsure is even worth unpicking from it
[09:26] <manadart> stickupkid: Sure. Propose it and we can have a look.
[09:47] <stickupkid> manadart: sorry was manual testing https://github.com/juju/juju/pull/8878
[10:00] <w0jtas> how can i remove dead controller from juju ?
[10:12] <stickupkid> w0jtas: do you have access to the controller at all?
[10:21] <manadart> w0jtas: If the controller machine is dead/gone and kill-controller does not work, you can use "unregister". Note that that just removes it from your local registry and doesn't do resource cleanup.
[10:23] <w0jtas> manadart: thanks!
[10:23] <manadart> w0jtas: Sure thing.
[10:24] <w0jtas> after running juju upgrade-charm neutron-gateway i have error on status list: "hook failed: "upgrade-charm"" what can i do here to debug ?
[10:27] <w0jtas> ok nvmd, deleted setup and starting again
[11:07] <stickupkid> manadart: so i've removed the rawProvider from my branch now
[11:07] <manadart> stickupkid: Nice.
[11:08] <rick_h_> Morning party folks
[11:08] <stickupkid> manadart: so the only thing left is sorting out the lxdclient when connecting to local - i guess the storage and default network
[11:09] <stickupkid> manadart: i also introduced the server spec, there is a lot of testing to do here :D
[11:10] <stickupkid> manadart: also i think we should consider some of the internal namings whilst we're here tbh
[11:16] <manadart> stickupkid: HO?
[11:16] <manadart> rick_h_: Morning.
[11:16] <stickupkid> manadart: yeap
[11:17] <stickupkid> rick_h_: morning
[11:17]  * manadart heads to GUILD HO.
[11:23] <w0jtas> how can i change storage size in juju ? during install i've set 10GB :/
[11:34] <rick_h_> w0jtas: check out the root disk size constraint.
[11:34] <w0jtas> rick_h_ where ? :)
[11:34] <rick_h_> w0jtas: https://docs.jujucharms.com/2.3/en/reference-constraints
[11:35] <w0jtas> thanks
[11:36] <w0jtas> hmm "juju get-model-constraints" doesn't return anything
[11:37] <w0jtas> in nova-compute log i see "phys_disk=9GB used_disk=0GB"
[14:15] <manadart> hml: From the doc: "juju upgrade-series complete 4". What is the "4"?
[14:15] <hml> manadart: a machine number
[14:16] <manadart> hml: Ah, got it. Thanks.
[14:21] <PatrickD_> When trying to deploy kubernetes bundle xenial with 16.04-hwe kernel (Bionic still not supported), on maas (juju on bionic, maas on bionic, if that makes a difference), I get this error in juju status : hwe_kernel": ["xenial has no kernels available which meet min_hwe_kernel(ga-18.04)
[14:23] <tvansteenburgh> rick_h_: that mean anything to you? ^
[14:27] <PatrickD_> still trying to find out where it comes from (juju or maas ?). my default_min_hwe_kernel is ga-16.04 (from get-config name=default_min_hwe_kernel) can't find where ga-18.04 comes from.
[14:28] <rick_h_> tvansteenburgh: PatrickD_ that looks like a maas error
[14:28] <rick_h_> PatrickD_: run juju status --format=yaml
[14:28] <rick_h_> See if there's any other details in the machine section.
[14:29] <rick_h_> PatrickD_: might also have to check the debug log/controller for provisioning feedback there.
[14:31] <PatrickD_> same error there... still searching. will look at logs.
[14:45] <pmatulis> PatrickD_, yes, that's a MAAS error
[14:47] <pmatulis> check your kernel settings. easiest to do in their web UI
[14:47] <pmatulis> currently it looks like the minimum kernel is set at 18.04 but you are requesting 16.04
[14:49] <PatrickD_> there is no "min_hwe_kernel" only "default_min_hwe_kernel", which is right now set at ga-16.04, not even 18.04.
[14:50] <stickupkid> hml: looks good to me https://github.com/juju/juju/pull/8874
[14:51] <hml> stickupkid: ty
[14:51] <stickupkid> manadart: trying to work out how to test the error messaging from lxd errors is interesting
[14:51] <manadart> hml: Approved #8866, notwithstanding any implementation detail to be discussed with externalreality.
[14:52] <hml> manadart: ty -
[14:56] <PatrickD_> Looks like I will have to wait for Bionic charms, or for soemone in MAAS to help me with this issue :) Any ETA for Bionic Charms ?
[14:57] <manadart> stickupkid: Approved #8878 with one comment on naming.
[15:22] <stickupkid> manadart: naming is the hardest thing
[15:23] <stickupkid> manadart: officially NewXXX is supposed to return a pointer and MakeXXX not
[15:24] <stickupkid> manadart: but I guess that boat's sailed...
[15:26] <manadart> stickupkid: Not going to die on that hill. Land it.
[15:28] <stickupkid> manadart: I removed lxdclient completely and it all works for bootstrapping, i'll land my patch later
[15:32] <stickupkid> manadart: we broke the error messaging I believe, I'm going to try and reinstate it
[15:32] <manadart> stickupkid: OK,
[15:32] <stickupkid> manadart: by we, I probably mean me, when I moved to the new client for getting the local address
[15:50] <tvansteenburgh> PatrickD_: no ETA yet, but it'll be soon. likely next week
[17:26] <stickupkid> does anyone know what is supposed to happen with juju register localhost?
[17:26] <stickupkid> is that supposed to work
[17:28] <hml> stickupkid:  localhost being a controller name?
[17:28] <stickupkid> localhost being lxd
[17:29] <hml> stickupkid: is it supposed to work… juju register requires a controller name or registration string
[17:29] <stickupkid> just hangs on my end, i'm trying to work out if i've broken anything
[17:29] <hml> stickupkid: i wouldn’t have expected it to be possible.
[17:29] <stickupkid> it doesn't work on 2.4 either
[17:29] <hml> stickupkid: what are you wishing to accomplish with it
[17:30] <stickupkid> hml: see if it tries to grab any credentials... etc
[17:31] <hml> stickupkid: you need to bootstrap a controller - add a user to it - which will provide the registration string
[17:31] <stickupkid> hml: thought as much
[17:31] <stickupkid> hml: cool, well i'm done for the week - have a good weekend :)
[17:31] <hml> stickupkid: you too!