[00:26] veebers: i left a few comments, let me know if you need further clarification
[00:36] wallyworld, LGTM, it's much nicer now to ensure our operator app uses a StatefulSet. Thanks.
[00:37] kelvin__: awesome, thank you. we still use deployment controllers for stateless pods though
[00:37] that's my understanding of best practice
[00:37] also, the operator pod is still only a single pod, not yet managed
[00:38] the workload pods using storage will be stateful sets
[00:44] wallyworld, ic, we are using operatorPod() to ensure the operator.
[00:56] kelvin__: yeah, for now it's just a single pod. but we need to change that
[00:59] yeah, i saw the TODO
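The split described above follows the usual Kubernetes guidance: Deployments for stateless pods, StatefulSets for anything that owns storage. A minimal sketch of that rule using client-go types — illustrative only, not Juju's actual caas provider code; the function names and the volume claim are invented:

```go
package caasexample

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// statefulWorkload: pods that own storage get a StatefulSet, whose
// volumeClaimTemplates give each replica a PVC that survives rescheduling.
func statefulWorkload(name string, replicas int32, pod corev1.PodSpec) *appsv1.StatefulSet {
	return &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &replicas,
			ServiceName: name,
			Template:    corev1.PodTemplateSpec{Spec: pod},
			VolumeClaimTemplates: []corev1.PersistentVolumeClaim{
				{ObjectMeta: metav1.ObjectMeta{Name: name + "-storage"}},
			},
		},
	}
}

// statelessWorkload: stateless pods get a plain Deployment, which is free
// to replace replicas without any identity or storage bookkeeping.
func statelessWorkload(name string, replicas int32, pod corev1.PodSpec) *appsv1.Deployment {
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Template: corev1.PodTemplateSpec{Spec: pod},
		},
	}
}
```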
[01:37] wallyworld: sweet will do
[02:08] wallyworld: I'm inclined to move a receiver method (member method, unsure of name) that doesn't need any state into a standalone function (i.e. checkFiles(files map[string]string)). Any reason I shouldn't?
[02:09] wallyworld: note, I'm changing that method from checkFiles(files map[string]string) -> checkFile(files string) as we iterate at a higher level now
[02:09] veebers: channelling wallyworld, I reckon that sounds great
[02:10] babbageclunk: sweet thanks babbageworld (or would that be wallyclunk?)
[02:10] ha, I like either of those
[02:10] ^_^
[02:11] Mr Babbageworld Wallyclunk.
[02:12] Almost as good as Engelbert Humperdinck
[02:12] Or Benedict Cumberbatch
[02:13] hah, indeed!
[02:18] veebers: the only time I don't is when I use the receiver to effectively namespace the function
[02:18] thumper: ack, this is an unexported helper function.
[02:19] then I think it is fine
[02:19] which, actually, does use stored state so I'll leave this one, but will do the other helper that doesn't ^_^
[02:29] veebers: sounds ok, i'll look once pushed
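A minimal sketch of the refactor veebers describes: a method that never reads its receiver's state becomes a package-level function, and since the caller now iterates, the map-taking checkFiles collapses to a single-file checkFile. The signatures come from the chat; the surrounding type and validation logic are hypothetical:

```go
package helpers // hypothetical package; only the checkFiles/checkFile names come from the chat

import "fmt"

// Before: checkFiles hangs off a type even though it never touches c's state.
type checker struct {
	// ...fields used by other, genuinely stateful methods...
}

func (c *checker) checkFiles(files map[string]string) error {
	for name, content := range files {
		if content == "" {
			return fmt.Errorf("file %q is empty", name)
		}
	}
	return nil
}

// After: a standalone helper; the caller iterates and passes one file's
// content at a time, so no receiver (and no map) is needed.
func checkFile(content string) error {
	if content == "" {
		return fmt.Errorf("file is empty")
	}
	return nil
}
```

As thumper notes later in the log, the exception is when the receiver usefully namespaces the function, or when it genuinely reads stored state.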
[02:44] wallyworld: I thought the use of params.DockerInfoResponse in state might be icky. Where is the best place to create a core struct that will be used internally?
[02:44] veebers: in the top level core package
[02:44] a subpackage under there
[02:45] maybe resources
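The suggestion is to keep the wire type in params and give state its own internal type under core/ (perhaps core/resources), converting at the API boundary. A hypothetical sketch — the struct name and fields are invented for illustration, not Juju's actual code:

```go
// Hypothetical: core/resources/docker.go
package resources

// DockerImageDetails is an internal representation of the information
// carried over the wire by params.DockerInfoResponse, so that state can
// depend on core/resources instead of the API params package.
type DockerImageDetails struct {
	RegistryPath string
	Username     string
	Password     string
}
```

Conversion to and from params.DockerInfoResponse would then live in the apiserver facade, keeping state free of wire types.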
[05:34] wallyworld, kelvin__: develop doesn't build with the latest commit of charm.v6, are we still waiting for something to land in develop so it will?
[05:36] veebers: you sure you ran godeps?
[05:36] there were several upstream changes
[05:36] * veebers makes sure
[05:37] builds for me
[05:38] wallyworld: godeps kicks charm.v6 to 0f7685c8 (the commit before kelvin__ merged his bits), but tip for charm.v6 is 6e010c5e0
[05:39] ah, we may be waiting on a juju deps update
[05:39] to bring in tip of v6
[05:39] i will have to update a few deps later after bundlechanges is done
[05:39] veebers: so your branch should leave deps untouched
[05:40] kelvin__, wallyworld: ah ack, makes sense, all good.
[05:40] kelvin__: if you have landed upstream, you can land anytime
[05:42] wallyworld, yeah, i will update them all together with the bundlechanges upgrade in juju.
[05:42] sounds great ty
[05:43] hmm, i tried conjure-up stable/beta/edge and get the same behavior http://pastebin.centos.org/878811/
[05:43] can anyone help me?
[05:44] ah yes, kubernetes canonical install going on
[05:45] only the master is not working
[05:45] ybaumy: we'll need a lot more info - k8s logs, juju logs, what substrate etc
[05:45] im deploying on localhost
[05:45] using lxd cloud?
[05:45] wallyworld: localhost with lxd
[05:46] which bundle? cdk or kubernetes-core?
[05:46] canonical kubernetes
[05:47] i enabled helm and prometheus
[05:47] that's a fairly heavyweight bundle for use on localhost - you may be out of resources or something, but hard to say without more info
[05:47] my localhost has 8 cores and 24GB ram
[05:48] you'll need to look at k8s logs to see why it can't schedule that pod
[05:48] k mom phone
[05:54] wallyworld: fyi https://github.com/juju/juju/pull/8875
[05:54] looking
[05:55] wallyworld: I have a follow-up one that adds the values to the model config and updates the api server call
[05:55] righto
[05:56] ybaumy: what does juju status say?
[05:59] wallyworld, would u mind having a quick look at this fix? https://github.com/juju/charm/pull/250/files thanks
[05:59] sure
[06:00] thumper: juju status says waiting for kube-system pods to start
[06:01] and as you can see in the pastebin, the scheduler cannot start the pods
[06:01] ybaumy: can you pastebin the output of juju status?
[06:02] http://pastebin.centos.org/878896/
[06:03] on beta there was 1.10.4 i think
[06:03] and i had the same problem
[06:03] cpu and memory i think are not the problem. the vm is pretty much idle right now
[06:05] ybaumy: it looked like the kubernetes-master unit was still setting up
[06:05] kubernetes-master/0* waiting idle 9 10.48.163.224 6443/tcp Waiting for kube-system pods to start
[06:05] everything else was up and running
[06:05] thumper: lgtm
[06:05] until that is marked as "active" you probably won't have much luck
[06:05] if it has been a long time
[06:05] there may be other issues
[06:05] wallyworld: any ideas?
[06:05] thumper: http://pastebin.centos.org/878811/ this goes on forever
[06:05] kelvin__: lgtm, i wondered about that at the time but forgot to ask
[06:06] wallyworld, ah, missed that place.. thanks.
[06:06] thumper: ybaumy: i have seen the master node take a while to start, sometimes a few minutes
[06:06] but if it doesn't start you need to look at the k8s logs
[06:07] Jun 29 05:42:28 juju-161da2-9 systemd[1]: Failed to reset devices.list on /system.slice/calico-node.service: Operation not permitted
[06:07] that line looks a little suspicious
[06:07] wallyworld: where can i find the k8s logs
[06:07] i would try just the kubernetes bundle, not the CDK one
[06:09] ybaumy: you need to use kubectl to look at the particular artifact that is not working
[06:09] ah ok
[06:09] kubectl logs -f ....
[06:09] does the kubernetes bundle also have those features like integrated helm and elasticsearch?
[06:09] and prometheus
[06:11] i would start simple by trying the smaller kubernetes-core bundle
[06:11] just to ensure that works
[06:11] ok
[06:13] wallyworld, got another one for bundlechanges, https://github.com/juju/bundlechanges/pull/41/files thanks.
[06:13] i wanted to give our team a quick way to deploy a cluster on one vm with a lot of features, so i thought the canonical bundle would be nice
[06:13] kelvin__: looking
[06:14] ybaumy: it is nice, but we need to step back to try and see the source of the error
[06:14] starting with a smaller bundle can help verify that the basics are all ok
[06:15] installing.. will take a few minutes.. bbl, need to go to a meeting
[06:16] and you can still add prometheus etc if you need to. but you may not need HA
[06:16] ok, i'll be offline soon but other folks will be here
[06:16] thanks wallyworld
[06:16] we'll get it fixed for you, it just may take a bit of debugging :-)
[06:17] kelvin__: lgtm!
[06:18] thanks, wallyworld
[07:07] hello, could anyone help fix a keystone error? "ERROR Signing error: Unable to load certificate - ensure you have configured PKI with "keystone-manage pki_setup""
[07:08] stickupkid: Hop on the guild HO when you are set up this morning.
[08:09] stickupkid: Review? https://github.com/juju/juju/pull/8877
[08:22] manadart: done
[08:22] stickupkid: Ta.
[08:23] manadart: on da hangout
[08:33] stickupkid: Added that test.
[08:33] manadart: approved the changes
[08:33] stickupkid: Ta.
[09:22] manadart: i've removed the remote from the lxd provider, but i'm not sure it's worth the effort removing it from tools/lxdclient - there is some logic there that i'm unsure is even worth prising out of it
[09:26] stickupkid: Sure. Propose it and we can have a look.
[09:47] manadart: sorry, was manual testing https://github.com/juju/juju/pull/8878
[10:00] how can i remove a dead controller from juju?
[10:12] w0jtas: do you have access to the controller at all?
[10:21] w0jtas: If the controller machine is dead/gone and kill-controller does not work, you can use "unregister". Note that that just removes it from your local registry and doesn't do resource cleanup.
[10:23] manadart: thanks!
[10:23] w0jtas: Sure thing.
[10:24] after running juju upgrade-charm neutron-gateway i have an error in the status list: "hook failed: "upgrade-charm"" - what can i do here to debug?
[10:27] ok nvm, deleted the setup and starting again
[11:07] manadart: so i've removed the rawProvider from my branch now
[11:07] stickupkid: Nice.
[11:08] Morning party folks
[11:08] manadart: so the only thing left is sorting out the lxdclient when connecting to local - i guess the storage and default network
[11:09] manadart: i also introduced the server spec, there is a lot of testing to do here :D
[11:10] manadart: also i think we should consider some of the internal naming whilst we're here tbh
[11:16] stickupkid: HO?
[11:16] rick_h_: Morning.
[11:16] manadart: yeap
[11:17] rick_h_: morning
[11:17] * manadart heads to GUILD HO.
[11:23] how can i change the storage size in juju? during install i set 10GB :/
[11:34] w0jtas: check out the root disk size constraint.
[11:34] rick_h_: where? :)
[11:34] w0jtas: https://docs.jujucharms.com/2.3/en/reference-constraints
[11:35] thanks
[11:36] hmm, "juju get-model-constraints" doesn't return anything
[11:37] in the nova-compute log i see "phys_disk=9GB used_disk=0GB"
[14:15] hml: From the doc: "juju upgrade-series complete 4". What is the "4"?
[14:15] manadart: a machine number
[14:16] hml: Ah, got it. Thanks.
[14:21] When trying to deploy the kubernetes bundle on xenial with the 16.04-hwe kernel (Bionic still not supported), on maas (juju on bionic, maas on bionic, if that makes a difference), I get this error in juju status: hwe_kernel": ["xenial has no kernels available which meet min_hwe_kernel(ga-18.04)
[14:23] rick_h_: that mean anything to you? ^
[14:27] still trying to find out where it comes from (juju or maas?). my default_min_hwe_kernel is ga-16.04 (from get-config name=default_min_hwe_kernel), can't find where ga-18.04 comes from.
[14:28] tvansteenburgh: PatrickD_ that looks like a maas error
[14:28] PatrickD_: run juju status --format=yaml
[14:28] see if there are any other details in the machine section.
[14:29] PatrickD_: might also have to check the debug log/controller for provisioning feedback there.
[14:31] same error there... still searching. will look at logs.
[14:45] PatrickD_, yes, that's a MAAS error
[14:47] check your kernel settings. easiest to do in their web UI
[14:47] currently it looks like the minimum kernel is set at 18.04 but you are requesting 16.04
[14:49] there is no "min_hwe_kernel", only "default_min_hwe_kernel", which is right now set at ga-16.04, not even 18.04.
[14:50] hml: looks good to me https://github.com/juju/juju/pull/8874
[14:51] stickupkid: ty
[14:51] manadart: trying to work out how to test the error messaging from lxd errors is interesting
[14:51] hml: Approved #8866, notwithstanding any implementation detail to be discussed with externalreality.
[14:52] manadart: ty -
[14:56] Looks like I will have to wait for Bionic charms, or for someone in MAAS to help me with this issue :) Any ETA for Bionic charms?
[14:57] stickupkid: Approved #8878 with one comment on naming.
[15:22] manadart: naming is the hardest thing
[15:23] manadart: isn't it official that NewXXX is supposed to return a pointer and MakeXXX not?
[15:24] manadart: but I guess that boat's sailed...
[15:26] stickupkid: Not going to die on that hill. Land it.
[15:28] manadart: I removed lxdclient completely and it all works for bootstrapping, i'll land my patch later
[15:32] manadart: we broke the error messaging I believe, I'm going to try and reinstate it
[15:32] stickupkid: OK.
[15:32] manadart: by "we", I probably mean me, when I moved to the new client for getting the local address
[15:50] PatrickD_: no ETA yet, but it'll be soon. likely next week
=== mup_ is now known as mup
[17:26] does anyone know what is supposed to happen with juju register localhost?
[17:26] is that supposed to work?
[17:28] stickupkid: localhost being a controller name?
[17:28] localhost being lxd
[17:29] stickupkid: is it supposed to work… juju register requires a controller name or registration string
[17:29] just hangs on my end, i'm trying to work out if i've broken anything
[17:29] stickupkid: i wouldn’t have expected it to be possible.
[17:29] it doesn't work on 2.4 either
[17:29] stickupkid: what are you wishing to accomplish with it?
[17:30] hml: see if it tries to grab any credentials... etc
[17:31] stickupkid: you need to bootstrap a controller and add a user to it, which provides the registration string
[17:31] hml: thought as much
[17:31] hml: cool, well i'm done for the week - have a good weekend :)
[17:31] stickupkid: you too!
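On the NewXXX versus MakeXXX convention raised at 15:23: the loose Go idiom, echoing the built-in make for slices, maps and channels, is that New* constructors return a pointer while Make* constructors return a value. A quick illustration with a made-up Server type, not code from the branch under review:

```go
package example

// Server is a stand-in type for illustration.
type Server struct {
	addr string
}

// NewServer follows the convention that New* constructors hand back a
// pointer, usually because the value is meant to be shared or mutated.
func NewServer(addr string) *Server {
	return &Server{addr: addr}
}

// MakeServer follows the convention that Make* constructors hand back a
// plain value, like the built-in make for slices, maps and channels.
func MakeServer(addr string) Server {
	return Server{addr: addr}
}
```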