[07:10] Good morning Juju world!
[07:46] Hello there
[07:46] I have an environment where I have Juju in HA using 3 nodes
[07:47] unfortunately I need to remove 2 of those nodes now because of maintenance
[07:47] how can I remove those nodes manually and afterwards just create new ones?
[07:48] can I just do juju remove-machine on the controller model?
[07:53] Hi BlackDex, based on this: https://jujucharms.com/docs/2.1/controllers-ha
[07:54] BlackDex: "The only way to decrease is to create a backup of your environment and then restore the backup to a new environment, which starts with a single controller. You can then increase to the desired number."
[07:54] BlackDex: never done that
[09:14] kjackal: Well, one controller is still a controller, right
[09:14] I'll first create a backup, and remove the 2 machines
[09:28] hmm, shutting down the 2 nodes prevents me from using `juju` at all
[09:28] that's not really HA then :)
[12:56] o/ kjackal /buffer 2
[12:56] hello lazyPower
[12:57] what's this code language "/buffer 2"
[12:57] ?
[12:57] lazyPower: ^
[12:58] lol sorry
[12:58] early morning, weechat buffer switching
[13:25] hello o/ , where can I find the snapcraft recipe used to build the 'charm' snap?
=== stormmore is now known as Budgie^Smore
[14:46] hi here
[14:48] My CDK cluster running fully on AWS cannot manipulate PVs backed by EBS; I saw on GitHub that it's maybe tied to the "CLOUD_PROVIDER" var, do you have any documentation about it?
[14:50] Zic: we haven't released cloud integration points yet. We're still in a bare-metal-first representation of k8s. Work has been started to enable that in a Juju-native way, but it's still WIP
[14:51] thanks for the info :)
[14:51] lazyPower: do you have any recommendation for PersistentVolume @ AWS then? just a manually created EBS?
[14:51] (or EFS)
[14:52] Zic: I've been using EFS with great success
[14:52] but it's been only via testing, and I use the NFS volume type
[14:52] so ymmv - I haven't done battle-hardened testing here, just exploratory work
[14:54] lazyPower: does "cloud integration" just control whether the CDK cluster can auto-create EBS on demand, or also whether K8s can mount an EBS volume?
[14:54] because I just tried to create the EBS manually and mount it
[14:54] -> found invalid field AWSElasticBlockStore for v1.PersistentVolumeSpec
[14:54] Zic: no, this would be for cloud-level integrations. Giving your nodes IAM profiles to request things from the cloud on your behalf
[14:55] ebs volumes, ELB load balancers, et al
[14:55] ok, so I can normally mount manually-created EBS, I need to find why it did not work :(
[14:57] I don't need my cluster to auto-create EBS on demand, I just need the ability to mount them as PersistentVolume :D
[15:07] lazyPower: so just to be sure, you confirm that I can normally use this https://kubernetes.io/docs/concepts/storage/volumes/#awselasticblockstore, but I need to create the EBS volume manually (as specified in the link) first?
[15:08] or is mounting EBS also part of cloud integration and I need to stick with EFS
[15:08] (EFS is cool, but not for large micro-access)
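
A rough sketch of the EFS-as-NFS PersistentVolume approach lazyPower describes above; the filesystem ID, region, capacity and path are placeholders, not values from this conversation:

# Register an NFS-backed PersistentVolume that points at an EFS mount target
# (assumes the EFS security group already allows NFS traffic from the worker nodes).
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: fs-12345678.efs.us-east-1.amazonaws.com
    path: /
EOF

Pods would then bind to it through an ordinary PersistentVolumeClaim.
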
[15:19] another solution would be to add "--cloud-provider=aws" to /etc/default/kube-apiserver and kube-controller-manager on the master side, and to /etc/default/kubelet on the node side
[15:19] but Juju will erase this extra parameter
=== dpb1_ is now known as dpb1
=== narinder is now known as narinder_out
[17:06] Zic: you can also place that in the templates in /var/lib/juju/agents/unit-kubernetes-master-#/charm/templates. But note that once you modify those templates you're making a snowflake.
[17:07] Zic: but if you strawman out what you're looking for, and propose it, we can certainly review the contribution and help get it landed.
=== lazyPower is now known as lp|Kubecon
[19:18] petevg: cory_fu: curious about the matrix... http://paste.ubuntu.com/24262725/ is my status output for gce and aws. you see the core count for those matrix models? it's as if they deployed with no machine constraints. contrast that with the non-matrix models in the same job.
[19:18] petevg: cory_fu: I verified that a sample gce matrix unit did in fact have 1 CPU and 2GB RAM. that's not in accordance with any of the constraints set in my bundle.
[19:18] kwmonroe: matrix can spin up additional machines, remember.
[19:19] kwmonroe: but there might still be stuff missing when translating constraints in python-libjuju.
[19:20] ok petevg, so this isn't expected for the matrix first pass (vs chaos), right? as in, I should expect my bundle constraints to be honored by matrix?
[19:20] That section of the websocket API is kind of broken. (The websocket API doesn't like parsing the plan that it generates.)
[19:20] kwmonroe: yes. On the first pass, matrix should honor your constraints.
[19:21] thx petevg.. issue-a-coming :)
[19:24] kwmonroe: sounds like tons o' fun.
[20:00] Very confused. Set up an OpenStack cloud, put all my MAAS nodes in a zone, and tried to go HA -- ran into issues deploying nodes, so decided to back out. Now I can't "juju add-unit" anything: "cannot run instance: No available machine matches constraints: zone=default". Re-added all MAAS nodes to default; same issue. Any ideas before I nuke it all?
[20:02] Ah. Just noticed a couple hundred of these, too: ERROR juju.state database.go:231 using unknown collection "remoteApplications"
=== narinder_out is now known as narinder
[20:11] OK. Rebooting the Juju node still throws some errors, but it *did* result in nodes being deployed.
[22:01] I don't think you can do this yet, but can you tell Juju to exclude a set of directories when deploying from a local directory?
[22:08] cholcombe: not as of yet
[22:08] hatch: yeah, I thought so. that would be really useful
[22:09] yes very :)
[22:09] I can't find the bug right now....
[22:09] there is/was one
[22:09] oh cool
[22:09] someone else had the same thought :)
[22:10] cholcombe: it might be worth filing another since I'm not able to find it
[22:13] hatch: ok
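
For reference, a minimal sketch of how one might compare what a bundle asked for against what the cloud actually allocated, as in the matrix/constraints exchange above; the model name, machine ID and constraint values are illustrative only:

# Illustrative only; model name, machine ID and constraint values are made up.
# The bundle is assumed to declare something like "constraints: cores=4 mem=8G".
juju deploy ./bundle.yaml -m matrix-test
juju show-machine 0 -m matrix-test                        # the "hardware" field reports the provisioned cores/mem
juju status -m matrix-test --format=yaml | grep hardware  # quick check across all machines
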