[07:10] <kjackal> Good morning Juju world!
[07:46] <BlackDex> Hello there
[07:46] <BlackDex> I have an environment where I have juju running in HA with 3 nodes
[07:47] <BlackDex> unfortunately I need to remove 2 of those nodes now for maintenance
[07:47] <BlackDex> how can I remove those nodes manually and afterwards just create new ones?
[07:48] <BlackDex> can I just do juju remove-machine on the controller model?
[07:53] <kjackal_> Hi BlackDex, based on this: https://jujucharms.com/docs/2.1/controllers-ha
[07:54] <kjackal_> BlackDex: "The only way to decrease is to create a backup of your environment and then restore the backup to a new environment, which starts with a single controller. You can then increase to the desired number."
[07:54] <kjackal_> BlackDex: never done that
[09:14] <BlackDex> kjackal: Well, one controller is still a controller, right?
[09:14] <BlackDex> I'll first create a backup, then remove the 2 machines
[09:28] <BlackDex> hmm, shutting down the 2 nodes prevents me from using `juju` at all
[09:28] <BlackDex> that's not really HA then :)
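A rough sketch of the path discussed above, assuming a Juju 2.1 client (command names from the 2.x CLI; machine numbers and the backup file name are illustrative):

```
# Back up the controller model before touching anything
juju create-backup -m controller

# Per the docs kjackal quoted, shrinking HA isn't supported directly;
# if the surviving controller is healthy you may try removing the dead
# members (machine numbers are illustrative), otherwise restore:
juju remove-machine -m controller 1 2

# Restore path: bootstrap a fresh single controller from the backup
juju restore-backup -b --file=juju-backup-20170330.tar.gz

# Then grow back to the desired controller count
juju enable-ha -n 3
```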
[12:56] <lazyPower> o/ kjackal /buffer 2
[12:56] <kjackal> hello lazyPower
[12:57] <kjackal> what's this code language "/buffer 2"?
[12:57] <kjackal> lazyPower: ^
[12:58] <lazyPower> lol sorry
[12:58] <lazyPower> early morning, weechat buffer switching
[13:25] <freyes> hello o/ , where can I find the snapcraft recipe used to build the 'charm' snap?
[14:46] <Zic> hi here
[14:48] <Zic> My CDK cluster, running fully @ AWS, cannot manipulate PVs as EBS volumes; I saw on GitHub that it's maybe tied to the "CLOUD_PROVIDER" var, do you have any documentation about it?
[14:50] <lazyPower> Zic: we haven't released cloud integration points yet. We're still in a bare-metal-first representation of k8s. Work has started to enable that in a juju-native way, but it's still WIP
[14:51] <Zic> thanks for the info :)
[14:51] <Zic> lazyPower: do you have any recommendation for PersistentVolume @ AWS so? just a manually created EBS ?
[14:51] <Zic> (or EFS)
[14:52] <lazyPower> Zic: I've been using EFS with great success
[14:52] <lazyPower> but it's been only via testing, and I use the NFS volume type
[14:52] <lazyPower> so ymmv - I haven't done battle-hardened testing here, just exploratory work
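A minimal sketch of the EFS-as-NFS approach lazyPower describes, assuming a pre-created EFS filesystem (the filesystem ID, region, and size are hypothetical placeholders):

```
# Register a pre-created EFS filesystem as an NFS-backed PersistentVolume
kubectl create -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: fs-12345678.efs.us-east-1.amazonaws.com
    path: /
EOF
```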
[14:54] <Zic> lazyPower: does "cloud integration" just cover the fact that the CDK cluster can auto-create EBS volumes on demand, or also the fact that K8s can mount an EBS volume?
[14:54] <Zic> because I just tried to create the EBS manually and mount it
[14:54] <Zic> -> found invalid field AWSElasticBlockStore for v1.PersistentVolumeSpec
[14:54] <lazyPower> Zic: no, this would be for cloud level integrations. Giving your nodes IAM profiles to request things from the cloud on your behalf
[14:55] <lazyPower> ebs volumes, ELB load balancers, et-al
[14:55] <Zic> ok, so I can normally mount a manually-created EBS volume; I need to find out why it didn't work :(
[14:57] <Zic> I don't need my cluster to auto-create EBS on-demand, I just need the ability to mount them as PersistentVolume :D
[15:07] <Zic> lazyPower: so just to be sure, you confirm that I can normally use this https://kubernetes.io/docs/concepts/storage/volumes/#awselasticblockstore, but I need to create the EBS volume manually beforehand (as specified in the link)?
[15:08] <Zic> or is mounting EBS also part of cloud-integration, so I need to stick with EFS?
[15:08] <Zic> (EFS is cool, but not for lots of small accesses)
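One likely cause of the "invalid field AWSElasticBlockStore" error above: manifest keys are lower-camelCase, so the field must be spelled awsElasticBlockStore. A minimal sketch with a hypothetical pre-created volume (whether the attach works without the cloud-provider flag is exactly the open question in this thread):

```
# Mount a manually created EBS volume as a PersistentVolume
# (volume ID and size are hypothetical; the volume must live in the
# same availability zone as the node that mounts it)
kubectl create -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ebs-pv
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-0123456789abcdef0
    fsType: ext4
EOF
```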
[15:19] <Zic> another solution would be to add "--cloud-provider=aws" to /etc/default/kube-apiserver and /etc/default/kube-controller-manager on the master side, and to /etc/default/kubelet on the node side
[15:19] <Zic> but Juju will erase this extra parameter
[17:06] <lazyPower> Zic: you can also place that in the templates in /var/lib/juju/agents/unit-kubernetes-master-#/charm/templates.  But note that once you modify those templates you're making a snowflake.
[17:07] <lazyPower> Zic: but if you strawman out what you're looking for, and propose it, we can certainly review the contribution and help get it landed.
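A sketch of the workflow Zic and lazyPower outline, assuming systemd service names matching the /etc/default file names (the service names and the args variable inside each file are charm-specific assumptions; inspect the files first):

```
# Master units: append --cloud-provider=aws to the daemon args line in
#   /etc/default/kube-apiserver
#   /etc/default/kube-controller-manager
# then restart the services (names assumed to match the file names):
sudo systemctl restart kube-apiserver kube-controller-manager

# Worker units: same flag in /etc/default/kubelet, then:
sudo systemctl restart kubelet

# Per lazyPower: to survive charm re-renders, the same change must also
# go into the templates under
#   /var/lib/juju/agents/unit-kubernetes-master-#/charm/templates
# which makes the deployment a "snowflake".
```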
[19:18] <kwmonroe> petevg: cory_fu:  curious about the matrix... http://paste.ubuntu.com/24262725/ is my status output for GCE and AWS.  you see the core count for those matrix models?  it's as if they deployed with no machine constraints.  contrast that with the non-matrix models in the same job.
[19:18] <kwmonroe> petevg: cory_fu:  I verified that a sample GCE matrix unit did in fact have 1 CPU and 2GB RAM.  that's not in accordance with any of the constraints set in my bundle.
[19:18] <petevg> kwmonroe: matrix can spin up additional machines, remember.
[19:19] <petevg> kwmonroe: but there might still be stuff missing when translating constraints in python-libjuju.
[19:20] <kwmonroe> ok petevg, so this isn't expected for the matrix first pass (vs chaos), right?  as in, I should expect my bundle constraints to be honored by matrix?
[19:20] <petevg> That section of the websocket api is kind of broken. (The websocket api doesn't like parsing the plan that it generates.)
[19:20] <petevg> kwmonroe: yes. On the first pass, matrix should honor your constraints.
[19:21] <kwmonroe> thx petevg.. issue-a-coming :)
[19:24] <petevg> kwmonroe: sounds like tons o' fun.
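For reference, constraints in a bundle of that era look roughly like this (charm, unit count, and values are illustrative); the report above is that matrix deployments behaved as if they were dropped:

```
# Illustrative bundle with explicit machine constraints
cat > bundle.yaml <<EOF
services:            # "applications:" in later bundle formats
  mysql:
    charm: cs:mysql
    num_units: 1
    constraints: cores=4 mem=8G
EOF
juju deploy ./bundle.yaml
```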
[20:00] <ken_> Very confused.  Set up an Openstack cloud, put all my MAAS nodes in a zone, and tried to go HA -- ran into issues deploying nodes, so decided to back out.  Now I can't "juju add-unit" anything: "cannot run instance: No available machine matches constraints: zone=default".  Re-added all MAAS nodes to default; same issue.  Any ideas before I nuke it all?
[20:02] <ken_> Ah.  Just noticed a couple hundred of these, too: ERROR juju.state database.go:231 using unknown collection "remoteApplications"
[20:11] <ken_> OK.  Rebooting the Juju node still throws some errors, but it *did* result in nodes being deployed.
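A hedged starting point for debugging the zone=default error with stock Juju 2.x commands (the application name is a placeholder; the stale value may also come from a placement directive or from MAAS itself rather than a Juju constraint):

```
# Check whether a zone constraint is set at the model level
juju get-model-constraints

# Check the affected application's constraints (name is a placeholder)
juju get-constraints my-app

# Cross-check Juju's view of the machines, then verify in MAAS that the
# nodes are back in the "default" zone and in the Ready state
juju status
```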
[22:01] <cholcombe> I don't think you can do this yet, but can you tell juju to exclude a set of directories when deploying from a local directory?
[22:08] <hatch> cholcombe not as of yet
[22:08] <cholcombe> hatch: yeah, I thought so.  that would be really useful
[22:09] <hatch> yes very :)
[22:09] <hatch> I can't find the bug right now....
[22:09] <hatch> there is/was one
[22:09] <cholcombe> oh cool
[22:09] <cholcombe> someone else had the same thought :)
[22:10] <hatch> cholcombe it might be worth filing another since I'm not able to find it
[22:13] <cholcombe> hatch: ok
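Since there's no built-in exclude yet (per this exchange), a minimal workaround sketch: stage a filtered copy of the charm and deploy that (source path and exclude patterns are illustrative):

```
# Copy the charm tree into a staging directory, dropping unwanted dirs
rsync -a --delete --exclude '.git/' --exclude 'tmp/' \
    ~/charms/mycharm/ /tmp/mycharm-build/

# Deploy the filtered copy instead of the working tree
juju deploy /tmp/mycharm-build
```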