[00:52] <wallyworld> babbageclunk: i may have found another --force case. it seems an app with a hook error in the install hook cannot be removed. not sure if you've come across that before
[00:53] <babbageclunk> blech
[00:53] <babbageclunk> no, I haven't seen that. At least it's easy to reproduce!
[00:53] <wallyworld> yeah, i had the ubuntu charm locally with an exit 1 in the install hook
[00:53] <wallyworld> and remove --force didn't work
[00:54] <wallyworld> will add it to the queue of things to look at
[00:54] <babbageclunk> ok
[00:55] <wallyworld> kelvinliu: so yeah, can confirm that it works for iaas, i'll try and investigate
[00:56] <kelvinliu> so something must be missing somewhere for caas
[02:30] <kelvinliu> wallyworld: got this PR to fix encoding issue on secret data, +1 plz https://github.com/juju/juju/pull/10621  thanks!
[02:30] <wallyworld> ok
[02:36] <wallyworld> kelvinliu: why is it decoding the base64 data? the k8s secret spec Data attribute expects encoded data doesn't it?
[02:37] <kelvinliu> k8s always tries to encode
[02:37] <kelvinliu> again
[02:39] <wallyworld> hmmmm, that seems to be in conflict with the comment on the Data attribute?
[02:39] <wallyworld> 	// Data contains the secret data. Each key must consist of alphanumeric
[02:39] <wallyworld> 	// characters, '-', '_' or '.'. The serialized form of the secret data is a
[02:39] <wallyworld> 	// base64 encoded string, representing the arbitrary (possibly non-string) data value here.
[02:40] <wallyworld> the yaml examples appear to pass in encoded data
[02:45] <kelvinliu> i tested
[02:46] <kelvinliu> if we pass encoded  to data directly, k8s will encode it again
[02:47] <wallyworld> i guess i don't understand why k8s struct needs to be created with unencoded values for both Data and StringData when the yaml doesn't
[02:48] <wallyworld> ie
[02:48] <wallyworld> stringData:
[02:48] <wallyworld>   username: administrator
[02:48] <wallyworld> vs
[02:48] <wallyworld> data:
[02:48] <wallyworld>   username: YWRtaW5pc3RyYXRvcg==
[02:48] <wallyworld> Where YWRtaW5pc3RyYXRvcg== decodes to administrator
[02:49] <kelvinliu> when u kubectl get -o yaml, the stringData is merged into Data and encoded as well
[02:50] <kelvinliu> after applied, no stringData and no raw strings anymore, and all encoded.
[02:50] <kelvinliu> StringData is just a helper func
[02:51] <wallyworld> so you say "encode"
[02:51] <wallyworld> but the PR decodes
[02:51] <kelvinliu> we decode, then k8s encode
[02:51] <wallyworld> ok
[02:52] <kelvinliu> if we don't decode, kubectl get result will be double encoded
[02:53] <wallyworld> seems weird to me but if that's how k8s works then who am i to argue. lgtm, ty
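The round trip being debated above can be reproduced outside k8s. The log already establishes that `YWRtaW5pc3RyYXRvcg==` is the base64 form of `administrator`; a minimal Python sketch (illustrative only, not juju or client-go code) shows why passing an already-encoded value as a `Data` entry leads to the double encoding kelvinliu describes:

```python
import base64

raw = "administrator"
encoded = base64.b64encode(raw.encode()).decode()
assert encoded == "YWRtaW5pc3RyYXRvcg=="

# Secret Data values are raw bytes; the client serializes them to base64
# on the wire. Feeding in the already-encoded string as if it were the
# raw value means it gets encoded a second time:
double_encoded = base64.b64encode(encoded.encode()).decode()

# A single decode of the double-encoded value yields the base64 string,
# not the original secret:
assert base64.b64decode(double_encoded).decode() == encoded

# Hence the PR's approach: decode first, let k8s do the one encode pass.
assert base64.b64decode(encoded).decode() == raw
```

This matches the observation at 02:52: without the decode step, `kubectl get` shows double-encoded values.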
[02:55] <kelvinliu> thx
[02:58] <wallyworld> kelvinliu: i am still testing the resource upgrade thing. seems to work on 2.7, so i'll test again on 2.6 to see what's happening
[02:58] <kelvinliu> so it works on caas 2.7?
[03:04] <wallyworld> i think so
[03:04] <wallyworld> but i need to check more since it seems broken earlier
[03:11] <wallyworld> kelvinliu: ah, it works only for file type resources
[03:13] <kelvinliu> ah
[03:24] <wallyworld> kelvinliu: found the issue - there's a method which uses the resource fingerprint to see if the resource has changed, but for oci image resources, the fingerprint is always "" as we don't have the oci image to calculate the hash from
[03:24] <kelvinliu> should we simply use the image path?
[03:25] <kelvinliu> full path
[03:25] <wallyworld> something like that, looking into it
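The fix sketched in this exchange could look like the following. This is a hypothetical helper, not juju's actual API; the function name and dict keys are illustrative. The idea from the log: file resources carry a content fingerprint, but oci-image resources have an empty fingerprint (no image blob to hash), so fall back to comparing the full image path:

```python
def resource_changed(old, new):
    """Decide whether a charm resource changed between revisions.

    Hypothetical sketch of the approach discussed: when both revisions
    carry a non-empty content fingerprint (file resources), compare the
    fingerprints; for oci-image resources the fingerprint is always "",
    so compare the full registry path instead.
    """
    if old["fingerprint"] and new["fingerprint"]:
        return old["fingerprint"] != new["fingerprint"]
    return old["path"] != new["path"]
```

With this fallback, an oci-image resource whose registry path changes from `registry.example.com/app:1` to `registry.example.com/app:2` (hypothetical paths) is detected as changed even though both fingerprints are empty.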
[03:26] <wallyworld> babbageclunk: do you have vsphere creds handy? any chance you could pull joe's branch and smoke test bootstrap, deploy, ssh on vsphere?
[03:26] <babbageclunk> yeah, I do - sure
[03:27] <wallyworld> i've done k8s and azure and it looks good
[03:27] <wallyworld> ty
[03:27] <babbageclunk> looking now
[03:27] <wallyworld> leave a comment on his PR
[03:27] <wallyworld> i tested bootstrap, deploy, ssh, add-unit
[05:51] <kelvinliu> hi wallyworld: got 2mins to discuss metadata change?
[06:13] <wallyworld> kelvinliu: sure
[06:14] <kelvinliu> HO?
[06:25] <kelvinliu> wallyworld: wait, one more thing..
[06:25] <wallyworld> HO?
[06:26] <kelvinliu> yes, plz
[07:37] <hpidcock> wallyworld: tomorrow is fine but it's ready https://github.com/juju/juju/pull/10606
[07:38] <wallyworld> ok
[07:38] <wallyworld> heading out to dinner soon so will look later or tmw
[07:38] <hpidcock> wallyworld: tomorrow sounds great enjoy your night
[07:39] <wallyworld> will do, son's gf's b'day
[08:21] <stickupkid> manadart, in regards to thumper email, isn't this the availability zone issue that I've got a PR for
[08:22] <stickupkid> manadart, one that i keep opening and closing and never merging
[08:23] <manadart> stickupkid: Not sure. I was going to diff 2.6 vs develop to see why edge is OK, but 2.6 not...
[08:24] <stickupkid> manadart, I did "git diff develop 2.6 provider/lxd"
[08:24] <stickupkid> manadart, and "container/lxd"
[08:24] <stickupkid> manadart, didn't show much tbh, other than the new packing, network and ineffassign stuff
[08:25] <achilleasa> manadart: the packing stuff landed yesterday so I would recommend diffing before that
[08:25] <achilleasa> s/packing/packaging/
[08:26] <achilleasa> btw, is that error from juju or from lxd?
[08:26] <stickupkid> achilleasa, lxd
[08:26] <stickupkid> achilleasa, it's because it's trying to unmarshal an error if I remember correctly
[08:27] <achilleasa> Could it be resolved if we delete the cached images?
[08:27] <achilleasa> I think I've stumbled on that error before
[08:27] <stickupkid> dunno
[08:27] <stickupkid> probably
[08:28] <manadart> Dependency is the same too...
[08:28] <stickupkid> yeah first thing i checked
[08:52] <stickupkid> anyone else hit the "GOCACHE isn't set" error when using go 1.12? probably old news
[09:26] <stickupkid> anyone going to forward-port 2.6 into develop, or shall I do it?
[15:42] <pepperhead> Good Morning! o/
[15:42] <pepperhead> Well, nearing afternoon already.
[15:42] <pepperhead> Quick Question hopefully: Are there instructions or capability to run a conjure-up deploy of
[15:45] <pepperhead> Quick Question hopefully: Are there instructions or capability of a charm bundle to run a deploy of OpenStack on four machines as nodes that ONLY have one drive and one NIC port avail? I now have maas w/juju bootstrapped on two other machines. Six machines total.
[15:47] <pepperhead> Sorry, asked in conjure-up as well. But they are more of a one machine solution. Moved the Q over here.
[16:30] <atdprhs> hello everyone, juju deploy works perfect with everything except when I try to deploy ceph-osd, I usually get "failed to start machine 15 (failed to acquire node: No available machine matches constraints: [('agent_name', ['623fe854-6d68-463d-8337-73ce136cc4bb']), ('storage', ['root:0,0:32,1:32,2:8']), ('zone', ['default'])] (resolved to
[16:30] <atdprhs> "storage=root:0,0:32,1:32,2:8 zone=default")), retrying in 10s (10 more attempts)"
[16:30] <atdprhs> This is following https://ubuntu.com/kubernetes/docs/storage / `juju deploy -n 3 ceph-osd --storage osd-devices=32G,2 --storage osd-journals=8G,1`
[16:33] <pepperhead> atdprhs I think Ceph needs a second drive, or at least one specified in the config? I am trying to work around the ceph drive req as well.
[16:34] <atdprhs> thanks pepperhead, do you know which config I should be checking in this regards?
[16:35] <atdprhs> after 10 attempts, now it's `No available machine matches constraints: [('agent_name', ['623fe854-6d68-463d-8337-73ce136cc4bb']), ('storage', ['root:0,0:32,1:32,2:8']), ('zone', ['default'])] (resolved to "storage=root:0,0:32,1:32,2:8 zone=default")`
[16:38] <atdprhs> question: what I want to know is whether `juju deploy -n 3 ceph-osd --storage osd-devices=32G,2 --storage osd-journals=8G,1` can be run as `juju deploy -n 3 ceph-osd --storage osd-devices=<storagePool>,32G,2 --storage osd-journals=<storagePool>,8G,1`
[16:39] <atdprhs> please correct me if I'm mistaken, storage pool can be obtained via `virsh pool-list`
[16:40] <atdprhs> the reason I'm looking into the storage pool is that i'm thinking maybe that's why it's failing?
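On the `--storage` syntax question above: juju storage constraints are a comma-separated mix of pool name, size (with a unit suffix like G or T), and count, so both `osd-devices=32G,2` and `osd-devices=<storagePool>,32G,2` are valid shapes. A toy classifier illustrating how the three fields can be told apart by shape (illustrative only, not juju's real parser):

```python
import re

def parse_storage_constraint(spec):
    """Toy classifier for a juju-style storage constraint string.

    Sketch under the assumption that a bare integer is a count, a number
    with a unit suffix is a size, and anything else is a pool name.
    """
    result = {"pool": None, "size": None, "count": None}
    for token in spec.split(","):
        if re.fullmatch(r"\d+", token):
            result["count"] = int(token)
        elif re.fullmatch(r"\d+(\.\d+)?[MGTP]i?B?", token, re.IGNORECASE):
            result["size"] = token
        else:
            result["pool"] = token
    return result
```

For example, `parse_storage_constraint("mypool,32G,2")` (with a hypothetical pool name) separates out pool, size, and count from the single spec string.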
[16:43] <atdprhs> pepperhead r u here?
[16:46] <atdprhs> I don't think storage pool could be the issue because it has free space just fine
[17:10] <atdprhs> pepperhead you're right, it seems that it could be an issue with the second drive
[17:10] <atdprhs> I just tried to deploy nfs and it works just fine on my side, maybe i'll just stick to nfs
[17:10] <pepperhead> Sorry stepped away to grab lunch. Just learning juju myself. Running into what seems to be similar roadblocks, ceph mainly.
[17:11] <atdprhs> that's alright
[17:11] <pepperhead> Unsure but I think you can specify a directory instead of a drive in that config, but not sure how to get juju/maas to create the directory.
[17:12] <pepperhead> Maybe its a "curtin thing?"
[17:12] <atdprhs> I have no idea tbh
[17:12] <atdprhs> i've been stuck with this for more than 3 weeks tbh
[17:13] <atdprhs> maybe for now, i can live with nfs, but i would have 1 question about it
[17:14] <atdprhs> I chose 1T for the disk size `juju deploy nfs --constraints root-disk=<disk-Size>` would it be better than having multiple ones with 200G?
[17:15] <atdprhs> considering that each 200G would be a machine that would consume 1 CPU
[17:36] <pepperhead> nfs? What does `juju deploy nfs --constraints root-disk=<disk-Size>` do?
[17:37] <pepperhead> ahh https://jaas.ai/nfs/9
[17:39] <pepperhead> so can juju nfs be added to the bundle to provide for a "second drive"?
[17:40] <atdprhs> I don't know what you mean by the 2nd drive, but I am following https://ubuntu.com/kubernetes/docs/storage
[17:41] <atdprhs> `Deploy NFS` worked for me
[17:41] <atdprhs> @pep
[17:41] <pepperhead> ceph seemed to require a second drive
[17:41] <pepperhead> or second location
[17:42] <atdprhs> yes, it does, but actually I just realized the VMs get created but for some strange reason MaaS and juju did not see them
[17:42] <atdprhs> so let me explain: I just realized `failed to start machine 15 (failed to acquire node: No available machine matches constraints: [('agent_name', ['623fe854-6d68-463d-8337-73ce136cc4bb']), ('storage', ['root:0,0:32,1:32,2:8']), ('zone', ['default'])] (resolved to "storage=root:0,0:32,1:32,2:8 zone=default")), retrying in 10s (10 more attempts)`
[17:43] <atdprhs> no, sorry, let me start from the beginning: I deployed 3 ceph-osd so 3 VMs got created + 10 more attempts above that failed, so 3+(3*10)=33
[17:43] <atdprhs> I have just deleted 33 VMs manually
[17:43] <pepperhead> ouch
[17:44] <pepperhead> All I know about ceph is that it is part of the openstack bundle
[17:44] <atdprhs> There is something I thought about trying which is refresh my MaaS pod when it says it failed
[17:44] <atdprhs> some of those VMs actually got deployed too btw
[17:44] <atdprhs> oh, i am working on CDK
[17:44] <atdprhs> Kubernetes
[17:45] <pepperhead> Yeah I want to get openstack running, and then deploy kubernetes into openstack with terraform
[17:45] <atdprhs> Googling around, ceph seems to be pretty new tech, nfs is more stable though and from what I see here, I think I am very happy with NFS since it works straight away
[17:46] <pepperhead> Hoping that the same tf could deploy kubernetes into say...rackspace
[17:46] <atdprhs> i don't know about Openstack
[17:46] <pepperhead> Therefore managing kubernetes by state change
[17:47] <atdprhs> but may I ask, what is the core difference between both?
[17:47] <pepperhead> OpenStack is a private cloud, IaaS.
[17:47] <atdprhs> I thought k8s is too?
[17:47] <atdprhs> I mean private cloud
[17:48] <pepperhead> So with Openstack you can manage storage, networking, loadbalancing and all via terraform
[17:49] <atdprhs> ahh seeing this >> https://www.terraform.io/
[17:49] <atdprhs> `Create reproducible infrastructure` << this is very nice
[17:49] <pepperhead> Openstack provides the "hardware". So you spin your nodes, in my case LXD containers, in openstack.
[17:50] <pepperhead> It provides the networking infrastructure as well as containers and such for k8s
[17:52] <pepperhead> So OpenStack is an IaaS, and Kubernetes is a foundation on which to build a PaaS.
[17:52] <atdprhs> Thanks pepperhead , now it makes sense
[17:53] <atdprhs> is it complex or easier with OpenStack?
[17:54] <pepperhead> And I was hoping JuJu was the magic to make the OpenStack easier to manage. But my available hardware is handicapped by having but one hard drive and one NIC.
[17:54] <atdprhs> Do you have enough disk space on that HDD?
[17:54] <pepperhead> Well, OpenStack adds complexity, but gets you closer to working with deploying k8s to a cloud.
[17:55] <pepperhead> The company I recently joined doesn't have metal, but they want a private cloud. Trying to make do with Intel NUCs
[17:56] <atdprhs> oh BTW, considering that you're trying to deploy osd like i did, there is a chance you might end up with 33 VMs like I did :D
[17:56] <atdprhs> did you check your LXD's containers
[17:56] <pepperhead> I grabbed a Dell Poweredge r720 used from a guy on craigslist, and got openstack up in a couple hours
[17:57] <pepperhead> LOL. I am deploying to metal, Intel NUCs
[17:57] <pepperhead> A stack of them
[17:58] <pepperhead> MaaS is managing them, and Juju is theoretically directing MaaS.
[17:58] <atdprhs> kind of yes
[17:58] <pepperhead> The Openstack Basic charm bundle looks perfect
[17:58] <atdprhs> In my case, I am using KVM instead of LXD (I could've used LXD)
[17:59] <pepperhead> I think LXD is lighter on resources overall, as they use a shared kernel
[17:59] <atdprhs> So MaaS is metal as a service
[17:59] <pepperhead> yes
[17:59] <pepperhead> It kinda turns a machine into a VM
[18:00] <atdprhs> yes, if something goes wrong with LXD, it can be a risk factor with a shared kernel afaik
[18:00] <pepperhead> And juju is developed alongside it. I think it can also manage VM's
[18:00] <atdprhs> you can either have juju to use LXD direct or use MaaS (private cloud)
[18:01] <pepperhead> An LXD container doesn't hold the kernel, so it can only harm itself I think
[18:01] <atdprhs> I used https://conjure-up.io/
[18:01] <pepperhead> I got DevStack running on my single server, works great, about to try conjure-up
[18:06] <atdprhs> (y)
[18:08] <pepperhead> How did the conjure-up work out?
[18:08] <pepperhead> On a single machine?
[18:09] <atdprhs> conjure-up is kind of like a wizard that takes you through the steps to deploy your environment
[18:09] <atdprhs> if you already deployed devstack then you don't need it
[18:09] <atdprhs> if you haven't and need to deploy openstack
[18:10] <pepperhead> But starts you getting familiar with juju/maas I assume
[18:10] <atdprhs> not really, i actually jumped straight into conjure-up then started to learn later juju
[18:10] <pepperhead> Just doing conjure up to see how or if it differs in performance from devstack
[18:11] <atdprhs> conjure-up basically does the installation part for you, further customizations and configurations can be later carried out by you manually via juju
[18:13] <pepperhead> I wish juju-gui allowed for adding a hardware item, like a directory, before running the bundle. Like adding nfs to the bundle and telling it to create a drive of X size from the first drive, then deploying ceph to it.
[18:13] <atdprhs> maas on the other hand can be the very minimal version of openstack for me, it just allows me to manage my metal servers as if they are cloud
[18:13] <atdprhs> so maas is like a private cloud that serves me the hardware and i use conjure-up to kickoff k8s on maas
[18:14] <pepperhead> See I want to build and manage k8s with terraform
[18:14] <pepperhead> IaC
[18:15] <atdprhs> I'm no expert tbh, but maybe this could help >> https://tutorials.ubuntu.com/tutorial/install-openstack-with-conjure-up
[18:16] <atdprhs> in terms of storage, I can't honestly help you with it, I already gave up on ceph and am currently using NFS
[18:16] <atdprhs> I spent over 3 weeks on ceph alone
[18:17] <pepperhead> ouch
[18:18] <atdprhs> maybe this can help >> https://ubuntu.com/openstack/storage
[18:19] <pepperhead> OHHhhhh, so Ceph is just storage. So INSTEAD of Ceph you deployed nfs and use it for storage. The only thing I know of ceph is that it is in the way of allowing my openstack to build
[18:19] <pepperhead> I thought you used nfs to create a point for Ceph to store its "stuff"
[18:20] <pepperhead> Typically Ceph wants an entire drive and it takes it for its "stuff"
[18:20] <atdprhs> no, i used NFS as storage for my k8s
[18:20] <pepperhead> I have heard it's like using ZFS
[18:20] <pepperhead> Creating an array
[18:21] <atdprhs> in k8s, you have the choice to choose between CEPH and NFS
[18:21] <atdprhs> I guess in openstack, you have to choose between Swift and Ceph  as per https://ubuntu.com/openstack/storage
[18:22] <atdprhs> zfs is a storage managed by Ubuntu I guess, that's host OS file system management
[18:25] <atdprhs> yes to > "OHHhhhh, so Ceph is just storage. So INSTEAD of Ceph you deployed nfs and use it for storage."
[18:26] <pepperhead> Now I get it. I think openstack uses Ceph as an array of storage, possibly redundancy in case one fails?
[18:27] <pepperhead> Kinda like a zfs1 array.
[18:28] <atdprhs> Chopper3 explained the difference well here >> https://serverfault.com/a/911582
[18:30] <pepperhead> Nice post, gratzi for the heads up!
[18:30] <atdprhs> no worries, 3 weeks didn't go by that easy
[18:30] <pepperhead> I set up a freenas server with 18TB of storage, and CAN share that out as NFS. Something to think about.
[18:31] <atdprhs> does openstack support nfs?
[18:31] <pepperhead> I use it for storing movies :)
[18:31] <atdprhs> lol
[18:31] <atdprhs> I use plex
[18:31] <pepperhead> I think it can, but thats my home machine, I would have to talk the company into even MORE hardware for that here
[18:32] <pepperhead> Yes, I run plex to serve the shows stored on freenas
[18:33] <pepperhead> I use zfs2 array on freenas, I would need to lose three physical drives to actually lose data
[18:33] <atdprhs> ahh i thought freenas is like plex
[18:34] <pepperhead> Its just a storage server
[18:34] <pepperhead> Take big pile of drives and turn them into a network served array of storage
[18:35] <atdprhs> i usually upload my library to the plex server itself
[18:35] <atdprhs> maybe i'll try freenas
[18:37] <atdprhs> i am going to bed, it's almost 5 AM
[18:37] <pepperhead> WHAT!?
[18:37] <pepperhead> Where are you?
[18:37] <atdprhs> Australia
[18:38] <pepperhead> Oi down under
[18:38] <pepperhead> Good luck and get some sleep mate!
[18:38] <atdprhs> lol, yup it is
[18:38] <atdprhs> thx, and goodnight
[18:38] <pepperhead> I am in Atlanta Georgia
[18:38] <pepperhead> worlds apart. G'night
[18:39] <atdprhs> agreed
[21:40] <ec0> hey all, building a charm and charm proof is returning an ascii decode error, can't find any files which could be causing this issue in the charm: https://paste.ubuntu.com/p/qqjzBn4YFh/
[21:40] <ec0> is anyone else seeing this if they try to charm proof with the latest snap installed charm?
[21:40] <ec0> I'll go file a bug, but just curious if it's just me
[21:41] <ec0> have tried charm all the way up to edge
[21:43] <rick_h> ec0:  hmm, what version of python?
[21:43] <ec0> on my host, 3.7.3
[21:44] <ec0> the snap appears to be using 3.6....something
[21:44] <rick_h> hmm, I ran charm proof today w/o issue but not sure on the version/update today or the like
[21:44] <ec0> interesting, I just tried on a different charm I haven't touched in a while and it proofed
[21:45] <ec0> I've checked the git history for this charm and can't see any occurrences of 0xe6 in the changed files
[21:45] <ec0> (`grep -P '\xe6' -R *`)
[21:45] <ec0> for example
[21:46] <ec0> interesting, line 380 is reading the README
[21:47] <rick_h> maybe wipe the readme and see if it proofs
[21:47] <ec0> trying now
[21:47] <ec0> hmm, no sadly
[21:53] <ec0> OK, I'll dig at this a little bit more when I've got time to swing back to it, and file an issue if needed, thanks for checking rick_h
[22:08] <ec0> oh lol, rick_h - I built my own copy of charm tools, and added some extra debugging, charm proof is trying to read ".README.md.swp", which is obviously the vim swap file.
[22:08] <ec0> I'll file an issue
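The `\xe6` hunt above came up empty because the shell glob `*` skips dotfiles, so the vim swap file `.README.md.swp` was never searched. A small sketch (hypothetical helper, not part of charm-tools) that walks every file under a directory, hidden files included, and reports any non-ASCII bytes:

```python
import os

def find_non_ascii(root):
    """Yield (path, offset, byte) for each non-ASCII byte under root.

    Unlike a shell glob, os.walk also visits hidden files such as vim's
    .README.md.swp swap files, which is where the stray byte hid here.
    """
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                data = f.read()
            for offset, byte in enumerate(data):
                if byte > 0x7F:
                    yield path, offset, byte
```

Running `list(find_non_ascii("."))` inside the charm directory would have pointed straight at the swap file rather than at the README itself.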
[22:11] <ec0> seems related to https://github.com/juju/charm-tools/issues/421