wallyworldbabbageclunk: i may have found another --force case. it seems an app with a hook error in the install hook cannot be removed. not sure if you've come across that before00:52
babbageclunkno, I haven't seen that. At least it's easy to reproduce!00:53
wallyworldyeah, i had the ubuntu charm locally with an exit 1 in the install hook00:53
wallyworldand remove --force didn't work00:53
wallyworldwill add it to the queue of things to look at00:54
wallyworldkelvinliu: so yeah, can confirm that it works for iaas, i'll try and investigate00:55
kelvinliuso something must be missing for caas00:56
kelvinliuwallyworld: got this PR to fix encoding issue on secret data, +1 plz https://github.com/juju/juju/pull/10621  thanks!02:30
wallyworldkelvinliu: why is it decoding the base64 data? the k8s secret spec Data attribute expects encoded data doesn't it?02:36
kelvinliuk8s always tries to encode02:37
wallyworldhmmmm, that seems to be in conflict with the comment on the Data attribute?02:39
wallyworld// Data contains the secret data. Each key must consist of alphanumeric02:39
wallyworld// characters, '-', '_' or '.'. The serialized form of the secret data is a02:39
wallyworld// base64 encoded string, representing the arbitrary (possibly non-string) data value here.02:39
wallyworldthe yaml examples appear to pass in encoded data02:40
kelvinliui tested02:45
kelvinliuif we pass encoded data to Data directly, k8s will encode it again02:46
wallyworldi guess i don't understand why k8s struct needs to be created with unencoded values for both Data and StringData when the yaml doesn't02:47
wallyworld  username: administrator02:48
wallyworld  username: YWRtaW5pc3RyYXRvcg==02:48
wallyworldWhere YWRtaW5pc3RyYXRvcg== decodes to administrator02:48
kelvinliuwhen you `kubectl get -o yaml`, the stringData is merged into Data and encoded as well02:49
kelvinliuafter applied, no stringData and no raw strings anymore, and all encoded.02:50
kelvinliuStringData is just a helper func02:50
wallyworldso you say "encode"02:51
wallyworldbut the PR decodes02:51
kelvinliuwe decode, then k8s encode02:51
kelvinliuif we don't decode, kubectl get result will be double encoded02:52
wallyworldseems weird to me but if that's how k8s works then who am i to argue. lgtm, ty02:53
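The encode/decode behaviour discussed above can be sketched in plain Python. This illustrates the base64 round-trips only, not the actual juju or k8s client code; the point is that handing already-encoded data to a client that encodes on serialization produces a double-encoded stored value:

```python
import base64

# The raw secret value and its base64 form quoted in the discussion above.
raw = b"administrator"
encoded = base64.b64encode(raw)  # b"YWRtaW5pc3RyYXRvcg=="

# If already-encoded data is handed to a client that encodes once more on
# serialization, the stored value ends up double-encoded:
double = base64.b64encode(encoded)
assert base64.b64decode(double) == encoded  # one decode still leaves encoded data

# Hence the fix in the PR: decode first, so the client encodes exactly once.
assert base64.b64decode(encoded) == raw
```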
wallyworldkelvinliu: i am still testing the resource upgrade thing. seems to work on 2.7, so i'll test again on 2.6 to see what's happening02:58
kelvinliuso it works on caas 2.7?02:58
wallyworldi think so03:04
wallyworldbut i need to check more since it seems broken earlier03:04
wallyworldkelvinliu: ah, it works only for file type resources03:11
wallyworldkelvinliu: found the issue - there's a method which uses the resource fingerprint to see if the resource has changed, but for oci image resources, the fingerprint is always "" as we don't have the oci image to calculate the hash from03:24
kelvinliushould we simply use the image path?03:24
kelvinliufull path03:25
wallyworldsomething like that, looking into it03:25
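The fallback being discussed could look roughly like this. This is a hypothetical sketch: `resource_changed` and the dict keys are illustrative names, not juju's actual API. The idea is simply that when no fingerprint exists (as for oci-image resources), change detection falls back to comparing the image path:

```python
def resource_changed(old, new):
    """Return True if the resource should be treated as changed.

    oci-image resources have an empty fingerprint (there is no image
    blob to hash), so fall back to comparing the registry path.
    Illustrative sketch only.
    """
    if new["type"] == "oci-image" or not new["fingerprint"]:
        return old["path"] != new["path"]
    return old["fingerprint"] != new["fingerprint"]
```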
wallyworldbabbageclunk: did you have vsphere creds handy, any chance you could pull joe's branch and smoke test bootstrap, deploy, ssh on vsphere?03:26
babbageclunkyeah, I do - sure03:26
wallyworldi've done k8s and azure and it looks good03:27
babbageclunklooking now03:27
wallyworldleave a comment on his PR03:27
wallyworldi tested bootstrap, deploy, ssh, add-unit03:27
kelvinliuhi wallyworld: got 2mins to discuss metadata change?05:51
wallyworldkelvinliu: sure06:13
kelvinliuwallyworld: wait, one more thing..06:25
kelvinliuyes, plz06:26
hpidcockwallyworld: tomorrow is fine but it's ready https://github.com/juju/juju/pull/1060607:37
wallyworldheading out to dinner soon so will look later or tmw07:38
hpidcockwallyworld: tomorrow sounds great enjoy your night07:38
wallyworldwill do, son's gf's b'day07:39
stickupkidmanadart, in regards to thumper email, isn't this the availability zone issue that I've got a PR for08:21
stickupkidmanadart, one that i keep opening and closing and never merging08:22
manadartstickupkid: Not sure. I was going to diff 2.6 vs develop to see why edge is OK, but 2.6 not...08:23
stickupkidmanadart, I did "git diff develop 2.6 provider/lxd"08:24
stickupkidmanadart, and "container/lxd"08:24
stickupkidmanadart, didn't show much tbh, other than the new packing, network and ineffassign stuff08:24
achilleasamanadart: the packing stuff landed yesterday so I would recommend diffing before that08:25
achilleasabtw, is that error from juju or from lxd?08:26
stickupkidachilleasa, lxd08:26
stickupkidachilleasa, it's because it's trying to unmarshal an error if I remember correctly08:26
achilleasaCould it be resolved if we delete the cached images?08:27
achilleasaI think I 've stumbled on that error before08:27
manadartDependency is the same too...08:28
stickupkidyeah first thing i checked08:28
stickupkidanyone else hit "GOCACHE isn't set" when using Go 1.12? probably old news08:52
stickupkidanyone going to forward-port 2.6 into develop, or shall I do it?09:26
pepperheadGood Morning! o/15:42
pepperheadWell, nearing afternoon already.15:42
pepperheadQuick Question hopefully: Are there instructions or capability for a charm bundle to run a deploy of OpenStack on four machines as nodes that ONLY have one drive and one NIC port avail? I now have maas w/juju bootstrapped on two other machines. Six machines total.15:45
pepperheadSorry, asked in conjure-up as well. But they are more of a one machine solution. Moved the Q over here.15:47
atdprhshello everyone, juju deploy works perfectly with everything except when I try to deploy ceph-osd, I usually get `failed to start machine 15 (failed to acquire node: No available machine matches constraints: [('agent_name', ['623fe854-6d68-463d-8337-73ce136cc4bb']), ('storage', ['root:0,0:32,1:32,2:8']), ('zone', ['default'])] (resolved to "storage=root:0,0:32,1:32,2:8 zone=default")), retrying in 10s (10 more attempts)`16:30
atdprhsThis is following https://ubuntu.com/kubernetes/docs/storage / `juju deploy -n 3 ceph-osd --storage osd-devices=32G,2 --storage osd-journals=8G,1`16:30
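The resolved constraint string in the error above (`root:0,0:32,1:32,2:8`) is a comma-separated list of `label:size-in-GB` pairs. A small illustrative parser (assumption: this sketch only splits the string quoted in the error; it is not MAAS's actual constraint-matching code):

```python
def parse_storage(constraint):
    """Split a resolved storage constraint like 'root:0,0:32,1:32,2:8'
    into (label, size_gb) pairs. A size of 0 means 'any size'.
    Illustrative sketch only, not MAAS code."""
    pairs = []
    for item in constraint.split(","):
        label, size = item.rsplit(":", 1)
        pairs.append((label, int(size)))
    return pairs
```

Read this way, the failing acquire asks for a root disk of any size plus two 32G disks and one 8G disk, which no available machine satisfied.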
pepperheadatdprhs I think Ceph needs a second drive, or at least specified in the config? I am trying to work around ceph drive req as well.16:33
atdprhsthanks pepperhead, do you know which config I should be checking in this regards?16:34
atdprhsafter 10 attempts, now it's `No available machine matches constraints: [('agent_name', ['623fe854-6d68-463d-8337-73ce136cc4bb']), ('storage', ['root:0,0:32,1:32,2:8']), ('zone', ['default'])] (resolved to "storage=root:0,0:32,1:32,2:8 zone=default")`16:35
atdprhsquestion, what I do know is that `juju deploy -n 3 ceph-osd --storage osd-devices=32G,2 --storage osd-journals=8G,1` can also be run as `juju deploy -n 3 ceph-osd --storage osd-devices=<storagePool>,32G,2 --storage osd-journals=<storagePool>,8G,1`16:38
atdprhsplease correct me if I'm mistaken, storage pool can be obtained via `virsh pool-list`16:39
atdprhsthe reason I'm looking into storage pool, cuz i'm thinking that maybe it's failing because of storage pool?16:40
atdprhspepperhead r u here?16:43
atdprhsI don't think storage pool could be the issue because it has free space just fine16:46
atdprhspepperhead you're right, it seems that it could be an issue with the second drive17:10
atdprhsI just tried to deploy nfs and it works just fine on my side, maybe i'll just stick to nfs17:10
pepperheadSorry stepped away to grab lunch. Just learning juju myself. Running into what seem to be similar roadblocks, ceph mainly.17:10
atdprhsthat's alright17:11
pepperheadUnsure but I think you can specify a directory instead of a drive in that config, but not sure how to get juju/maas to create the directory.17:11
pepperheadMaybe its a "curtin thing?"17:12
atdprhsI have no idea tbh17:12
atdprhsi've been stuck with this for more than 3 weeks tbh17:12
atdprhsmaybe for now, i can live with nfs, but i would have 1 question about it17:13
atdprhsI chose 1T for the disk size `juju deploy nfs --constraints root-disk=<disk-Size>` would it be better than having multiple ones with 200G?17:14
atdprhsconsidering that each 200G would be a machine that would consume 1 CPU17:15
pepperheadnfs? What does `juju deploy nfs --constraints root-disk=<disk-Size>` do?17:36
pepperheadahh https://jaas.ai/nfs/917:37
pepperheadso can juju nfs be added to the bundle to provide for a "second drive"?17:39
atdprhsI don't know what you mean by the 2nd drive, but I am following https://ubuntu.com/kubernetes/docs/storage17:40
atdprhs`Deploy NFS` worked for me17:41
pepperheadceph seemed to require a second drive17:41
pepperheador second location17:41
atdprhsyes, it does, but actually I just realized the VMs get created but for some strange weird reason MaaS and juju did not see them17:42
atdprhsso to explain, I just realized `failed to start machine 15 (failed to acquire node: No available machine matches constraints: [('agent_name', ['623fe854-6d68-463d-8337-73ce136cc4bb']), ('storage', ['root:0,0:32,1:32,2:8']), ('zone', ['default'])] (resolved to "storage=root:0,0:32,1:32,2:8 zone=default")), retrying in 10s (10 more attempts)`17:42
atdprhsno, sorry, maybe from the start, I deploy 3 ceph-osd so 3 VMs got created + 10 more attempts above that failed so 3+(3*10)=3317:43
atdprhsI have just deleted 33 VMs manually17:43
pepperheadAll I know about ceph is that it is part of the openstack bundle17:44
atdprhsThere is something I thought about trying which is refresh my MaaS pod when it says it failed17:44
atdprhssome of those VMs actually got deployed too btw17:44
atdprhsoh, i am working on CDK17:44
pepperheadYeah I want to get openstack running, and then deploy kubernetes into openstack with terraform17:45
atdprhsGoogling around, ceph seems to be pretty new tech, nfs is more stable though and from what I see here, I think I am very happy with NFS since it works straight away17:45
pepperheadHoping that the same tf could deploy kubernetes into say...rackspace17:46
atdprhsi don't know about Openstack17:46
pepperheadTherefore managing kubernetes by state change17:46
atdprhsbut may I ask, what is the core difference between both?17:47
pepperheadOpenStack is a private cloud, IaaS.17:47
atdprhsI thought k8s is too?17:47
atdprhsI mean private cloud17:47
pepperheadSo with Openstack you can manage storage, networking, loadbalancing and all via terraform17:48
atdprhsahh seeing this >> https://www.terraform.io/17:49
atdprhs`Create reproducible infrastructure` << this is very nice17:49
pepperheadOpenstack provides the "hardware". So you spin your nodes, in my case LXD containers, in openstack.17:49
pepperheadIt provides the networking infrastructure as well as containers and such for k8s17:50
pepperheadSo OpenStack is an IaaS, and Kubernetes is a foundation on which to build a PaaS.17:52
atdprhsThanks pepperhead , now it makes sense17:52
atdprhsis it complex or easier with OpenStack?17:53
pepperheadAnd I was hoping JuJu was the magic to make OpenStack easier to manage. But my available hardware is handicapped by having but one hard drive and one NIC.17:54
atdprhsDo you have enough disk space on that HDD?17:54
pepperheadWell, OpenStack adds complexity, but gets you closer to working with deploying k8s to a cloud.17:54
pepperheadThe company I recently joined doesnt have Metal, but they want a private cloud. Trying to make do with Intel NUCs17:55
atdprhsoh BTW, considering that you're trying to deploy osd like i did, there is a chance you might end up with 33 VMs like I did :D17:56
atdprhsdid you check your LXD's containers17:56
pepperheadI grabbed a Dell Poweredge r720 used from a guy on craigslist, and got openstack up in a couple hours17:56
pepperheadLOL. I am deploying to metal, Intel NUCs17:57
pepperheadA stack of them17:57
pepperheadMaaS is managing them, and Juju is theoretically directing MaaS.17:58
atdprhskind of yes17:58
pepperheadThe Openstack Basic charm bundle looks perfect17:58
atdprhsIn my case, I am using KVM instead of LXD (I could've used LXD)17:58
pepperheadI think LXD is lighter on resources overall, as they use a shared kernel17:59
atdprhsSo MaaS is metal as a service17:59
pepperheadIt kinda turns a machine into a VM17:59
atdprhsyes, something goes wrong with LXD, can be a risk factor with shared kernel afaik18:00
pepperheadAnd juju is developed alongside it. I think it can also manage VMs18:00
atdprhsyou can either have juju to use LXD direct or use MaaS (private cloud)18:00
pepperheadAn LXD container doesn't hold the kernel, so it can only harm itself I think18:01
atdprhsI used https://conjure-up.io/18:01
pepperheadI got DevStack running on my single server, works great, about to try conjure-up18:01
pepperheadHow did the conjure-up work out?18:08
pepperheadOn a single machine?18:08
atdprhsconjure-up is kind of like a wizard that takes you through the steps to deploy your environment18:09
atdprhsif you already deployed devstack then you don't need it18:09
atdprhsif you haven't and need to deploy openstack18:09
pepperheadBut starts you getting familiar with juju/maas I assume18:10
atdprhsnot really, i actually jumped straight into conjure-up then started to learn later juju18:10
pepperheadJust doing conjure up to see how or if it differs in performance from devstack18:10
atdprhsconjure-up basically does the installation part for you, further customizations and configurations can be later carried out by you manually via juju18:11
pepperheadI wish juju-gui allowed for adding a hardware item, like a directory, before running the bundle. Like adding nfs to the bundle and telling it to create a drive of X size from the first drive, then deploying ceph to it.18:13
atdprhsmaas on the other hand can be the very minimal version of openstack for me, it just allows me to manage my metal servers as if they are cloud18:13
atdprhsso maas is like a private cloud that serves me the hardware and i use conjure-up to kickoff k8s on maas18:13
pepperheadSee I want to build and manage k8s with terraform18:14
atdprhsI'm not expert tbh, but maybe this could help >> https://tutorials.ubuntu.com/tutorial/install-openstack-with-conjure-up18:15
atdprhsin terms of storage, I can't honestly help you with it, I already gave up on ceph and am currently using NFS18:16
atdprhsI spent over 3 weeks on ceph alone18:16
atdprhsmaybe this can help >> https://ubuntu.com/openstack/storage18:18
pepperheadOHHhhhh, so Ceph is just storage. So INSTEAD of Ceph you deployed nfs and use it for storage. The only thing I know of ceph is that it is in the way of allowing my openstack to build18:19
pepperheadI thought you used nfs to create a point for Ceph to store its "stuff"18:19
pepperheadTypically Ceph wants an entire drive and it takes it for its "stuff"18:20
atdprhsno, i used NFS as storage for my k8s18:20
pepperheadI have heard it being like using ZFS18:20
pepperheadCreating an array18:20
atdprhsin k8s, you have the choice to choose between CEPH and NFS18:21
atdprhsI guess in openstack, you have to choose between Swift and Ceph  as per https://ubuntu.com/openstack/storage18:21
atdprhszfs is a storage managed by Ubuntu I guess, that's host OS file system management18:22
atdprhsyes to > "OHHhhhh, so Ceph is just storage. So INSTEAD of Ceph you deployed nfs and use it for storage."18:25
pepperheadNow I get it. I think openstack uses Ceph as an array of storage, possibly redundancy in case one fails?18:26
pepperheadKinda like a zfs1 array.18:27
atdprhsChopper3 explained the difference well here >> https://serverfault.com/a/91158218:28
pepperheadNice post, gratzi for the heads up!18:30
atdprhsno worries, 3 weeks didn't go by that easy18:30
pepperheadI set up a freenas server with 18TB of storage, and CAN share that out as NFS. Something to think about.18:30
atdprhsdoes openstack support nfs?18:31
pepperheadI use it for storing movies :)18:31
atdprhsI use plex18:31
pepperheadI think it can, but thats my home machine, I would have to talk the company into even MORE hardware for that here18:31
pepperheadYes, I run plex to serve the shows stored on freenas18:32
pepperheadI use zfs2 array on freenas, I would need to lose three physical drives to actually lose data18:33
atdprhsahh i thought freenas is like plex18:33
pepperheadIts just a storage server18:34
pepperheadTake big pile of drives and turn them into a network served array of storage18:34
atdprhsi usually upload my library to plex server itself18:35
atdprhsmaybe i'll try freenas18:35
atdprhsi am going to bed, it's almost 5 AM18:37
pepperheadWhere are you?18:37
pepperheadOi down under18:38
pepperheadGood luck and get some sleep mate!18:38
atdprhslol, yup it is18:38
atdprhsthx, and goodnight18:38
pepperheadI am in Atlanta Georgia18:38
pepperheadworlds apart. G'night18:38
ec0hey all, building a charm and charm proof is returning an ascii decode error, can't find any files which could be causing this issue in the charm: https://paste.ubuntu.com/p/qqjzBn4YFh/21:40
ec0is anyone else seeing this if they try to charm proof with the latest snap installed charm?21:40
ec0I'll go file a bug, but just curious if it's just me21:40
ec0have tried charm all the way up to edge21:41
rick_hec0:  hmm, what version of python?21:43
ec0on my host, 3.7.321:43
ec0the snap appears to be using 3.6....something21:44
rick_hhmm, I ran charm proof today w/o issue but not sure on the version/update today or the like21:44
ec0interesting, I just tried on a different charm I haven't touched in a while and it proofed21:44
ec0I've checked the git history for this charm and can't see any occurrences of 0xe6 in the changed files21:45
ec0(grep -P '\xe6' * -R)21:45
ec0for example21:45
ec0interesting, line 380 is reading the README21:46
rick_hmaybe wipe the readme and see if it proofs21:47
ec0trying now21:47
ec0hmm, no sadly21:47
ec0OK, I'll dig at this a little bit more when I've got time to swing back to it, and file an issue if needed, thanks for checking rick_h21:53
ec0oh lol, rick_h - I built my own copy of charm tools, and added some extra debugging, charm proof is trying to read ".README.md.swp", which is obviously the vim swap file.22:08
ec0I'll file an issue22:08
ec0seems related to https://github.com/juju/charm-tools/issues/42122:11
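The failure mode above can be demonstrated with a short sketch: a hidden editor swap file such as `.README.md.swp` contains binary bytes (e.g. 0xe6), and decoding such a byte as ASCII raises the error ec0 saw. The `proofable_files` filter below is illustrative only, not charm-tools' actual code:

```python
import os

def proofable_files(root):
    """Yield files worth linting, skipping hidden files such as vim's
    .README.md.swp swap files (illustrative filter, not charm-tools code)."""
    for dirpath, dirnames, filenames in os.walk(root):
        # prune hidden directories in place so os.walk skips them
        dirnames[:] = [d for d in dirnames if not d.startswith(".")]
        for name in filenames:
            if not name.startswith("."):
                yield os.path.join(dirpath, name)

# A binary byte like 0xe6 in a swap file is what triggers the failure:
try:
    bytes([0xe6]).decode("ascii")
except UnicodeDecodeError:
    pass  # this is the decode error charm proof hit
```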

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!