[00:03] <kelvinliu> wallyworld: I just ran a test, juju bootstrap to microk8s, and we have the rbac resource created: https://pastebin.ubuntu.com/p/5m5Bj9Ytg2/
[00:03] <wallyworld> kelvinliu: did you clear out your local clouds.yaml first?
[00:04] <kelvinliu> but the rbac resources won't be deleted if the ctrl is killed/destroyed. there is a card for this: https://trello.com/c/KrqgRMeH/951-remove-mcirok8s-credential-in-destroy-kill-controller
[00:04] <wallyworld> and credentials.yaml
[00:04] <kelvinliu> yes, I did
[00:04] <wallyworld> and your cred.yaml is missing the rbac-id?
[00:06] <kelvinliu> the rbac-id for microk8s is `microk8s`
[00:06] <kelvinliu> it's not a uid
[00:06] <kelvinliu> because we have only one built-in k8s cloud
[00:07] <wallyworld> and yet when I run `cat microk8s.config | juju add-k8s foo` it does put in an rbac-id
[00:07] <wallyworld> so it's not consistent
[00:07] <wallyworld> and there are also microk8s clusters now
[00:07] <kelvinliu> if it's add-k8s, then you can have many microk8s clouds, but it's not a built-in cloud anymore
[00:07] <wallyworld> so you could access externally like any other k8s cloud
[00:08] <wallyworld> that is true
[00:10] <kelvinliu> the `juju-credential-microk8s` rbac resources are not cleared after the controller is destroyed because the credential is not deleted, which is a TODO as the card says
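Until that card lands, leftover `juju-credential-microk8s` RBAC resources have to be removed by hand. A rough client-go sketch of what that cleanup might look like, assuming the leftovers are a cluster role and cluster role binding with that name and that a microk8s kubeconfig has been exported; the exact resource kinds and the kubeconfig path are assumptions, not necessarily what Juju creates:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path, e.g. exported with `microk8s config > /tmp/microk8s.config`.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/microk8s.config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	// Assumed resource name, taken from the `juju-credential-microk8s` resources mentioned above.
	name := "juju-credential-microk8s"
	if err := cs.RbacV1().ClusterRoleBindings().Delete(ctx, name, metav1.DeleteOptions{}); err != nil {
		fmt.Println("clusterrolebinding:", err)
	}
	if err := cs.RbacV1().ClusterRoles().Delete(ctx, name, metav1.DeleteOptions{}); err != nil {
		fmt.Println("clusterrole:", err)
	}
}
```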
[00:27] <thumper> wallyworld: sorry, was out having lunch
[00:28] <thumper> quick k8s questions with units
[00:28] <wallyworld> no worries, just wanted to touch base on a charm issue
[00:28] <thumper> do we call hostDestroyOps?
[00:28] <wallyworld> in standup now for reals
[00:28] <thumper> sorry
[00:28] <thumper> destroyHostOps
[00:28] <wallyworld> i'll check after standup
[00:28] <thumper> ack
[00:32] <thumper> wallyworld: actually when you're done I should talk with you about k8s models and the all watcher
[00:32] <thumper> I'm in the code now, and I'm worried
[00:35] <thumper> actually less worried now
[00:35]  * thumper keeps reading
[00:35] <babbageclunk> phew
[00:36] <tlm[m]> ?
[00:37] <wallyworld> thumper: wanna jump in standup?
[00:37] <thumper> yeah
[00:37] <thumper> omw
[03:35] <thumper> ugh... some of our tests are awful
[03:38] <thumper> who would have thought that a simple denormalisation would trigger so many bad tests
[03:38] <thumper> ok... I wouldn't have thought that
[03:39] <thumper> but in many places, we were creating subordinates for units that weren't assigned to machines
[03:39] <thumper> which is kinda bad
[05:01] <thumper> some of our tests are incredibly contrived situations that can never happen in the normal flow of real deployments
[05:01] <thumper> like opening a port on a unit that isn't assigned to a machine
[05:01] <thumper> that'll never happen
[09:57] <flxfoo> hi all
[09:57] <flxfoo> is there any charm for a redis cluster running on bionic?
[10:21] <stickupkid> flxfoo, https://jaas.ai/u/omnivector/redis-cluster/bundle/1
[10:21] <flxfoo> thanks stickupkid
[11:43] <manadart> stickupkid: Can you look at https://github.com/juju/juju/pull/11330 ? I still need to add some tests, but I expect that to be the only change from here.
[11:43] <stickupkid> manadart, yeah, I'll get to it in a bit, fighting microstack atm
[11:51] <stickupkid> manadart, did you ever get this bootstrapping to microstack?
[11:51] <stickupkid> ERROR failed to bootstrap model: cannot start bootstrap instance in availability zone "nova": cannot run instance:  with fault "No valid host was found. "
[11:51] <manadart> stickupkid: Nope.
[11:52] <manadart> stickupkid: You import the image, add the stream and so on?
[11:52] <stickupkid> i'll show you one sec
[11:53] <hml> stickupkid:  that’s a typical “can’t create the host” error msg from o7k
[11:53] <stickupkid> manadart, https://github.com/juju/juju/pull/11253/files#diff-a86032d7aeab77815b81f23cdfb46921
[11:53] <hml> stickupkid: you’d have to look at the nova logs to find the cause
[11:53] <stickupkid> yay
[11:53] <hml> stickupkid:  not enough memory or some random failure
[11:53] <hml> stickupkid: it’s the catch-all error message for “shit didn’t work”… :-D
[11:54] <stickupkid> \o/
[11:57] <stickupkid> hml, seems legit actually https://paste.ubuntu.com/p/rGBWN5w4B7/
[12:03] <hml> stickupkid: usually there’s a better msg buried in the logs.  there are many nova logs, trying to remember which one specifically.
[12:04] <hml> i don’t see the cause of the “NoValidHost” in the pastebin
[12:05] <hml> stickupkid: maybe this is the ptr to the issue: “nova/scheduler/manager.py”, line 156, in select_destinations
[12:05] <hml> they left the reason empty just to be helpful.  ;-D
[12:20] <hml> stickupkid: i think i found my permission issue… the upgrade was handled by the machine agent… and the errors were in the unit agent…
[12:20] <hml> the machine agent handled things okay…
[12:21] <hml> now the question is how NOT to run it in the unit agent.
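For context, the question above is about making an upgrade step run only under the machine agent and be skipped by the unit agent. The hypothetical sketch below shows that gating pattern in general terms; the types and names are made up for illustration and are not Juju's actual upgrades API:

```go
package main

import "fmt"

// AgentKind identifies which agent is executing the upgrade steps.
// Hypothetical type for illustration only.
type AgentKind int

const (
	MachineAgent AgentKind = iota
	UnitAgent
)

// Step pairs an upgrade action with the agent kinds allowed to run it.
type Step struct {
	Description string
	Targets     []AgentKind
	Run         func() error
}

// runSteps executes only the steps whose targets include the current agent kind,
// so a machine-only step is silently skipped by a unit agent.
func runSteps(current AgentKind, steps []Step) error {
	for _, s := range steps {
		allowed := false
		for _, t := range s.Targets {
			if t == current {
				allowed = true
				break
			}
		}
		if !allowed {
			continue
		}
		if err := s.Run(); err != nil {
			return fmt.Errorf("step %q: %w", s.Description, err)
		}
	}
	return nil
}

func main() {
	steps := []Step{{
		Description: "fix log directory permissions",
		Targets:     []AgentKind{MachineAgent}, // machine agent only
		Run:         func() error { fmt.Println("fixing permissions"); return nil },
	}}
	_ = runSteps(UnitAgent, steps)    // does nothing
	_ = runSteps(MachineAgent, steps) // runs the step
}
```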
[12:50] <stickupkid> damn it
[12:50] <stickupkid> it's multipass :(
[12:50] <stickupkid> i <- o/
[12:51] <stickupkid> I'm giving it 12G of ram, how much more does it want! what does it think it is, mongo!
[12:52] <rick_h_> lol, we have a joke that our dog can count "1..2...all of them". So when she sees bones she says she has "1..2...all of them"
[12:52] <rick_h_> sounds like multi-pass wants "1...2..all of them" memory :)
[12:53] <stickupkid> my tests pass locally though, so that's a bonus
[13:16] <zeestrat> We usually say that it's like a garage. It's either new or full ;)
[13:30] <hml> achilleasa_:  responded to a few comments.  have a read and let me know if we should chat?
[13:45] <achilleasa_> hml: thanks for the comments.
[13:46] <hml> achilleasa:  HO?
[13:46] <achilleasa> sure
[14:09] <stickupkid> manadart, CR done
[14:09] <manadart> stickupkid: Thanks.
[14:09] <stickupkid> hml, it's not memory that's an issue
[14:09] <stickupkid> Memory usage:   5.4G out of 23.5G
[14:09] <hml> stickupkid:  can you make an instance with a bigger flavor?
[14:10] <hml> stickupkid: processor?
[14:10] <stickupkid> given it 12 cores
[14:11] <stickupkid> OT: I do like this cattle setup - I can totally just chuck everything out and start again
[14:22] <stickupkid> hml, DISK SPACE
[14:22] <stickupkid> not getting that time back - but it's almost working now
[14:22] <hml> stickupkid: that was my next guess, some sort of resource issue usually
[14:23] <stickupkid> that's probably the worst error message I've ever seen
[14:23] <hml> stickupkid: ha!  I like “Error” too
[14:24] <hml> stickupkid: had to track that down in juju.  or was it “Error not found”?  and that was all it printed
[14:24] <hml> stickupkid: but agreed NoValidHost is a pita
[14:24] <stickupkid> proof it works https://paste.ubuntu.com/p/Rxb8hxqKYk/
[14:24] <stickupkid> juju needs to shut the f'up about image metadata though, that's just not right
[14:25] <stickupkid> I'm so happy
[14:26] <stickupkid> right, how do I shut juju up - mission 2
[14:27] <stickupkid> hml, up there with this error message https://content.spiceworksstatic.com/service.community/p/post_images/0000291424/5a6917f0/attached_image/task_failed_successfully.png
[14:27] <hml> stickupkid: lol
[14:52] <stickupkid> did anyone test the new Python static analysis - I'm getting loads of syntax errors
[16:05] <flxfoo> stickupkid: sorry to disturb, you mentioned using the redis-cluster charm... when I tried to use it (within cakephp), I gave the IP of the leader, but I got an exception saying that another member "MOVED"... Do I have to check a special configuration for the cluster?
[16:05] <flxfoo> or cakephp
[16:16] <stickupkid> flxfoo, you need to tell cakephp to follow the redirects, I'm unsure how you'd do that without digging
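Background on the MOVED error: a Redis Cluster node replies with a MOVED redirection when the requested key's hash slot is owned by another node, so a client pointed at a single node must either follow the redirect or be cluster-aware. A minimal Go sketch of the cluster-aware approach using go-redis; the node addresses are placeholders, and the cakephp-side fix would be the equivalent cluster option in whatever redis driver it uses:

```go
package main

import (
	"context"
	"fmt"

	"github.com/go-redis/redis/v8"
)

func main() {
	ctx := context.Background()

	// Placeholder addresses; use the redis-cluster units' addresses from `juju status`.
	rdb := redis.NewClusterClient(&redis.ClusterOptions{
		Addrs: []string{"10.0.0.1:6379", "10.0.0.2:6379", "10.0.0.3:6379"},
	})

	// A cluster-aware client follows MOVED/ASK redirections transparently.
	if err := rdb.Set(ctx, "session:example", "value", 0).Err(); err != nil {
		panic(err)
	}
	val, err := rdb.Get(ctx, "session:example").Result()
	if err != nil {
		panic(err)
	}
	fmt.Println(val)
}
```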
[18:34] <achilleasa> rick_h_: I got the relation-created bits working with my refactoring changes. Still have to type the PR description and QA steps (quite a few scenarios) but if you want to try it out it's here: https://github.com/juju/juju/pull/11341 (you can try juju deploy ./testcharms/charm-repo/quantal/all-hooks -n 2)
[18:37] <rick_h_> achilleasa:  awesome ty
[19:07] <flxfoo> stickupkid: ok, so that will not be out of the box then... thanks