[00:03] wallyworld: I just had a test, juju bootstrap to microk8s, we have the rbac resource created. and https://pastebin.ubuntu.com/p/5m5Bj9Ytg2/
[00:03] kelvinliu: did you clear out your local clouds.yaml first?
[00:04] but the rbac resources won't be deleted if the ctrl is killed/destroyed. there is a card for this https://trello.com/c/KrqgRMeH/951-remove-mcirok8s-credential-in-destroy-kill-controller
[00:04] and credentials.yaml
[00:04] yes, I did
[00:04] and your cred.yaml is missing the rbac-id?
[00:06] the rbac-id for microk8s is `microk8s`
[00:06] it's not a uid
[00:06] because we have only one built-in k8s cloud
[00:07] and yet when i run cat microk8s.config | juju add-k8s foo it does put in an rbac-id
[00:07] so it's not consistent
[00:07] and there are also now microk8s clusters
[00:07] if it's add-k8s, then you can have many microk8s clouds but it's not a built-in cloud anymore
[00:07] so you could access it externally like any other k8s cloud
[00:08] that is true
[00:10] the `juju-credential-microk8s` rbac resources are not cleared after the controller is destroyed because the credential is not deleted, which is a todo as the card said
[00:27] wallyworld: sorry, was out having lunch
[00:28] quick k8s question with units
[00:28] no worries, just wanted to touch base on a charm issue
[00:28] do we call hostDestroyOps?
[00:28] in standup now for reals
[00:28] sorry
[00:28] destroyHostOps
[00:28] i'll check after standup
[00:28] ack
[00:32] wallyworld: actually when you're done I should talk with you about k8s models and the all watcher
[00:32] I'm in the code now, and I'm worried
[00:35] actually less worried now
[00:35] * thumper keeps reading
[00:35] phew
[00:36] ?
[00:37] thumper: wanna jump in standup?
[00:37] yeah
[00:37] omw
[03:35] ugh... some of our tests are awful
[03:38] who would have thought that a simple denormalisation would trigger so many bad tests
[03:38] ok... I wouldn't have thought that
[03:39] but in many places, we were creating subordinates for units that weren't assigned to machines
[03:39] which is kinda bad
[05:01] some of our tests are incredibly contrived situations that can never happen in the normal flow of real deployments
[05:01] like opening a port on a unit that isn't assigned to a machine
[05:01] that'll never happen
[09:57] hi all
[09:57] is there any charm for a redis cluster running on bionic?
[10:21] flxfoo, https://jaas.ai/u/omnivector/redis-cluster/bundle/1
[10:21] thanks stickupkid
[11:43] stickupkid: Can you look at https://github.com/juju/juju/pull/11330 ? I still need to add some tests, but I expect that to be the only change from here.
[11:43] manadart, yeah, I'll get to it in a bit, fighting microstack atm
[11:51] manadart, did you ever get this bootstrapping to microstack?
[11:51] ERROR failed to bootstrap model: cannot start bootstrap instance in availability zone "nova": cannot run instance: with fault "No valid host was found. "
[11:51] stickupkid: Nope.
[11:52] stickupkid: You import the image, add the stream and so on?
[11:52] i'll show you one sec
[11:53] stickupkid: that's a typical can't-create-the-host error msg from o7k
[11:53] manadart, https://github.com/juju/juju/pull/11253/files#diff-a86032d7aeab77815b81f23cdfb46921
[11:53] stickupkid: you'd have to look at the nova logs to find the cause
[11:53] yay
[11:53] stickupkid: not enough memory or some random failure
[11:53] stickupkid: it's the catch-all error message for "shit didn't work"...
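(A rough sketch of the nova-log digging suggested above. The log location is an assumption and differs between installs: classic packages usually log under /var/log/nova/, while a snap-based MicroStack keeps its logs under the snap's own directories, so adjust the path to your setup.)

    # grep the nova logs for the real reason behind "No valid host was found";
    # at debug level the scheduler usually notes which filter rejected the hosts
    sudo grep -iR "NoValidHost\|select_destinations\|returned 0 hosts" /var/log/nova/ | tail -n 20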
:-D
[11:54] \o/
[11:57] hml, seems legit actually https://paste.ubuntu.com/p/rGBWN5w4B7/
[12:03] stickupkid: usually there's a better msg buried in the logs. there are many nova logs, trying to remember which one specifically.
[12:04] i don't see the cause of the "NoValidHost" in the pastebin
[12:05] stickupkid: maybe this is the ptr to the issue: "nova/scheduler/manager.py", line 156, in select_destinations
[12:05] they left the reason empty just to be helpful. ;-D
[12:20] stickupkid: i think i found my permission issue… the upgrade was handled by the machine agent… and the errors were in the unit agent…
[12:20] the machine agent handled things okay…
[12:21] now the question is how NOT to run it in the unit agent.
[12:50] damn it
[12:50] it's multipass :(
[12:50] i <- o/
[12:51] I'm giving it 12G of RAM, how much more does it want! what does it think it is, mongo!
[12:52] lol, we have a joke that our dog can count "1..2...all of them". So when she sees bones she says she has "1..2...all of them"
[12:52] sounds like multi-pass wants "1...2..all of them" memory :)
[12:53] my tests pass locally though, so that's a bonus
[13:16] We usually say that it's like a garage. It's either new or full ;)
[13:30] achilleasa_: responded to a few comments. have a read and let me know if we should chat?
[13:45] hml: thanks for the comments.
=== achilleasa_ is now known as achilleasa
[13:46] achilleasa: HO?
[13:46] sure
[14:09] manadart, CR done
[14:09] stickupkid: Thanks.
[14:09] hml, it's not memory that's the issue
[14:09] Memory usage: 5.4G out of 23.5G
[14:09] stickupkid: can you make an instance with a bigger flavor?
[14:10] stickupkid: processor?
[14:10] given it 12 cores
[14:11] OT: I do like this cattle setup - I can totally just chuck everything out and start again
[14:22] hml, DISK SPACE
[14:22] not getting that time back - but it's almost working now
[14:22] stickupkid: that was my next guess, some sort of resource issue usually
[14:23] that's probably the worst error message I've ever seen
[14:23] stickupkid: ha! I like "Error" too
[14:24] stickupkid: had to track that down in juju. or was it "Error not found". and that was all it printed
[14:24] stickupkid: but agreed, NoValidHost is a pita
[14:24] proof it works https://paste.ubuntu.com/p/Rxb8hxqKYk/
[14:24] juju needs to shut the f'up about image metadata though, that's just not right
[14:25] I'm so happy
[14:26] right, how do I shut juju up - mission 2
[14:27] hml, up there with this error message https://content.spiceworksstatic.com/service.community/p/post_images/0000291424/5a6917f0/attached_image/task_failed_successfully.png
[14:27] stickupkid: lol
[14:52] did anyone test the new python static analysis - i'm getting loads of syntax errors
[16:05] stickupkid: sorry to disturb, you mentioned using the redis-cluster charm... when I tried to use it (within cakephp), I gave the IP of the leader, but I have an exception saying that another member "MOVED"... Do I have to check a special configuration for the cluster?
[16:05] or cakephp
[16:16] flxfoo, you need to tell cakephp to follow the redirects, I'm unsure how you'd do that without digging
[18:34] rick_h_: I got the relation-created bits working with my refactoring changes. Still have to type up the PR description and QA steps (quite a few scenarios) but if you want to try it out it's here: https://github.com/juju/juju/pull/11341 (you can try juju deploy ./testcharms/charm-repo/quantal/all-hooks -n 2)
[18:37] achilleasa: awesome ty
[19:07] stickupkid: ok, so that will not be out of the box then...
thanks
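(For reference, a minimal sketch of the "MOVED" behaviour discussed above, with placeholder host values and illustrative output: a Redis cluster replies with a MOVED redirect when a key's hash slot lives on another node, and a client that is not cluster-aware surfaces that as an error instead of following it. redis-cli shows the difference with its -c cluster flag.)

    # without -c the redirect comes back as an error (output shape is illustrative)
    redis-cli -h <leader-ip> get somekey
    # (error) MOVED 1234 <other-node-ip>:6379
    # with -c redis-cli follows MOVED/ASK redirects itself
    redis-cli -c -h <leader-ip> get somekey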