[00:18] thumper: babbageclunk will correct me, but the BootstrapRaft() bit is called during bootstrap to set up the cluster; i can't recall exactly what it stores
[00:18] It stores the raft configuration into the raft log
[00:18] so for a running system being upgraded, we don't bootstrap
[00:18] so miss that api call
[00:19] ah...
[00:19] ok
[00:19] how big a change is the k8s api?
[00:20] not that big - need to return a struct instead of a string for getting info to set up a pod
[00:20] most of the attrs in 2.4 will be empty
[00:20] that's fine
[00:20] get it done
[00:20] will do
[00:21] need to pave way for storage in 2.5
[01:04] Is there a way to get 2.2.9 working with snap installed lxd?
[01:05] veebers: not sure tbh, may require the apt version
[01:05] thumper: here's that refactoring pr for the api params https://github.com/juju/juju/pull/8844
[01:05] wallyworld: ah apt lxc not apt juju, that makes sense ^_^
[01:06] shoot, I only recently migrated from apt installed lxd to snap installed
[01:06] joy
[01:08] going to spin up an ec2 instance and use that as the host ^_^
[01:08] that works
[01:23] wallyworld: what's the right way to get all controller machines from state?
[01:28] wallyworld: ooh, looks like State.ControllerInfo has it - and then I guess I can ask each machine for their address.
[01:44] babbageclunk: i think so, let me have a look
[01:44] yeah that works but i thought we had another way, i'll poke around
[01:46] I'm struggling to work out how to select the right address for each machine.
[01:46] Trying to follow how the peergrouper does it is doing my head in.
[01:48] I could do it sort-of-heuristically using the api addresses from the agent config: when I find a machine with an address that matches an api address then I can match them up
[01:49] wallyworld: thumper: unskipping test PRs for reviews - 8840:8843 PTAL?
[01:50] babbageclunk: there's got to be a better way
[01:50] anastasiamac_: I'd like to see the premerge check passing before approving
[01:50] i can poke around
[01:50] wallyworld: thanks
[01:51] thumper: only 1 out of 4 had an unrelated failure... the other 3 passed...
[01:54] babbageclunk: did we need a HO?
[01:54] wallyworld: yeah, might be good - in standup?
[01:54] sure
[01:55] anastasiamac_, wallyworld: I'd prefer to push back on the test changes, just because it is introducing more churn without much benefit at this stage
[01:55] I'd prefer them to land post 2.4.0 release
[02:02] thumper: ack
[02:03] thumper: in fact, I'm happy to unskip them in develop branch instead
[03:16] wallyworld: you think the "--agent-stream" addition to juju-upgrade is worth putting in 2.4 release notes?
[03:17] veebers: i think so yeah
[03:17] sweet, I'll make it so
[03:18] as it's a usability improvement
[03:47] I'm being dense, how can I change an agent stream for an upgrade? (juju upgrade-juju -m controller --agent-version 2.4-rc2 --agent-stream=proposed won't work because the controller doesn't support that ^_^)
[03:51] ah, rtfm helps; juju model-config -m controller agent-stream=devel
[05:00] babbageclunk: how goes the patch?
[05:01] wallyworld: just testing it now - I think I have a problem with startup timing, might need to have the raft worker start later.
[05:01] you can gate on !upgrading... you doing that?
[05:03] yup
[05:10] babbageclunk: so this timing issue is not new then
[05:10] Oh, I mean, the raft worker wasn't gated on upgrade, but now it needs to be, so I'm doing it now
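A minimal Go sketch of the gating pattern discussed above (hypothetical names only, not Juju's actual dependency-engine code): the raft worker's start function refuses to run until an "upgrade complete" gate has been unlocked, and bounces with a retryable error until then.

```go
package raftgate

import "errors"

// UpgradeGate is a hypothetical stand-in for the "all upgrade steps have run"
// signal: Unlocked is closed once the agent has finished upgrading.
type UpgradeGate struct {
	Unlocked <-chan struct{}
}

// errNotReady tells the caller (e.g. a dependency engine) to retry later
// rather than treating the still-closed gate as a fatal error.
var errNotReady = errors.New("upgrade not finished; raft worker not started")

// startRaftWorker only constructs the raft worker once the gate is open, so a
// controller that is mid-upgrade never starts the raft machinery too early.
func startRaftWorker(gate UpgradeGate, newRaftWorker func() error) error {
	select {
	case <-gate.Unlocked:
		return newRaftWorker()
	default:
		return errNotReady
	}
}
```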
[05:19] wallyworld: you have a quick sec? https://pastebin.canonical.com/p/SvjPwMpgHB/ I set the agent-stream, but juju-upgrade without an explicit version (i.e. 2.4-rc2) chooses 2.3-rc2 (from a base 2.2.9 install). That's expected behaviour, right? (If I understand the upgrade-juju help correctly)
[05:20] i think it goes +1 by default, can check
[05:22] veebers: first look - the code seems to say it will upgrade to the version of the juju client if not specified
[05:23] wallyworld: I think that's right, just re-reading the help docs
[05:23] wallyworld: ack, that matches my expectations and what I'm seeing. Thanks :-)
[05:23] good, docs match code :-)
[05:23] even better ^_^
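A rough Go sketch of the behaviour settled on above (illustrative only; the Version type and chooseTargetVersion are hypothetical names, not Juju's real upgrade-juju code): an explicit --agent-version wins, otherwise the upgrade targets the version of the juju client that issued the command.

```go
package upgradepick

// Version is a hypothetical stand-in for an agent/tools version number.
type Version struct {
	Major, Minor, Patch int
	Tag                 string // e.g. "rc2"
}

// chooseTargetVersion picks the agent version to upgrade to: an explicit
// --agent-version request wins; otherwise fall back to the version of the
// juju client running the command (rather than "current +1").
func chooseTargetVersion(requested *Version, clientVersion Version) Version {
	if requested != nil {
		return *requested
	}
	return clientVersion
}
```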
[12:20] hello, can anyone please help: trying to deploy and getting this -> no obvious space for container "0/lxd/0", host machine has spaces: "default", "overlay", "provider", "storage" <-- any idea?
[12:21] enrico_: check out https://docs.jujucharms.com/2.3/en/network-spaces and then the binding syntax
[12:22] enrico_: basically juju is saying you've got 4 spaces and we're not sure how to set up the container because we don't know which network interfaces need to be set up in which way to make sure the application is on the right network
[12:28] how's the hangover rick_h_ ?
[12:29] magicaltrout: :) drink responsibly
[12:29] rick_h_: thank you... I've tried to follow the doc.. Juju recognizes the 4 spaces defined in MAAS and in my bundle I've bound them using the example here https://jujucharms.com/nova-cloud-controller/
[12:29] enrico_: ok, so the nova-cloud-controller is what you're putting in 0/lxd/0?
[12:31] rick_h_: yes but I've got the same error for the other services I've spread over other containers! so something is wrong with my syntax?
[12:32] enrico_: maybe, juju isn't liking the binding and fails because it doesn't know what to do
[12:33] rick_h_: please look at https://pastebin.com/3sXEFgtV as an example of my conf
[12:33] rick_h_: Can I do that: mgt: default, when my space is named default?
[12:33] enrico_: right, this is application binding so that the endpoint on the application is bound to the space.
[12:33] enrico_: looking for example
[12:35] GM I keep getting the following in three OSDs that have been rebooted. Two of the OSDs are stuck in upgrading state. The ceph cluster is operational.
[12:36] unit-ceph-osd-0: 08:09:20 DEBUG unit.ceph-osd/0.config-changed Error ENOENT: key 'osd_osd5_luminous_done' doesn't exist
[12:36] unit-ceph-osd-0: 08:09:20 INFO unit.ceph-osd/0.juju-log osd5 is not finished. Waiting
[12:36] unit-ceph-osd-0: 08:09:20 DEBUG unit.ceph-osd/0.config-changed Error ENOENT: error obtaining 'osd_osd5_luminous_start': (2) No such file or directory
[12:36] unit-ceph-osd-0: 08:09:20 INFO unit.ceph-osd/0.juju-log Monitor config-key get failed with message: b''
[12:36] unit-ceph-osd-0: 08:09:21 INFO unit.ceph-osd/0.juju-log waiting for 16 seconds
[12:36] enrico_: so I'm looking at https://jujucharms.com/glance/265 and I don't see any endpoint called mgt
[12:36] enrico_: so I think the thing is getting those bindings to be the real endpoints on the charm bound to the space
[12:37] rick_h_: oooook so the endpoint names are predefined by the charms and I have to use them as they are! ok, makes sense! thank you! going to try
[12:38] enrico_: right, so that juju can help the charm work out which of the data it knows about needs to go over which networks
[12:38] enrico_: e.g. ceph has a data plane for ceph->ceph communication that something like keystone will not
[12:39] rick_h_: yes okok I was missing that key, thank you :)
[12:39] enrico_: np, good luck!
=== Guest53 is now known as acss
[12:41] Sorry, I entered as Guest53. Can anyone tell me why I would get the key osd_osd5_luminous_done
[12:41] from above in the chat
[12:42] This seems to be created by juju
[13:57] Hello, I have three OSDs and it appears that somehow juju is referencing keys for the wrong hosts. On hostname osd6 I keep getting an error unit.ceph-osd/0.config-changed Error ENOENT: key 'osd_osd5_luminous_done' doesn't exist
[13:57] on hostname osd5 I get unit.ceph-osd/0.config-changed Error ENOENT: key 'osd_osd4_luminous_done' doesn't exist
[13:58] Can anyone please tell me the direction to look into to find out why this is happening?
[13:59] acss: have you asked in #openstack-charms?
[14:00] No I can try that thank you
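For context on those keys: the log messages suggest a rolling-upgrade lock, where each OSD host waits for its predecessor to publish an "osd_<host>_<release>_done" marker in the ceph monitor's config-key store before upgrading itself, which is why osd6 is polling for osd5's key and osd5 for osd4's. A rough Go sketch of that waiting loop (the real logic lives in the ceph-osd charm's hooks; all names below are hypothetical, not the charm's actual code):

```go
package cephupgrade

import (
	"fmt"
	"time"
)

// KeyStore is a hypothetical stand-in for the ceph monitor config-key store
// the charm queries ("Monitor config-key get" in the log above).
type KeyStore interface {
	Exists(key string) (bool, error)
}

// waitForPredecessor polls until the previous OSD host has published its
// "osd_<prev>_<release>_done" marker - the pattern behind the
// "osd5 is not finished. Waiting" messages above.
func waitForPredecessor(store KeyStore, prevHost, release string, timeout time.Duration) error {
	key := fmt.Sprintf("osd_%s_%s_done", prevHost, release)
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		ok, err := store.Exists(key)
		if err == nil && ok {
			return nil // predecessor finished; safe to upgrade this OSD
		}
		time.Sleep(10 * time.Second) // the charm logs "waiting for N seconds"
	}
	return fmt.Errorf("timed out waiting for %s", key)
}
```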