[04:29] <babbageclunk> wallyworld_: is this the raft bug you were talking about? https://bugs.launchpad.net/juju/+bug/1880751
[04:29] <mup> Bug #1880751: flapping HA controller machine with lease issue on 2.8-rc2 on GCP <juju:New> <https://launchpad.net/bugs/1880751>
[04:44] <wallyworld_> babbageclunk: yeah
[04:52] <babbageclunk> Could the events be getting blocked by the error being logged by the hubwatcher?
[04:52] <babbageclunk> (I mean the lease operations)
[04:56] <wallyworld_> babbageclunk: i haven't looked at it at all so have no insight
[05:08] <babbageclunk> wallyworld_: ok, I think that's the issue from reading the hubwatcher code - it waits up to 10s for the other end to accept the event, but all the lease ops queued behind it time out after 5s.
[05:09] <babbageclunk> Don't understand why the machineUnitsWatcher isn't unwatching though
[05:10] <wallyworld_> babbageclunk: is there a suggestion as to how to fix?
[05:11] <wallyworld_> manadart: would appreciate a review on https://bugs.launchpad.net/juju/+bug/1880797 to fix bug 1880797 for inclusion in 2.7.7 (customer issue)
[05:11] <mup> Bug #1880797: model migration fails from upgraded 2.6 model to 2.7 controller <juju:In Progress by wallyworld> <https://launchpad.net/bugs/1880797>
[05:12] <babbageclunk> well, the fix would be to make sure machineUnitsWatcher always unwatches the ids it watches
[05:12] <babbageclunk> ;)
[05:12] <wallyworld_> babbageclunk: i need to read the code, can you point me to the file with the watcher?
[05:14] <babbageclunk> yup yup - the watch call is at state/watcher.go:2455
[05:16] <wallyworld_> babbageclunk: HO?
[05:18] <babbageclunk> yup just getting dressed
[05:18] <wallyworld_> :-)
[05:27] <tlm> quick PR if anyone has a few seconds https://github.com/juju/juju/pull/11629
[05:57] <hpidcock> tlm: approved
[07:33] <manadart> wallyworld_: Will look in a bit.
[07:48] <wallyworld_> ty
[08:07] <elox> Is there something wrong with the charmstore? I've just released the next revision of my nextcloud charm, but when I try to deploy it, it can't seem to find it.
[08:13] <elox> https://discourse.juju.is/t/charmstore-releases-takes-a-long-time-to-appear/3123
[11:41] <stickupkid> manadart, The last piece before I tackle the machiner API https://github.com/juju/juju/pull/11631
[11:42] <manadart> stickupkid: Yep.
[12:25] <rick_h> stickupkid:  manadart either of you free to give me a demo hand with an issue please?
[12:26] <manadart> rick_h: What's up?
[12:27] <rick_h> manadart:  agent stuck in "initializing"; I restarted it but I'm blanking on how to unblock it
[12:27] <rick_h> https://pastebin.canonical.com/p/SGBHSnfQY6/
[12:28] <rick_h> might be doomed I guess https://pastebin.canonical.com/p/nWKnmXyhfK/
[12:32] <rick_h> the other unit had a different error; tried restarting, but not sure how I can kick it as it's not in an error state to retry https://pastebin.canonical.com/p/vMj23Zfp5z/
[12:37] <manadart> rick_h: Not doomed in the first case. Can the machine access the controller? When doing the fallback connect, if you get any error such as a time-out, it appears to bug out.
[12:39] <rick_h> manadart:  yes, the machine 1 is a controller machine
[12:40] <rick_h> it's a monitoring setup on a controller
[12:40] <rick_h> since it thinks it's a bad auth and unusable, I wasn't sure how to get it to retest that assumption
[12:40] <rick_h> I'm trying to get rid of the telegraf unit, but it's also not removing, as it's stuck in a bad state that Juju won't touch, it seems
[12:41] <rick_h> of course this is my controller so just blasting machines is :( as it tears the rest of the demo work down
[12:41] <manadart> rick_h: Yes, I see.
[12:45] <rick_h> ok, going to tear down. I need to get another build started asap
[12:53] <manadart> rick_h: Sorry; stumped.
[12:53] <rick_h> manadart:  yea, all good just bummed, second controller to have to diaf :/
[17:12] <rick_h> petevg:  howdy, just wanted to check in if you get time for demo sync
[17:15] <petevg> rick_h: in a meeting now ... want to sync up in fifteen minutes?
[17:18] <rick_h> petevg:  rgr
[17:30] <petevg> rick_h: I'm in the juju daily.
[17:31] <rick_h> petevg:  sorry, on calls on my end, my bad, time slipped away
[17:33] <petevg> rick_h: no worries. I'm going to go get some lunch. I've got a MicroStack running Kubernetes for you, but as I went to share the controller with you, I realized that it's all locked inside MicroStack's private little network. I'm working on rewiring said network so that you can get to it from outside the box. (Doing a test run with some local stuff first, because it's easy to kill network access, and I don't want to start
[17:33] <petevg> all over again.)
[18:31] <pmatulis> why is it that even after this command: 'juju model-default default-space=public-space' i still get deploy errors like:
[18:31] <pmatulis> << no obvious space for container "0/lxd/0", host machine has spaces: "public-space", "undefined" >>
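For pmatulis's question, a hedged aside (standard Juju 2.x CLI commands; whether they resolve this particular error depends on the host actually having a bridge on the target space, since default-space only steers new placements):

```shell
# Set the default space for new models (note the plural command name):
juju model-defaults default-space=public-space

# Or pin the specific deployment to the space explicitly,
# via a constraint or an endpoint binding:
juju deploy mysql --constraints spaces=public-space
juju deploy mysql --bind public-space
```

If the host machine only reports spaces "public-space" and "undefined", the container placement still needs a bridge on public-space to satisfy the binding.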
[19:26] <rick_h> petevg:  did you run across this at all? https://pastebin.canonical.com/p/t6nW9jykPK/ any hints on what it's saying?
[19:29] <petevg> rick_h: I did not run across that message. Interesting.
[19:30] <petevg> rick_h: I did most of my testing w/ three keystone units. I'm 99% certain that I did a final deploy w/ the one unit bundle I sent you, though ...
[19:37] <rick_h> petevg:  hmm ok, tear down and try again
[19:38] <petevg> cool cool
[19:38] <rick_h> petevg:  almost have the CMR version of the bundle going wheeee
[19:38] <petevg> Woot!
[19:38] <rick_h> though I should dress it up some more with telegraf/etc but one thing at a time
[19:44] <rick_h> petevg:  hmmm, I expect this isn't a CMR-ready application ugh
[19:49] <rick_h> ok, different tack
[20:47] <rick_h> petevg:  ok, CMR is ready. I'm going to walk the dog but will you be around in 30 to sync on the openstack/etc?
[20:47] <rick_h> petevg:  if the k8s kills it just leave it out
[20:48] <petevg> rick_h: congrats! I've got a release standup @ 17:30, but I'm around before and after then.
[20:49] <rick_h> petevg:  ok cool ty, will ping when back from the walk