[00:59] <hpidcock> wallyworld ^
[01:20] <wallyworld> hpidcock: looking
[01:20] <wallyworld> sorry, was distracted
[01:21] <wallyworld> done
[01:21] <hpidcock> wallyworld: thanks
[01:21] <wallyworld> you're welcome
[01:22] <hpidcock> wallyworld: missed this one https://github.com/juju/lumberjack/pull/3
[01:23] <wallyworld> done
[01:42] <hpidcock> wallyworld: can you create a v2 branch from master on https://github.com/juju/mgo and push it please. I don't have write access
[01:43] <wallyworld> ok
[01:43] <hpidcock> thank you
[01:45] <wallyworld> hpidcock: done
[01:45] <hpidcock> awesome
[01:45] <hpidcock> thanks
[01:51] <thumper> wallyworld: are you looking at the release blocker or shall i?
[01:52] <wallyworld> thumper: babbageclunk is. and i don't agree it's a release blocker. 1. not a regression, 2. TBA not juju
[01:52] <thumper> wallyworld: I may take a look at logs and sync with babbageclunk then
[01:52] <babbageclunk> yup yup
[01:52] <wallyworld> sgtm, ty
[01:53] <babbageclunk> I can see where everything starts going wrong but nothing that indicates why - just seems like the disk stops
[01:53] <babbageclunk> would be good if crashdump captured engine reports and goroutines
[01:58]  * thumper crosses fingers
[02:02] <thumper> babbageclunk: can we jump in a HO to discuss this bug?
[02:02] <babbageclunk> yup yup
[02:42] <timClicks> how do charm (frameworks) access the new juju persistent storage available between hook executions?
[02:52] <wallyworld> timClicks: no idea, john would know
[03:32] <thumper> timClicks: StoredState object
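(For anyone following along: StoredState is the charm framework object that persists small amounts of data between hook executions. A minimal stdlib-only sketch of the idea, where the class name, file path, and JSON persistence are illustrative assumptions rather than the framework's actual mechanism:)

```python
import json
import os
import tempfile


class HookState:
    """Toy model of charm StoredState: a dict persisted between hook runs.

    Illustrative only; the real framework persists StoredState through
    the charm's state backend, not a bare JSON file.
    """

    def __init__(self, path):
        self._path = path
        self._data = {}
        if os.path.exists(path):
            with open(path) as f:
                self._data = json.load(f)

    def set_default(self, **defaults):
        # Only seed keys that no previous hook execution has stored.
        for key, value in defaults.items():
            self._data.setdefault(key, value)
        self._save()

    def __getitem__(self, key):
        return self._data[key]

    def __setitem__(self, key, value):
        self._data[key] = value
        self._save()

    def _save(self):
        with open(self._path, "w") as f:
            json.dump(self._data, f)


# One "hook" execution bumps a counter; re-opening the same path
# simulates the next hook execution seeing the stored value.
path = os.path.join(tempfile.mkdtemp(), "state.json")
first_hook = HookState(path)
first_hook.set_default(count=0)
first_hook["count"] = first_hook["count"] + 1

next_hook = HookState(path)
```

The point of the sketch is only that state written in one hook invocation is visible in the next, which is what the real StoredState provides.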
[03:33] <thumper> wallyworld: can you join me and babbageclunk?
[03:33] <wallyworld> sure
[03:33] <wallyworld> where?
[03:33] <thumper> https://meet.google.com/rzk-sipb-yna
[05:31] <babbageclunk> thumper: while writing stuff up I got very confused about the "fortress worker shutting down" errors - why are we seeing those in the log?
[05:31] <thumper> that is because the controller is no longer responsible for the model
[05:32] <thumper> the fortress worker just happened to be the worker whose stop caused all the others to stop too
[05:32] <thumper> that is what my thinking is
[05:32] <thumper> because the model workers don't stop all the workers
[05:32] <thumper> this is a different issue
[05:32] <thumper> most are based around migration
[05:33] <thumper> and the migration is based around if responsible
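(thumper's explanation above can be sketched as a toy dependency engine: when one worker stops, everything that depends on it is torn down too. The worker names and the cascade function here are simplified assumptions, not juju's actual dependency engine:)

```python
class Worker:
    """Minimal model of a dependency-engine worker (illustrative only;
    juju's real engine is considerably more involved)."""

    def __init__(self, name, depends_on=()):
        self.name = name
        self.depends_on = list(depends_on)
        self.running = True


def stop(worker, all_workers, reason):
    """Stop a worker and cascade the stop to everything depending on it."""
    if not worker.running:
        return
    worker.running = False
    print(f"{worker.name} stopped: {reason}")
    for other in all_workers:
        if worker.name in other.depends_on and other.running:
            stop(other, all_workers, f"dependency {worker.name} stopped")


fortress = Worker("fortress")
migration = Worker("migration-minion", depends_on=["fortress"])
uniter = Worker("uniter", depends_on=["migration-minion"])
workers = [fortress, migration, uniter]

# When the controller is no longer responsible for the model, the
# fortress worker stops, and the stop propagates to its dependents.
stop(fortress, workers, "no longer responsible for model")
```

This is why the log fills with "fortress worker shutting down" errors even though the fortress is not itself the root cause.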
[05:40] <babbageclunk> thumper: but the lease can only expire if the clock updater can put time updates through, right?
[05:40] <thumper> I don't know
[05:43] <babbageclunk> Oh no, the isResponsible worker would crash if it got a timeout trying to extend the lease, so I can see that happening.
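(The failure mode babbageclunk describes can be modelled roughly like this: a lease-holding loop treats any timeout while extending the lease as fatal, so the worker crashes and its engine restarts it rather than letting it keep running with a possibly-expired lease. The function names and error strings are assumptions for illustration, not juju's lease manager:)

```python
class LeaseTimeout(Exception):
    """Raised when a lease-extension request does not complete in time."""


def hold_lease(extend, max_extensions):
    """Toy lease-holding loop (an illustrative model only): any failure
    to extend the lease is treated as fatal so the worker dies instead
    of continuing without a guaranteed claim."""
    for _ in range(max_extensions):
        try:
            extend()
        except LeaseTimeout:
            # Crashing here is deliberate: better to stop acting as
            # "responsible" than to run on with an expired lease.
            raise RuntimeError("lease lost: extension timed out")
    return "released cleanly"


# Simulate a stalled clock updater: the third extension times out.
calls = {"n": 0}


def flaky_extend():
    calls["n"] += 1
    if calls["n"] == 3:
        raise LeaseTimeout()


try:
    outcome = hold_lease(flaky_extend, max_extensions=5)
except RuntimeError as err:
    outcome = str(err)
```

In this model a single timed-out extension is enough to take the worker down, matching the "isResponsible worker crashes on timeout" observation.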
[05:43] <thumper> babbageclunk: http://paste.ubuntu.com/p/F73c29DVcB/ and https://paste.ubuntu.com/p/rVycBmX8PF/
[05:44] <babbageclunk> oh nice!
[05:44] <thumper> although the report showing "agent: true" is weird
[05:45] <thumper> it should be the other unit name
[05:45] <thumper> however, basics working
[08:33] <stickupkid> manadart, are we going to add spaces cmds to pylibjuju?
[08:36] <manadart> stickupkid: I think, yes.
[08:57] <manadart> stickupkid: https://github.com/juju/juju/pull/11466
[09:00] <stickupkid> manadart, k, will check now... it seems strange that calling SetCharmURL isn't a requirement of the logic further down, i.e. it does seem like flaky logic for it to work this way
[09:04] <stickupkid> I need to back port my github action fix to 2.7
[09:10] <manadart> stickupkid: It's because in-situ, the steps happen independently. It is the local uniter that sets the charm URL at deploy time.
[10:06] <stickupkid> manadart, https://github.com/juju/juju/pull/11468
[10:08] <manadart> stickupkid: Approved. When it lands, I will add to https://github.com/juju/juju/pull/11467.
[10:08] <stickupkid> manadart, ah, nice cheers
[10:38] <manadart> stickupkid: Quick HO?
[10:38] <stickupkid> not got any video, but sure
[14:12] <stickupkid> hml, I fixed the tmp directory last month actually -> https://github.com/juju/juju/commit/2912ff39fab9ecd9a0de211cede78a93d35153c4
[14:45] <manadart> stickupkid or hml: https://github.com/juju/juju/pull/11470
[15:09] <stickupkid> manadart, I think github have tightened their restrictions on what can run inside a container now
[15:09] <manadart> stickupkid: *Sigh*
[15:10] <stickupkid> manadart, I've fixed the snapcraft github action and it's also reporting the same issue https://github.com/juju/juju/pull/11469
[16:12] <stickupkid> manadart, hml CR at some point -> https://github.com/juju/juju/pull/11412
[16:12] <stickupkid> this one takes time :(
[16:29] <hml> stickupkid: hrmm… wonder why i hit it last week then.  will have to investigate on my end
[21:28] <babbageclunk> thumper: you were talking about going through mongo bugs for #1873482 - anything useful? Should I keep doing that?
[21:28] <mup> Bug #1873482: [2.7.6] Controller reporting model dropped when no action taken on model <cdo-release-blocker> <juju:New> <https://launchpad.net/bugs/1873482>
[21:29] <thumper> I didn't find anything, but there were heaps
[21:30] <babbageclunk> or is it enough to say that since we don't think this is new, it probably shouldn't be a blocker?
[21:30] <thumper> babbageclunk: can you join the release call?
[21:30] <babbageclunk> sure
[23:00] <tlm> wallyworld, kelvinliu do we make a service account or rbac permissions for a controller to talk to kubernetes ?
[23:02] <kelvinliu> tlm: yes, the controller's credential is a RBAC SA
[23:02] <tlm> where does that SA get created because I am not seeing it
[23:02] <kelvinliu> during bootstrapping time
[23:03] <tlm> hmmm funny feeling the code isn't working
[23:03] <tlm> got a rough idea where I should look ?
[23:04] <thumper> babbageclunk: bug 1873482, is it in progress or won't fix?
[23:04] <kelvinliu> why isn't it working? the rbac stuff is in the kube-system namespace
[23:04] <mup> Bug #1873482: [2.7.6] Controller reporting model dropped when no action taken on model <cdo-release-blocker> <juju:Triaged by 2-xtian> <https://launchpad.net/bugs/1873482>
[23:04] <babbageclunk> thumper: I'm not sure - it doesn't seem like it should be wontfix?
[23:05] <kelvinliu> tlm: caas/kubernetes/clientconfig/
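(One quick way to check kelvinliu's claim is to list the service accounts in kube-system and look for the controller's. A small sketch that parses `kubectl get serviceaccounts -n kube-system -o json` output; the `juju-` name prefix and the sample SA name are hypothetical assumptions, since the exact names are created by juju at bootstrap:)

```python
import json


def find_service_accounts(kubectl_json, prefix):
    """Return service-account names starting with `prefix` from the JSON
    output of `kubectl get serviceaccounts -n kube-system -o json`."""
    doc = json.loads(kubectl_json)
    return [
        item["metadata"]["name"]
        for item in doc.get("items", [])
        if item["metadata"]["name"].startswith(prefix)
    ]


# Sample output, trimmed to the fields the helper actually reads.
# "juju-controller-sa" is a made-up name for illustration.
sample = json.dumps({
    "items": [
        {"metadata": {"name": "default"}},
        {"metadata": {"name": "juju-controller-sa"}},
    ]
})
matches = find_service_accounts(sample, "juju-")
```

If the list comes back empty against a real cluster, that would support tlm's suspicion that the bootstrap code isn't creating the SA.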
[23:05] <tlm> ta
[23:11] <kelvinliu> nws