[00:01] <axw> babbageclunk: happy new year. skip standup today? it'll just be us
[00:01] <axw> last week I did bugs, this week I do bugs
[00:01] <babbageclunk> axw: happy new year to you too!
[00:01] <babbageclunk> ha
[00:01] <babbageclunk> yeah, I'm fine to skip - last week I did tramping
[00:02] <axw> nice :)
[00:02] <babbageclunk> today I'm doing upgrade testing
[00:29] <thumper> axw: wondering if you could comment on https://bugs.launchpad.net/juju/+bug/1738993
[00:29] <mup> Bug #1738993: ERROR creating hosted model: model already exists <juju:New> <https://launchpad.net/bugs/1738993>
[00:29] <thumper> I'm not sure where to start
[00:29] <thumper> it is vmware I think
[00:29] <axw> okey dokey
[00:29] <thumper> thanks
[00:29] <thumper> babbageclunk: hey
[00:30] <thumper> babbageclunk: do you have some time to talk about audit logging?
[00:50] <babbageclunk> thumper: oops, just saw this now. yes, I do if you're still free!
[00:50] <babbageclunk> in 1:1?
[00:56] <thumper> babbageclunk: I'm about to chat to anastasiamac, after that?
[00:57] <babbageclunk> thumper: yup yup
[01:32] <thumper> axw: good find on the region name dots
[01:33] <axw> thanks
[01:36]  * thumper looks for babbageclunk
[01:36] <babbageclunk> thumper:
[01:36] <babbageclunk> oops hi
[01:36] <thumper> babbageclunk: 1:1 HO?
[01:37] <babbageclunk> yup
[11:12] <kjackal> Hi Juju devs. I had to prune orphaned transactions of a subordinate charm. That seems to have fixed the MongoDB issues, but now the charm is unresponsive with https://pastebin.ubuntu.com/26346175/ in its logs
[11:12] <kjackal> the agent is in failing state
[11:13] <kjackal> any ideas how to get out of this situation?
[11:13] <kjackal> I do not mind redeploying the subordinate, but I do not want to kill the principal charm
[19:25] <thumper> kjackal: you could try 'juju resolved --no-retry'
[19:25] <thumper> to get the unit out of the error state
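The suggestion above can be sketched as follows; the unit name is hypothetical (substitute the actual failing subordinate), and `--no-retry` marks the failed hook as resolved without re-running it:

```shell
# Check which unit is in trouble (unit name below is a placeholder).
juju status

# Mark the failed hook resolved without retrying it.
juju resolved --no-retry telegraf/0
```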
[19:26] <kjackal> thumper: the unit is not in error state, it is blocked and the agent is failing
[19:26] <kjackal> let me try it
[19:26] <thumper> hmm..
[19:27] <thumper> how did it get into this state?
[19:27] <kjackal> https://pastebin.ubuntu.com/26348573/
[19:28] <kjackal> There was a corruption in MongoDB. At that point all operations were failing, so I tried to prune orphaned transactions
[19:29] <kjackal> and this is where we are now
[19:29] <kjackal> Do not want to redeploy the prometheus-gateway since it hosts all these other subordinates
[19:29] <kjackal> If I could get rid of the failing subordinate, that would be great
[19:34] <kjackal> thumper: could I go in the node and stop some of the running services so as to disable the failing agent?
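Stopping the failing agent on the machine could look something like this; the unit name is hypothetical, and this assumes a systemd-based machine where Juju unit agents run as `jujud-unit-<unit>-<n>` services (older setups may use upstart instead):

```shell
# List the Juju agent services running on this machine.
sudo systemctl list-units 'jujud-*'

# Stop only the failing subordinate's agent, leaving the
# principal's agent (and other subordinates) running.
sudo systemctl stop jujud-unit-telegraf-0
```

Note that stopping the agent only silences the retry loop; it does not remove the unit from the model.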
[20:03] <thumper> kjackal: sorry, had a call
[20:05] <thumper> kjackal: I *think* we could fix it by editing a file on disk on that agent
[20:05] <thumper> effectively it seems that it is stuck in a loop, retrying, but not able to make progress due to the database not being in a state that matches the unit
[20:05] <thumper> the uniter on the machine has an FSM for working out what to do
[20:06] <thumper> I think we might be able to nudge that