[00:47] <wallyworld> babbageclunk: sorry! cut you off
[00:55] <veebers> babbageclunk: exciting!
[00:56] <babbageclunk> wallyworld: No worries!
[00:56] <babbageclunk> I don't think it was anything exciting.
[00:57] <babbageclunk> veebers: i know right!
[01:01] <veebers> babbageclunk: hopefully everything was in order
[01:02] <babbageclunk> veebers: yup, it looked better than I remembered it!
[01:05] <veebers> that's a good sign!
[06:35] <wallyworld> axw: should i hold off reviewing the raft cluster pr pending changes to use the (single) address as the node id? I think that's what we agreed right?
[06:37] <axw> wallyworld: yeah, and the rafttransport PR I guess
[06:37] <wallyworld> yep
[06:40] <axw> wallyworld: thanks for the review. can you please see my reply before I merge
[06:40] <wallyworld> sure
[06:41] <wallyworld> axw: ok, no worries, lgtm
[06:41] <axw> thanks
[16:56] <balloons> hml, do you know who is working on retooling backup / restore (if anyone)?
[19:16] <balloons> vern, I'm still not sure who's working on backup / restore fixes, if anyone atm fyi
[19:17] <vern> hml was doing some work on backup/restore. not sure if it was just bug-fixes
[19:18] <hml> vern: agprado and i are going to take a look at backup/restore shortly - some bug fixes but review overall
[19:18] <hml> vern: is there somewhere we need to look?
[19:19] <vern> hml: I'm going to write/extend the qa test. want to be sure to have a good plan and to anticipate anything that isn't working right
[19:20] <hml> vern: okay - i’ll have more info next week
[19:36] <balloons> hml, specifically I think the plan was to simplify things right? Or will you focus on bugfixes?
[19:38] <hml> balloons: yes - simplify if we can and try to improve the experience
[19:47] <balloons> vern, try and prep your thoughts on the existing state of the test for standup if you could.
[19:48] <vern> will do
[22:51] <rogpeppe1> anastasiamac: hiya! thanks for working on https://bugs.launchpad.net/juju/+bug/1745231 so quickly! I just sent a reply to that issue.
[22:51] <mup> Bug #1745231: removed model can cause allmodelwatcher to die permanently <juju:In Progress by anastasia-macmood> <juju 2.3:In Progress by anastasia-macmood> <https://launchpad.net/bugs/1745231>
[22:58] <anastasiamac> rogpeppe1: nws - thnx for looking into it :)
[22:58] <rogpeppe1> anastasiamac: it's been bugging us for ages!
[22:59] <rogpeppe1> anastasiamac: luckily the last time it happened i got them to set the logging level to INFO
[22:59] <rogpeppe1> anastasiamac: so i finally saw the problem
[22:59] <anastasiamac> rogpeppe1: yeah, it has been a hard road... m glad u were patient and insightful
[23:00] <rogpeppe1> anastasiamac: BTW i think it would be nice if more places that logged errors logged the error details too
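The suggestion above (log the underlying error, not just a summary) could look like the following Go sketch; the message text, function names, and error chain here are invented for illustration and are not Juju code:

```go
package main

import (
	"errors"
	"fmt"
	"log"
)

// failingWatch is a stand-in for an operation whose failure wraps
// an underlying cause (hypothetical, not Juju code).
func failingWatch() error {
	cause := errors.New("model no longer exists")
	return fmt.Errorf("watcher stopped: %w", cause)
}

// describe builds the log message including the full error chain,
// so the cause survives into the logs.
func describe(err error) string {
	return fmt.Sprintf("allmodelwatcher died: %v", err)
}

func main() {
	if err := failingWatch(); err != nil {
		// Logging only a summary hides the cause:
		//   log.Print("allmodelwatcher died")
		// Including the error keeps the detail that makes an
		// INFO-level log useful for diagnosis:
		log.Print(describe(err))
	}
}
```

With `%w` wrapping, the logged line carries the whole chain ("watcher stopped: model no longer exists") rather than just the top-level summary.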
[23:00] <rogpeppe1> anastasiamac: np, it was useful to have got there in the end :)
[23:01] <rogpeppe1> anastasiamac: to get a timely notification when the problem started happening i built a nice prometheus-monitoring tool that shows "jimm: ok" at the top of my screen (in the indicator bar).
[23:01] <rogpeppe1> anastasiamac: i'm sure i can do more interesting things with that :)
[23:02] <anastasiamac> rogpeppe1: that sounds amazing \o/ anything that can be shared with us so that we can keep an eye on it too?
[23:03] <rogpeppe1> anastasiamac: well, indicator-sysmonitor makes it easy to put custom stuff on the indicator panel
[23:04] <rogpeppe1> anastasiamac: and the little localhost server to monitor prometheus is this code: http://paste.ubuntu.com/26454773/
[23:05] <anastasiamac> rogpeppe1: niiice
[23:05] <rogpeppe1> anastasiamac: then the script called by indicator-sysmonitor is just (more-or-less) bhttp http://localhost:12468/status
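A tiny localhost status server in the spirit of the one described above might be sketched in Go like this. The port (12468) and the "jimm: ok" text come from the conversation; the health-check URL is a guess (the stock Prometheus `/-/healthy` endpoint), and the real code is in the paste linked above, so treat this as a sketch only:

```go
package main

import (
	"fmt"
	"net/http"
)

// statusLine formats the one-line status shown in the indicator
// bar: "jimm: ok" when the target looks healthy, "jimm: FAIL" otherwise.
func statusLine(name string, healthy bool) string {
	if healthy {
		return name + ": ok"
	}
	return name + ": FAIL"
}

// isHealthy reports whether the given URL answers with HTTP 200.
// (A guessed health check; the real tool queries Prometheus.)
func isHealthy(url string) bool {
	resp, err := http.Get(url)
	if err != nil {
		return false
	}
	resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	// indicator-sysmonitor polls this endpoint (the chat mentions
	// http://localhost:12468/status) and displays the response.
	http.HandleFunc("/status", func(w http.ResponseWriter, r *http.Request) {
		// http://localhost:9090/-/healthy is Prometheus's own
		// health endpoint; substitute whatever the monitored
		// service exposes.
		fmt.Fprintln(w, statusLine("jimm", isHealthy("http://localhost:9090/-/healthy")))
	})
	http.ListenAndServe("localhost:12468", nil)
}
```

The indicator-side script then only needs to fetch `/status` and print the body, which is what the one-liner in the chat does.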
[23:10] <anastasiamac> rogpeppe1: thnx !