[00:54] <wallyworld> babbageclunk: it seems like that maas test is running but just takes a long time, comparing the output of a successful run to the failed one. maybe just increase the timeout, and take a look at finfolk to see if stuff needs cleaning up etc
[00:54] <babbageclunk> wallyworld: yeah, will do
[01:13] <timClicks> is this the current release template? https://discourse.jujucharms.com/t/juju-release-template/1506
[01:14] <timClicks> I would like to add a check that if a config (app/model) is added, that the docs are updated to suit
[01:21] <wallyworld> timClicks: release template is now a google doc
[01:21] <wallyworld> on the shared drive
[01:21] <timClicks> that's what I thought, which is why I asked
[01:21] <wallyworld> thumper: there's a race in the model cache, got a minute to talk about it?
[01:21] <thumper> wallyworld: yep
[01:21] <wallyworld> let's go to 1:1
[01:22] <thumper> ack
[01:57] <thumper> wallyworld: https://github.com/juju/juju/pull/11171
[02:00] <thumper> wallyworld: what does the names package use from utils?
[02:00] <wallyworld> i'd need to check again, one sec
[02:01] <wallyworld> thumper: IsValidUUIDString() for one
[02:02] <thumper> ugh
[02:02] <thumper> I've been quietly trying to kill the utils package for some time
[02:02] <wallyworld> also, MustNewUUID()
[02:02] <thumper> make smaller targeted packages that fit
[02:03] <wallyworld> i think heather is doing some work in that area
[02:03] <thumper> perhaps we should make a juju/uuid package for those
[02:03] <wallyworld> +1
[02:03] <wallyworld> heather has a PR up to take another step
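A minimal sketch of what the proposed juju/uuid package could look like if it only needs to carry the two helpers mentioned above; the package layout, signatures, and implementation here are assumptions for illustration, not the real juju/utils code:

    // Package uuid is a hypothetical standalone home for the UUID helpers
    // currently living in juju/utils.
    package uuid

    import (
        "crypto/rand"
        "fmt"
        "regexp"
    )

    // UUID holds a 128-bit (version 4) UUID.
    type UUID [16]byte

    var validUUID = regexp.MustCompile(
        `^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$`)

    // IsValidUUIDString reports whether s is a well-formed UUID string.
    func IsValidUUIDString(s string) bool {
        return validUUID.MatchString(s)
    }

    // NewUUID returns a random version 4 UUID.
    func NewUUID() (UUID, error) {
        var u UUID
        if _, err := rand.Read(u[:]); err != nil {
            return UUID{}, err
        }
        u[6] = (u[6] & 0x0f) | 0x40 // version 4
        u[8] = (u[8] & 0x3f) | 0x80 // RFC 4122 variant
        return u, nil
    }

    // MustNewUUID is like NewUUID but panics on failure.
    func MustNewUUID() UUID {
        u, err := NewUUID()
        if err != nil {
            panic(err)
        }
        return u
    }

    // String renders the UUID in canonical hyphenated form.
    func (u UUID) String() string {
        return fmt.Sprintf("%x-%x-%x-%x-%x", u[0:4], u[4:6], u[6:8], u[8:10], u[10:16])
    }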
[02:42] <thumper> wallyworld, babbageclunk: https://github.com/juju/juju/pull/11171
[02:42] <thumper> or anyone really...
[02:42] <babbageclunk> thumper: looking
[02:43] <hpidcock> wallyworld: did you say you did some snap work already for go1.13?
[02:44] <wallyworld> hpidcock: there's a PR with my juju snapcraft changes
[02:45] <wallyworld> https://github.com/juju/juju/pull/11138
[02:45] <hpidcock> wallyworld: thx
[04:41] <wallyworld> tlm: do you think we'll get the watcher stuff landed today?
[04:42] <tlm> wallyworld, hpidcock HO for some rubber ducking ?
[04:42] <wallyworld> tlm: just finishing a meeting, can be there soon
[04:42] <tlm> k
[04:56] <tlm> wallyworld: nvm, hpidcock helped me sort it out.
[04:56] <tlm> I think I am on the last failing test now so should be soonish
[04:56] <wallyworld> tlm: gr8 ok, sorry still in meeting
[04:57] <tlm> np
[05:22] <wallyworld> tlm: still all good?
[05:23] <tlm> wallyworld: yep, giving it a full run
[05:23] <tlm> sent DM
[05:23] <wallyworld> all good, i can review when ready
[06:32] <hpidcock> wallyworld: can you remember if you ever did a remote-build using your changes?
[06:33] <hpidcock> it's taking an eternity to upload to lp for me. Not sure if you had a similar experience?
[06:36] <tlm> are you sure it's still working? I had similar issues but it had failed and gone into an endless loop
[06:36] <tlm> not saying this is the case now hpidcock, just an FYI
[06:38] <hpidcock> could just be slow upload
[06:38] <hpidcock> i dunno the git repo has nothing in it
[06:41] <tlm> restart your computer 3 times ?
[06:43] <hpidcock> Three shall be the number thou shalt count, and the number of the counting shall be three.
[06:44] <tlm> the number of pringles in a stack
[06:45] <wallyworld> hpidcock: i did do a remote build to test it
[06:45] <wallyworld> it took a little while but not too long
[06:46] <hpidcock> must be my Australian internet
[06:46] <wallyworld> maybe try running on a jenkins slave?
[06:48] <hpidcock> alright, for some reason it is uploading 10gb
[06:48] <hpidcock> that might be my problem
[06:48] <wallyworld> wow, i don't recall that much but didn't check too closely
[06:49] <wallyworld> i would hope it's not doing the entire git repo
[06:49] <hpidcock> I think it has pulled in my .git folder
[06:49] <wallyworld> it just would need the source files
[06:50] <wallyworld> if it's pulled in .git i'd raise a bug and see what sergio says
[06:51] <wallyworld> we can avoid that in the build scripts
[06:51] <tlm> wallyworld, hpidcock PR pushed, no doubt it still has some problems
[06:51] <wallyworld> looking
[06:53] <wallyworld> tlm: you'll need to address the existing comments
[06:53] <tlm> already on it :)
[07:13] <wallyworld> tlm: there's still a bunch of import grouping issues
[07:13] <tlm> yeah I put a comment on one of them
[07:13] <tlm> not sure what you were chasing vs what is there?
[07:14] <wallyworld> there's 3 blocks we use: std lib, 3rd party libs, juju/juju
[07:15] <wallyworld> so see operator.go - it has juju/juju stuff mixed in with k8s.io stuff and juju/errors etc
[07:15] <wallyworld> same with namespaces.go
[07:18] <tlm> so we consider juju/errors external for this purpose ?
[07:19] <wallyworld> yup
[07:19] <tlm> ah no worries makes sense now
[07:19] <wallyworld> anything not from juju/juju is 3rd party
[07:19] <wallyworld> confusing for sure
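To illustrate the convention being described — standard library first, then third-party (anything outside github.com/juju/juju, including juju/errors and k8s.io), then juju/juju itself — a hypothetical import block might look like the following; the specific imports are only examples, not taken from the PR:

    import (
        // Standard library.
        "fmt"
        "time"

        // Third party: everything not under github.com/juju/juju.
        "github.com/juju/errors"
        core "k8s.io/api/core/v1"

        // juju/juju itself.
        "github.com/juju/juju/core/watcher"
    )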
[07:20] <wallyworld> tlm: to allow parallel work, i submitted a couple of comments - main thing is we already have a mock notify watcher that can be used
[07:20] <wallyworld> so no need to create a new one for k8s testing
[07:21] <wallyworld> i included a reference to the code in the comment
[07:21] <tlm> ta looking now
[07:32] <wallyworld> tlm: i'm trying to get my head around the new k8sWatcherFn on the test suite. do we need it instead of what was there before? also, there's maybe one snafu in there?
[07:32] <wallyworld> if s.k8sWatcherFn != nil {
[07:32] <wallyworld>     w, err = s.k8sWatcherFn(informer, name, clock)
[07:32] <wallyworld> }
[07:32] <wallyworld> w, err = provider.NewKubernetesNotifyWatcher(informer, name, clock)
[07:32] <wallyworld> ...
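The snafu being pointed out is that the second assignment overwrites whatever the test hook returned. Assuming the intent is to use the real constructor only when no hook is set, a guarded version (reusing the names from the pasted snippet, which are specific to that PR) might look like:

    if s.k8sWatcherFn != nil {
        w, err = s.k8sWatcherFn(informer, name, clock)
    } else {
        // Only fall back to the real watcher when no test hook is set,
        // rather than unconditionally overwriting the hook's result.
        w, err = provider.NewKubernetesNotifyWatcher(informer, name, clock)
    }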
[07:33] <wallyworld> i think we can land this tomorrow and do the stringswatcher also
[07:33] <tlm> I don't mind, almost got it swapped out for the notify one
[07:34] <wallyworld> ok
[07:34] <tlm> want me to HO to explain the above ?
[07:34] <wallyworld> sure
[09:25] <nammn_de> manadart: if you have some time later, even though it's CI day, do you want to go over 2 of your comments on this PR? https://github.com/juju/juju/pull/11143 I want to make sure I understand what is meant
[09:48] <manadart> nammn_de: OK, if not today, then tomorrow.
[09:49] <nammn_de> manadart: sure, just give me a ping if you can
[10:21] <nammn_de> stickupkid: https://github.com/juju/juju/pull/11173 regards running focal with force
[10:23] <stickupkid> nammn_de, i think this needs to target 2.7
[10:23] <nammn_de> stickupkid: oh yes
[10:31] <nammn_de> stickupkid: https://github.com/juju/juju/pull/11174
[10:33] <stickupkid> nammn_de, lovely
[10:34] <nammn_de> stickupkid: ta!
[12:49] <nammn_de> wow, it feels like we haven't forward-ported 2.7 to develop in ages. I am currently processing all those merge conflicts...
[13:41] <skay> I have a newbie postgresql charm question. by default postgres charm stores backups in /var/lib/postgresql/backups. when using storage, pgdata is /var/lib/postgresql/10/main. is it safe to store the backups folder there instead so that it is saved in the attached storage?
[13:42] <skay> when I've looked at how someone else is configuring this, they just used the defaults
[13:56] <nammn_de> manadart: https://github.com/juju/juju/pull/11177 the one I closed and already mostly resolved. Maybe it's of help to you.
[15:02] <stub> skay: Better to put your backups dir in /srv/pgdata. /srv/pgdata is the Juju storage mount. /var/lib/postgresql/10/main should be a symlink to /srv/pgdata/10/main IIRC
[15:20] <manadart> nammn_de: Superseding patch: https://github.com/juju/juju/pull/11178
[15:26] <nammn_de> manadart: approved!
[15:26] <manadart> nammn_de: Thanks.
[15:33] <skay> stub: ack
[15:39] <skay> stub: if I understand the source, when storage is detached everything is destroyed. at that point the unit is probably being destroyed so it doesn't matter. do I understand that correctly?
[15:40] <skay> I'm just really paranoid about backups and want to be certain
[15:40] <skay> (by doesn't matter, I mean, I know the destruction is going on and I have backups saved elsewhere)
[16:34] <nammn_de> stickupkid: https://github.com/juju/juju/pull/11179
[16:35] <nammn_de> stickupkid: tested it all, now bootstraps fine for me locally.
[16:35] <stickupkid> nammn_de, wicked
[16:35] <stickupkid> approved
[16:36] <nammn_de> stickupkid: ta!
[17:08] <hml> rick_h:  any issues with canonical irc?
[17:09] <hml> stickupkid: i’m thinking microk8s won’t work in an lxc container. i get weird results, like enable storage works but it never reaches a running status
[17:10] <hml> stickupkid: i can also start a pod that never completes
[17:20] <nammn_de> stickupkid and are having issues with irc as well
[17:20] <nammn_de> *s/are/I
[17:21] <stickupkid> nammn_de, i reconnected and it's fine now
[17:56] <stickupkid> nammn_de, it failed
[17:56] <stickupkid> nammn_de, we need to work out why that failed
[17:57] <nammn_de> stickupkid: meh, I am so sad. Maybe I did the lxd configuration wrong?
[17:58] <stickupkid> rick_h, first attempt bootstrapping focal on our CI, it works locally, but fails in CI
[17:58] <stickupkid> rick_h, checking out why now
[17:59] <nammn_de> but I just followed the instructions... I need to leave now. Gonna look at this later or next week.
[17:59] <nammn_de> stickupkid: if you have some updates I am more than happy to read them =D
[17:59] <stickupkid> nammn_de, sure
[17:59] <stickupkid> ah, so it can't ssh in
[18:02] <stickupkid> rick_h, you got a sec?
[18:13] <rick_h> stickupkid:  sure thing if you're free
[18:13] <stickupkid> daily
[18:13] <rick_h> omw
[19:34] <thumper> morning team
[19:34] <hml> morning thumper
[19:36] <rick_h> morning thumper