[06:10] morning
[06:12] morning
[06:39] good morning mborzecki
[07:17] hey mvo
[07:18] hey pstolowski
[07:30] good morning
[09:13] hmmm ` 2021-07-29 08:31:10 Cannot allocate google:arch-linux-64: google: error getting credentials using well-known file (/home/ubuntu/.config/gcloud/application_default_credentials.json): open /home/ubuntu/.config/gcloud/application_default_credentials.json: permission denied`
[09:32] mborzecki: I pinged cacio about this, looks like our vms are unhappy
[09:35] hey mvo
[09:35] not a happy day
[09:41] zyga: indeed
[09:49] hi, I'm hitting this issue: https://bugs.launchpad.net/snapd/+bug/1938321. could anyone confirm if this is a change in snapd logic?
[09:50] ack: could you please add the snapd version to this report?
[09:51] ack: that triggered this issue? we had a regression in edge that looks similar but we reverted this I think (but not fully on top of this)
[09:54] mvo, done. it happens with latest/stable too
[09:54] ack: uh, ok. that is disturbing
[09:55] mvo I guess my first question is, is it correct to expect that the service is "enabled inactive" during the upgrade unless the user has previously run "snap stop --disable maas" ?
[09:55] pstolowski, ^
[09:55] we're currently checking for "disabled" because we don't want to start the service (which we need to, to be able to upgrade) if the user explicitly disabled it
[09:55] ack: we did some changes in this area indeed, mostly led by mardy but he is off today; pawel should also have some insights here
[09:55] ok, thanks
[10:10] ack: during refresh we stop all services but do not disable them, then start them again for the new revision of the snap - except for those that were explicitly disabled by the user; this logic shouldn't have changed and has been like that for a very long time. but i need to check what snapctl reports per your bug report
[10:50] pstolowski, so I don't understand how snapd reports the service as disabled. we don't disable it either
[10:53] ack: yes, i'm looking at it; do you still have this affected system around?
[11:01] pstolowski, yes
[11:01] it's very easy to reproduce
[11:02] ack: great, could you please attach systemctl status output for the respective service(s), before running refresh?
[11:03] ack: can i reproduce it by using maas from different channels?
[11:03] pstolowski, yes
[11:03] ok trying
[11:03] pstolowski, "snap install maas --channel=2.7; maas init --mode all; snap refresh maas --channel=2.8" basically
[11:04] pstolowski, https://paste.ubuntu.com/p/VFDn3rw8JV/
[11:04] and
[11:04] root@f:~# snap services maas
[11:04] Service          Startup  Current  Notes
[11:04] maas.supervisor  enabled  active   -
[11:05] but it's "disabled" during the refresh hook
[11:23] ack: yes, i can see that, reproduced, i'm investigating, unclear yet why
[11:24] pstolowski, great, thanks
[11:37] pstolowski, please feel free to comment on that bug if you find more info
[11:44] ack: sure
[11:44] ack: btw did you have this hook logic always like this?
[11:44] pstolowski, yeah, for a long time
[11:45] why?
[11:46] ack: just confused about what, if anything, changed/regressed on our side
[11:46] we haven't touched that stuff in a long time, 2.8 has been around for over a year
[11:48] pstolowski, I have similar logic in a snap of mine, fwiw https://github.com/albertodonato/h2static/blob/main/snap/hooks/configure#L3
[11:52] ack: got it
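For context, a minimal sketch (not the actual MAAS or h2static hook) of the kind of startup-state check being discussed, assuming a service named maas.supervisor and assuming `snapctl services` prints the same Startup/Current columns as the `snap services` output quoted above:

    #!/bin/sh
    # Illustrative only: skip (re)starting the service if the user explicitly
    # disabled it, otherwise start it so the upgrade steps can run against it.
    if snapctl services maas.supervisor | grep -q disabled; then
        echo "maas.supervisor was disabled by the user, leaving it stopped"
    else
        snapctl start maas.supervisor
    fi

The bug being reported is that, during the refresh hook, this kind of check sees "disabled" even though the user never disabled the service.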
[12:13] ack: ok i think i know what's going on and why it's like that, but again, it's been working like this for ages; i'll explain in the bug report in a moment. is it possible you've never hit the "all" case before in your hook during the refresh? I see the logic is shared with the install hook, but in the install hook it's not possible to hit this case afaict
[12:14] ack: as for your private snap, the configure hook is a different case
[12:14] yeah it's only on refresh
[12:14] but I'm quite sure we tested it back then. maybe it's been broken for a while and no one noticed, but it was working at the time we implemented/tested it?
[12:15] is it something you can fix, or should we find a workaround?
[12:15] I guess we could drop that check in the hook if needed
[12:15] ack: i'll describe it in the bug report and then we will see, maybe others have an opinion
[12:16] pstolowski, ok, thanks for investigating
[12:42] ack: updated the bug report
[12:43] pstolowski, thanks
[13:18] ack: i wonder if you could do your check in the pre-refresh hook (but it runs against the current revision of the snap you're updating from); pre-refresh runs before stopping the services
[13:33] * pstolowski lunch
[13:41] https://snapcraft.io/tiled doesn't work on ubuntu 21.04
[13:46] pstolowski, yeah maybe we could, but we'd have to do something like touch a file for the post-refresh one to check. it's actually easier to remove that check, since it should be pretty rare to be in that case where you have "all" mode and disabled the service
[14:14] re
[16:47] dust: the tiled snap seems to launch fine on my 21.04 machine
[16:55] stgraber, hi, is it a known issue that focal containers on fedora 34 workstation hosts do not get ipv4 network at all (only link-local ipv6)
[16:56] zyga: hmm, no, is it systemd being sad? (`systemctl --failed` in the container)
[16:57] stgraber, I will check and get back to you
[16:57] stgraber, I suspected cgroupv2 might be a factor but I could not pinpoint any real failure
[17:21] ijohnson[m], snap.tiled.tiled.773a1c95-e5f3-4652-bf0b-02bda9c62027.scope: Succeeded.
[17:21] The unit UNIT has successfully entered the 'dead' state.
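A minimal sketch of the pre-refresh stamp-file idea floated at 13:18/13:46 above, again assuming a hypothetical maas.supervisor service; it records the user's intent in $SNAP_COMMON (which is shared across revisions) before snapd stops the services:

    #!/bin/sh
    # snap/hooks/pre-refresh (illustrative only): runs against the old revision,
    # before snapd stops the services, so snapctl should still reflect whether
    # the user explicitly disabled the service.
    if snapctl services maas.supervisor | grep -q disabled; then
        touch "$SNAP_COMMON/.supervisor-disabled"
    else
        rm -f "$SNAP_COMMON/.supervisor-disabled"
    fi

The refresh (or post-refresh) hook could then test for $SNAP_COMMON/.supervisor-disabled instead of querying the live service state.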