[06:06] <zyga> Good morning
[06:06] <zyga> I will start around 9
[06:49] <mup> PR snapd#8842 closed: tests: fix bug waiting for snap command to be ready <Simple 😃> <Created by mvo5> <Merged by mvo5> <https://github.com/snapcore/snapd/pull/8842>
[06:59] <pstolowski> morning
[06:59] <mup> PR snapd#8816 closed: tests: port 2 uc20 part1 <Created by sergiocazzolato> <Merged by mvo5> <https://github.com/snapcore/snapd/pull/8816>
[07:00] <pedronis> mvo_: hi
[07:13] <zyga> Hey everyone
[07:13] <zyga> I’m around, going to address PR feedback now
[07:13] <zyga> I will be off later for the MRI
[07:19] <mvo> pedronis: hi, not sure my earlier "hi" made it to you, for some reason I was put into "quiet" mode by freenode
[07:19] <mvo> pstolowski: good morning
[07:19] <mvo> zyga: good luck for later today
[07:19] <pedronis> mvo: if your nick is not the registered one (like mvo_) you can't talk here
[07:36] <alazred> Hi there! I've asked this question on #snapcraft but I think it fits best here. I have a snap that breaks after each reboot. It shows me this error: https://pastebin.com/X3VCfyua. If I reinstall the snap it works again. I'm running snapd on manjaro-arm on the Pinebook Pro. Any pointers on what might be going on? Thanks in advance!
[07:37] <zyga> alazred: this means that nothing is loading apparmor profiles
[07:38] <zyga> alazred: what is the status of snapd.apparmor.service?
[07:38] <zyga> alazred: it is responsible for loading snapd apparmor profiles on system startup
[07:40] <alazred> zyga: https://pastebin.com/i1eMy3Md
[07:41] <alazred> zyga: sorry not the good one
[07:41] <zyga> hmm, too bad there are no logs
[07:41] <zyga> do you have persistent journal enabled?
[07:41] <zyga> it would be good to make sure you do (mkdir /var/log/journal)
[07:41] <zyga> and reboot
[07:41] <zyga> and then check the status of that service
[07:42] <alazred> zyga: It's dead  https://pastebin.com/eX3ccSgz
[07:43] <zyga> the service is not enabled
[07:43] <zyga> systemctl enable --now snapd.apparmor.service
[07:45] <alazred> Ive enabled it and it's now working !
[07:45] <alazred> zyga: thanks  !
[07:45] <zyga> alazred: maybe file a bug in manjaro
[07:46] <zyga> alazred: do you remember if it was disabled out of the box?
[07:47] <alazred> zyga: Yeah i will go there also! The error was not super clear, and since the other apparmor.service was enabled I thought I was ok on that side.
[07:47] <zyga> yeah, until apparmor 3 ships we need a custom service
[07:47] <alazred> zyga: I'm pretty sure it was
[07:47] <zyga> apparmor 3 won't need this
[07:47] <zyga> ok
[07:48] <alazred> but since it's Arch-based I don't know if it enables those things on install.
[07:48] <alazred> I'll try to know more on that. Thanks again for your time!
[07:49] <zyga> alazred: thanks for reaching out :)
[07:59] <alazred> zyga: They don't mention snapd.apparmor.service in the manjaro wiki. Only to enable snapd.socket.
[08:00] <zyga> alazred: I see, I think it needs fixing
[08:00] <zyga> I reached out to some manjaro friends and let them know
[08:00] <mborzecki> morning
[08:00] <alazred> zyga: Ok thanks !
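For anyone hitting the same symptom, the diagnosis above boils down to a few commands. This is a sketch of the steps discussed (run as root or with sudo); the unit name and journal path are as mentioned in the conversation:

```shell
# snapd.apparmor.service loads snapd's apparmor profiles at boot and is
# separate from the distro's apparmor.service (needed until apparmor 3 ships).

# 1. Check whether the profile-loader service ran at all
systemctl status snapd.apparmor.service

# 2. Enable persistent journal so its logs survive a reboot
mkdir -p /var/log/journal

# 3. Enable and start the service (the actual fix in this case)
systemctl enable --now snapd.apparmor.service
```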
[08:05] <mup> PR snapd#8841 closed: gadget: drop dead code, hide exports that are not used externally <Created by bboozzoo> <Merged by mvo5> <https://github.com/snapcore/snapd/pull/8841>
[08:05] <mup> PR snapd#8844 opened: asserts/internal: add some iteration benchmarks, potential opt  <Created by pedronis> <https://github.com/snapcore/snapd/pull/8844>
[08:24] <pedronis> pstolowski: I tried to answer your comment in #8823, also made some small changes based on mborzecki feedback
[08:24] <mup> PR #8823: asserts/internal: limit Grouping size switching to a bitset representation <Bulk assert refresh :scroll::scroll::scroll:> <Created by pedronis> <https://github.com/snapcore/snapd/pull/8823>
[08:25] <pstolowski> thanks
[08:48] <zyga> mvo: I'm seeing this error, not sure if related to my PR or just random
[08:49] <zyga> https://www.irccloud.com/pastebin/Xh7y61FS/
[08:49] <mvo> zyga: oh, fun. looks like the earlier incorrect loop masked some other problem :(
[08:49] <mvo> zyga: how often/where do you see it?
[08:50] <zyga> just noticed a few systems failed this way on my RFC PR
[08:50] <zyga> https://github.com/snapcore/snapd/pull/8843/checks?check_run_id=756927988
[08:50] <mup> PR #8843: [RFC] many: export tools from core/snapd to mount namespaces <Needs Samuele review> <Created by zyga> <https://github.com/snapcore/snapd/pull/8843>
[08:50] <zyga> look at the colors on the left
[08:50] <zyga> lots of green, a few reds
[08:50] <mvo> zyga: quite a few reds :/
[08:50] <zyga> yeah
[08:50] <zyga> I wonder what was going on before
[08:50] <mvo> zyga: ok, I have a look. odd that it did not appear in the original PR
[08:50] <zyga> it doesn't seem like a new bug
[08:50] <zyga> just surfaced
[08:51] <mvo> zyga: yeah, I'm sure the superlong timeout just masked the issue
[08:51] <zyga> it *might* be something my PR does
[08:51] <zyga> but I'm not sure
[08:51] <zyga> one sec
[08:51] <zyga> Wed, 10 Jun 2020 08:30:40 GMT
[08:51] <zyga> retry: next attempt in 1.0 second(s) (attempt 6 of 120)
[08:51] <zyga> Wed, 10 Jun 2020 08:30:40 GMT
[08:51] <zyga> + snap wait system seed.loaded
[08:51] <zyga> Wed, 10 Jun 2020 08:30:40 GMT
[08:52] <zyga> error: cannot communicate with server: timeout exceeded while waiting for response
[08:52] <zyga> with time stamps
[08:52] <zyga> this is not after a long wait
[08:52] <zyga> but this is surprising
[08:52] <zyga> so retry *succeeded*
[08:52] <zyga> after 6 seconds
[08:52] <mvo> zyga: yeah, I suspect snapd is restarting or something
[08:53] <zyga> but the "snap wait" immediately after that fails
[08:53] <zyga> yeah
[08:53] <zyga> maybe snap wait should be more robust
[08:53] <zyga> I think this may be what was randomly failing earlier as well
[08:53] <zyga> btw, those timestamps *rock*
[08:54] <mvo> zyga: yeah, snap should actually retry :/
[08:54] <zyga> heh
[08:54] <zyga> retry snap wait
[08:54] <zyga> ;)
[08:58] <pedronis> we have the wait.go logic but is meant for things that give back a Change
[08:58] <pedronis> but we need some bits of it
[08:58] <pedronis> for wait
[08:58] <pedronis> itself
[09:01] <pedronis> this kind of code: https://github.com/snapcore/snapd/blob/master/cmd/snap/wait.go#L96
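Until the client retries on its own, the "retry snap wait" idea from above can be sketched as a tiny shell wrapper. The helper name and attempt counts here are made up for illustration; the real fix would live in the client-side wait.go logic linked above:

```shell
#!/bin/sh
# Hypothetical retry wrapper for flaky client calls such as
# `snap wait system seed.loaded` while snapd is restarting.
retry_cmd() {
    attempts=$1; shift
    i=1
    while ! "$@"; do
        if [ "$i" -ge "$attempts" ]; then
            echo "failed after $attempts attempts: $*" >&2
            return 1
        fi
        i=$((i + 1))
        sleep 1
    done
}

# In the spread tests this would wrap the flaky call, e.g.:
#   retry_cmd 120 snap wait system seed.loaded
retry_cmd 3 true && echo "ok"
```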
[09:02] <mvo> pedronis: thank you
[09:45] <mup> PR snapd#8845 opened: [RFC] many: add "system.service.snapd-autoimport.disable" setting <Created by mvo5> <https://github.com/snapcore/snapd/pull/8845>
[09:47] <mvo> zyga: I can look at the wait issue now
[09:47] <zyga> mvo: thank you
[10:00] <mup> PR snapd#8808 closed: riscv64: bump timeouts <Created by xnox> <Merged by mvo5> <https://github.com/snapcore/snapd/pull/8808>
[10:13] <zyga> afk
[10:24] <mborzecki> zyga: are you still using fish?
[10:25] <mvo> zyga: it's all broken
[10:27] <zyga> mvo: what?
[10:27] <zyga> mborzecki: yes
[10:27] <zyga> mvo: what's going on?
[10:27] <zyga> sorry, not having most productive day
[10:27] <zyga> I feel the best in a position where typing is not possible
[10:28] <mborzecki> zyga: can you try `snap <TAB>`, does it show the commands and their help?
[10:30] <mvo> zyga: the wait thing you pointed me to is broken in 3 different ways on 16/18/20 :(
[10:30] <mvo> zyga: anyway, I'm looking
[10:30] <zyga> mvo: woah
[10:30] <zyga> mborzecki: trying
[10:30] <zyga> yes
[10:30] <zyga> it's a super nice way :)
[10:30] <zyga> how do we screenshot on linux?
[10:30] <zyga> hmm
[10:30] <mup> PR snapd#8846 opened: tests: move update-related tests to snapstate_update_test.go <Test Robustness> <Created by stolowski> <https://github.com/snapcore/snapd/pull/8846>
[10:31] <zyga> mborzecki: check this out https://usercontent.irccloud-cdn.com/file/K78qpiep/fish-tab-completion.png
[10:32] <mborzecki> yeah, it's in the upstream https://github.com/fish-shell/fish-shell/blob/master/share/completions/snap.fish
[10:32] <zyga> super pretty
[10:33] <zyga> not using the bash helpers, but I guess for some reason
[10:33] <mborzecki> zyga: there were some problems reported in the forum though https://forum.snapcraft.io/t/tab-completion-for-snap-connect-in-fish-spams-errors/18083
[10:33] <mborzecki> it wfm though
[10:33] <zyga> yeah
[10:33] <zyga> that is broken
[10:34] <mborzecki> mhm
[10:34] <mborzecki> yeah, it's the connect/disconnect thing
[10:34] <mborzecki> that warning should probably go to stderr
[10:34] <zyga> mborzecki: snap connect <tab> SNAFU https://usercontent.irccloud-cdn.com/file/KdWyk66Z/snap-connect-broken-completion.png
[10:35] <zyga> mvo: can you tell some more
[10:35] <zyga> mvo: it's really interesting
[10:36] <mvo> zyga: the first (16) stops with a connection timeout exceeded
[10:36] <mvo> zyga: the second (18) hangs forever
[10:36] <zyga> whaat?
[10:36] <mvo> zyga: and 20 fails with dpkg --purge dir not empty
[10:36] <zyga> and it was just this broken?
[10:36] <zyga> how did we miss it?
[10:36] <mvo> zyga: I start with 20 now that looks most simple
[10:36] <mvo> zyga: no idea, 20 could be just coincidence
[10:36] <mborzecki> zyga: heh it calls `snap interfaces $snap | string replace -r '[- ]*([^ ]*)[ ]+([^ ]+)' '$2$1' | string match -v "*Slot*"`
[10:38] <mborzecki> hm i'll try to fix it upstream
[10:39] <zyga> one thing I love in fish
[10:39] <zyga> is that $foo is not unsafe
[10:39] <zyga> as fish does not expand strings this way
[10:39] <zyga> it's weird coming from bash
[10:39] <zyga> but nice
[10:41] <zyga> mvo: sorry for dragging you into this
[10:44] <mvo> zyga: my own fault for doing the original PR
[10:44] <pstolowski> mvo, mborzecki : is #8784 still a draft?
[10:44] <mup> PR #8784: snap: add new `snap run --experimental-gdbserver` option <Created by mvo5> <https://github.com/snapcore/snapd/pull/8784>
[10:45] <mborzecki> pstolowski: switched it to 'ready'
[10:45] <mvo> pstolowski: I think we can switch it
[10:45] <pstolowski> indeed, thanks
[10:45] <mborzecki> it's probably missing some tests tho, we can add those once we get some reviews
[10:54] <mborzecki> zyga: this should fix it https://github.com/fish-shell/fish-shell/pull/7104
[10:54] <mup> PR fish-shell/fish-shell#7104: completions/snap: workaround snap interfaces deprecation notice <Created by bboozzoo> <https://github.com/fish-shell/fish-shell/pull/7104>
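The breakage is easy to see with plain text tools: the completion parses `snap interfaces` output line by line, so any extra notice line becomes a bogus completion candidate. Here is a POSIX-shell sketch using simulated output (the sample text and the notice wording are assumptions, not the real snap output):

```shell
#!/bin/sh
# Simulated `snap interfaces <snap>` output with a deprecation notice
# mixed in; the notice text below is made up for illustration.
sample='WARNING: the interfaces command is deprecated
Slot        Plug
:network    firefox
:home       firefox'

# Filtering the notice (and the header) before transforming the rows
# keeps only real plug/slot pairs, roughly what the fish completion's
# string replace/match pipeline is after.
printf '%s\n' "$sample" \
    | grep -v '^WARNING' \
    | grep -v '^Slot' \
    | awk '{print $2 $1}'
# prints:
#   firefox:network
#   firefox:home
```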
[10:54] <zyga> nice
[10:55] <mborzecki> hm, something we could distro-patch, for instance in ubuntu?
[11:00] <zyga> I'm sure we can
[11:00] <zyga> mvo: can you dput to ubuntu?
[11:01] <mborzecki> (once it lands upstream first)
[11:01] <mvo> zyga: dput what?
[11:01] <zyga> mvo: fish
[11:02] <mvo> zyga: I can/could to groovy, focal will need a SRU
[11:02] <zyga> groovy would be good
[11:04] <mborzecki> i'll file a bug first
[11:06] <pedronis> mborzecki: looks like #8830 can be merged?
[11:06] <mup> PR #8830: bootloader: introduce bootloarder assets, import grub.cfg with an edition marker <UC20> <Created by bboozzoo> <https://github.com/snapcore/snapd/pull/8830>
[11:07] <mborzecki> it's green yay! at last
[11:07] <mvo> zyga: and none of the wait issues you saw in the PR is reproducible so far :(
[11:07] <zyga> hmmm
[11:08] <zyga> maybe my PR is at fault
[11:08] <zyga> are you trying on master or on that branch?
[11:08] <mvo> zyga: master
[11:08] <mvo> zyga: let me try your PR
[11:08] <pedronis> is the export PR?
[11:08] <zyga> if it's just that PR it's not worth spending time on it
[11:08] <zyga> I thought you found some deeper issues on master
[11:10] <mvo> zyga: ok, I park it for now (if the current run is also not failing)
[11:11] <mup> PR snapd#8830 closed: bootloader: introduce bootloarder assets, import grub.cfg with an edition marker <UC20> <Created by bboozzoo> <Merged by bboozzoo> <https://github.com/snapcore/snapd/pull/8830>
[11:16] <mvo> pedronis, zyga a log of the 20.04 "device not ready" we talked about yesterday with the debug log https://github.com/snapcore/snapd/pull/8845/checks?check_run_id=757318867 - I'm now trying to make sense of it
[11:16] <mup> PR #8845: [RFC] many: add "system.service.snapd-autoimport.disable" setting <Needs Samuele review> <Created by mvo5> <https://github.com/snapcore/snapd/pull/8845>
[11:18] <zyga> Jun 10 10:09:33 jun100957-926530 snapd[35371]: snapmgr.go:298: cannot read snap info of snap "snapd" at revision 7777: cannot find installed snap "snapd" at revision 7777: missing file /snap/snapd/7777/meta/snap.yaml
[11:18] <zyga> that's super clear IMO
[11:18] <zyga> mvo: this is the bug that we talked earlier about
[11:18] <zyga> mvo: about someone showing a system where booting lost interfaces
[11:18] <zyga> because core was not mounted on time
[11:18] <zyga> but
[11:19] <zyga> how can this be happening!?
[11:19] <zyga> this is what is so mysterious about it
[11:19] <zyga> are we racing with mounting that we do ourselves?
[11:19] <pedronis> here there is no core snap at all
[11:19] <pedronis> to be clear
[11:19] <pedronis> there is no snapd snap either
[11:19] <zyga> this is also weird: Jun 10 10:00:06 jun100957-926530 systemd[1]: snapd.service: State 'stop-sigterm' timed out. Killing.
[11:19] <mvo> zyga: oh, the mount units could be too slow to start?
[11:20] <zyga> and this killing was after we did this
[11:20] <zyga> Jun 10 10:00:06 jun100957-926530 systemd[1]: snapd.service: State 'stop-sigterm' timed out. Killing.
[11:20] <zyga> and this kicks off the snapd.failure unit Jun 10 10:00:06 jun100957-926530 systemd[1]: snapd.service: Triggering OnFailure= dependencies.
[11:21] <zyga> hmmmm
[11:21] <zyga> mvo: is it possible that snapd failing unmounts snap units?
[11:21] <zyga> like undoing some dependency chain or something
[11:21] <mvo> zyga: possibly seems unlikely but I guess we need to explore whatever we can find
[11:22] <pedronis> mvo: but remember there should be no snapd at all at that point here
[11:22] <mvo> zyga: hm, two interesting things - the image has snaps seeded already, I think that's the only image we have with that property
[11:23] <pedronis> we just reinstalled the snapd deb
[11:23] <mvo> zyga: there is also a error auto-refresh change
[11:23] <pedronis> it seems like there's state from something else around
[11:23] <mvo> pedronis: it looks like the image already has snapd in the image (and lxd/core18)
[11:23] <pedronis> yea
[11:23] <pedronis> and removing snapd doesn't do the right thing
[11:24] <zyga> oh interesting!
[11:24] <pedronis> this is prepare.sh
[11:24] <pedronis> it was written for systems with just the snapd deb
[11:24] <pedronis> but possibly with 20.04 there's a ton of stuff there
[11:27] <pedronis> we just install the new deb?
[11:27] <mvo> pedronis: yes
[11:27] <mvo> pedronis: it sounds like we need to purge snapd first
[11:27] <pedronis> that probably doesn't work well
[11:27] <mvo> maybe that's enough
[11:27] <pedronis> if snapd is in the middle
[11:27] <pedronis> of seeding
[11:27] <pedronis> there's a real bug here I suppose
[11:27] <pedronis> what should happen if you upgrade the deb while snapd is seeding
[11:28] <mvo> right
[11:29] <pedronis> not saying that we need to fix that to fix prepare.sh
[11:29] <pedronis> but we definitely need to check what happens, and what we want to happen
[11:29] <pedronis> in that scenario
[11:29] <mvo> yeah
[11:30] <pedronis> it seems you can get a fairly weird state
[11:34] <mvo> pedronis: looking at the logs it seems like the seeding is already done, it happened 43 days ago (probably when the image was first booted and prepared for our needs)
[11:34] <mvo> pedronis: I suspect the issue happens when a auto-refresh is in progress while we update the deb
[11:35] <mvo> pedronis: but it's a similar issue I guess(?)
[11:35] <pedronis> yes
[11:35] <pedronis> I don't think we have thought through this
[11:36] <pedronis> maybe it's only relevant if the deb has a newer snapd?
[11:36] <mvo> yeah - and it seems real given that andy had this bug on a ubuntu 20.04 system some days ago
[11:36] <mvo> pedronis: oh, that's interesting
[11:39] <pedronis> naively the main issue in prepare.sh is that we have a state.json with seeded
[11:39] <pedronis> but it doesn't relate to installing snapd
[11:39] <pedronis> but I don't know if there's other interference possible
[11:39] <pedronis> especially if the deb version is newer
[11:41] <pedronis> it's a bit confusing because when it works we end up with no snaps except core20
[11:43] <mvo> pedronis: what is most interesting to me is that we see the mount units vanishing; this looks like the bug we had the other day. I would like to understand this :/ the fix in prepare.sh is easy (like you said), we can purge and things should work. anyway, I'll have lunch first but this looks interesting and like there is something real lurking here
[11:43]  * mvo lunch
[11:47] <pedronis> mmh, we have also our strange hack that removes the seed
[11:47] <pedronis> though I suppose only if it's invalid
[12:00]  * zyga preps for the MRI
[12:00] <zyga> see you laster
[12:00] <zyga> *later
[12:00] <ogra> good luck
[12:01] <zyga> mvo: purge unmounts
[12:01] <zyga> mvo: so snapd may be doing stuff while we purge and purge and purge
[12:01] <zyga> (just a guess)
[12:18]  * pstolowski going to the vet, bbl
[12:51] <clmsy> hi everyone
[12:51] <clmsy> i have a question about building a custom image with ubuntu core 20
[12:51] <clmsy> i'm especially interested in trying to build with the grade:secure option
[12:51] <clmsy> i modified my model json file and trying to sign it
[12:51] <clmsy> i get this error: error: cannot assemble assertion model: cannot specify a grade for model without the extended snaps header
[12:52] <clmsy> now it kinda makes sense because on this page, every snap is put with its identifier: https://docs.ubuntu.com/core/en/releases/uc20
[12:52] <clmsy> but the sign function from snapcraft expects a json
[12:52] <clmsy> do i have to somehow insert this information into the required-snaps list
[12:54] <clmsy> i guess maybe i have to remove "required-snaps" and replace it in the json with "snaps" as a map, not just a list of strs
[12:55] <mvo> clmsy: hey, check https://github.com/snapcore/models/blob/master/ubuntu-core-20-amd64.json for an example
[12:55] <mvo> clmsy: there are some more in https://github.com/snapcore/models
[12:55] <clmsy> aaah its already in here thanks a lot!
[13:34] <clmsy> hmm
[13:35] <clmsy> did anyone get pc-kernel snap not found while trying to build a core20 image ?
[13:35] <ijohnson> clmsy: what branch are you using for your kernel snap you need to be using the 20/ channel
[13:36] <clmsy> 20/beta
[13:37] <clmsy> i mean surely its there since snap info shows: 20/beta:          5.4.0-34.38.1  2020-06-08 (524) 271MB -
[13:37] <clmsy> haha
[13:38] <ijohnson> clmsy: hmm, then I don't know what the cause of that issue is
[13:49] <pstolowski> re
[13:56]  * mvo hands pstolowski the "biggest-PR-of-the-year" award
[13:56] <pstolowski>  /o\
[13:58] <mvo> pstolowski: heh - not your fault that snapstate_test grew to such monstrous size :) thanks for cleaning it up !
[13:58]  * mvo hands pstolowski the "Augean Stables" award instead
[13:58] <pstolowski> LOL
[14:00] <mborzecki> haha
[14:10] <zyga> Back home
[14:10] <zyga> Need rest
[14:10] <zyga> Darn pain
[14:12]  * mvo hugs zyga (gently)
[14:17] <zyga> I won’t be doing those 12 hour flights anytime soon
[14:19] <mup> Bug #1882957 opened: Snapd `internal error: connection "[slot] [plug]" not found in state` <Snappy:New> <https://launchpad.net/bugs/1882957>
[14:19] <cachio> pedronis, this is what I updated in spread: https://github.com/snapcore/spread/pull/72
[14:19] <mup> PR spread#72: Skip tests <Blocked> <Reviewed> <Created by sergiocazzolato> <https://github.com/snapcore/spread/pull/72>
[14:19] <cachio> pedronis, today I'll push another implementation using SKIP command
[14:20] <cachio> I need to make that work better
[14:20] <cachio> before pushing
[14:23] <clmsy> i'm almost able to build a custom image with core20, one last road bump i'm having
[14:23] <clmsy> error: cannot add snap "network-manager" without also adding its base "core" explicitly
[14:23] <clmsy> now i have type:base core20
[14:23] <clmsy> do i need to add say core16 without type:base or somehow add core16 base information to snap header of network-manager
[14:24] <ogra> either add "core" to your required-snaps in the model assertion or find a different track for network-manager
[14:24] <ogra> (i'm sure at least a core18 track exists for it)
[14:24] <clmsy> ok thank you, ill try figure that out
[14:24] <pedronis> clmsy: you need to use type: core for core
[14:25] <clmsy> ok i guess as long as type:base is core20 i am allowed to add more type:core
[14:25] <ogra> well, the NM he installs has seemingly "base: core"
[14:25] <ogra> so "core" needs to be in the image as well ... or an NM with core18 or core20 base
[14:26] <pedronis> yes, he can do that by having an entry in snaps with type: core  name: core id: of core
[14:26] <clmsy> thanks for the fast feedback!
[14:26] <ogra> ah, in the model assertion? (that's new!)
[14:26] <pedronis> ogra: in core20 models requires-snaps is called snaps and has much more syntax
[14:26] <pedronis> it's a list of maps
[14:27] <ogra> neat
[14:27] <ogra> !
[14:27] <ogra> (i have admittedly not looked a lot at 20 yet)
[14:27] <pedronis> ogra: https://github.com/snapcore/models/blob/master/ubuntu-core-20-amd64.model for an example
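Put together, the snaps entry pedronis describes would look roughly like this in the model JSON. This is a minimal sketch, not a complete model (a real one also needs the gadget/kernel entries and the other headers); the snap-id placeholders are deliberate and the grade value is just an example:

```json
{
  "base": "core20",
  "grade": "secured",
  "snaps": [
    {"name": "core", "type": "core", "id": "<core snap-id>"},
    {"name": "core18", "type": "base", "id": "<core18 snap-id>"},
    {"name": "network-manager", "id": "<network-manager snap-id>"}
  ]
}
```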
[14:28] <ogra> (some docs, howtos and tutorials would be really helpful)
[14:28] <mup> Bug #1882957 changed: Snapd `internal error: connection "[slot] [plug]" not found in state` <Snappy:New> <https://launchpad.net/bugs/1882957>
[14:28] <ogra> oh, this is really cool !
[14:28] <ogra> can you omit the id filed for using local snaps though ?
[14:28] <clmsy> yeah i like the syntax, and the reason i jumped into trying it is that for one of the main images we use, i needed to implement full disk encryption and secureboot
[14:28] <clmsy> then i saw that with core20 i can utilise grade:secured for that feature
[14:29] <ogra> (local snaps are pretty essential during development)
[14:29] <pedronis> ogra: yes, but only in grade: dangerous
[14:29] <ogra> good
[14:29] <ogra> thats good enough
[14:29] <pedronis> yes, it's supported exactly for development
[14:29] <ogra> yeah 🙂
[14:31] <mup> Bug #1882957 opened: Snapd `internal error: connection "[slot] [plug]" not found in state` <Snappy:New> <https://launchpad.net/bugs/1882957>
[14:33] <mvo> cachio: silly question - I have one spread run for ubuntu-core-20-64 (https://github.com/snapcore/snapd/pull/8845/checks?check_run_id=757318867) that uses an image that has snaps installed. when I run the same test here with my local spread against google I get a ubuntu 20.04 base image without snaps. any idea how this can happen? why would we get different images for 20.04?
[14:33] <mup> PR #8845: [RFC] many: add "system.service.snapd-autoimport.disable" setting <Needs Samuele review> <Created by mvo5> <https://github.com/snapcore/snapd/pull/8845>
[14:34] <cachio> mvo, let me check the pr
[14:34] <cachio> one sec
[14:36] <zyga> mvo: interesting failure in repartition magic
[14:36] <zyga> https://github.com/snapcore/snapd/pull/8829/checks?check_run_id=757631787
[14:36] <mup> PR #8829: sandbox/cgroup: add tracking helpers <Created by zyga> <https://github.com/snapcore/snapd/pull/8829>
[14:36] <mup> PR snapd#8847 opened: tests: fail in setup_reflash_magic() if there are snaps already <Test Robustness> <Created by mvo5> <https://github.com/snapcore/snapd/pull/8847>
[14:37] <zyga> I would add retry test -e /dev/loop1p2 before that settle call
[14:37] <zyga> I suspect we just call settle in a racy way
[14:37] <zyga> since the partition may not exist yet
[14:37] <zyga> (on line 547 in the log)
[14:37] <cachio> mvo, agree with zyga, it has to be a race
[14:38] <zyga> I can send patches later but I really need to rest in a neutral position now
[14:38] <zyga> so away from screen and kbd
[14:39] <cachio> mvo, I'll try to reproduce it here to see how to fix it
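The race zyga describes (settle called before the kernel has created the partition node) can be sketched like this; the device path, timeout, and helper name are assumptions for illustration:

```shell
#!/bin/sh
# Hypothetical helper: poll for a device node after sfdisk writes the
# partition table, before trusting `udevadm settle`.
wait_for_dev() {
    dev=$1
    for i in $(seq 1 30); do
        [ -e "$dev" ] && return 0
        sleep 1
    done
    echo "timed out waiting for $dev" >&2
    return 1
}

# In the test this would run between repartitioning and settle, e.g.:
#   sfdisk /dev/loop1 < layout.sfdisk
#   wait_for_dev /dev/loop1p2
#   udevadm settle
wait_for_dev /dev/null && echo "present"
```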
[14:40] <clmsy> well guys i think you might wanna add multiple-core support in the model, because the image tool is not allowing this
[14:40] <clmsy> error: core snap has unexpected type: base
[14:40] <clmsy> here ill show you how the model assertion looks like
[14:41] <clmsy> https://pastebin.com/BhXhNqM7
[14:41] <clmsy> i tried removing type information from core18 and core as well
[14:41] <clmsy> still complains
[14:42] <ijohnson> clmsy: core18 should not have `type: core`
[14:42] <ijohnson> clmsy: core18 should have `type: base`
[14:42] <ijohnson> I think
[14:42] <clmsy> hmm
[14:42] <clmsy> i can try that sure
[14:43] <clmsy> my gut feeling was that having 3 snaps set as type: base might cause an issue
[14:43] <ijohnson> which snap is used as the actual base for the image is decided by the base header in the root of the model, not by the type: base in the snaps header iirc
[14:50] <pedronis> ijohnson: correct
[14:51] <pedronis> that's the reason it exists, we don't support many gadgets or kernels, but there can be many bases, but only one is the boot/root base
[14:54] <clmsy> yes thank you :)
[15:01] <mup> PR snapcraft#3166 closed: plugin handler: load legacy plugins prefixed with 'x-' <bug> <Created by cjp256> <Merged by sergiusens> <https://github.com/snapcore/snapcraft/pull/3166>
[15:16] <mvo> cachio: is there any way to figure out what base image https://github.com/snapcore/snapd/pull/8845/checks?check_run_id=757318867 was using? i.e. is there a way for me to log into that image? I tried spread -debug google:ubuntu-core-20-64 but the image I get from that is different, it has no snaps, but the log there shows that there are snaps in the image that failed in the test
[15:16] <mup> PR #8845: [RFC] many: add "system.service.snapd-autoimport.disable" setting <Needs Samuele review> <Created by mvo5> <https://github.com/snapcore/snapd/pull/8845>
[15:17] <cachio> mvo, yes
[15:17] <cachio> but in the case of core it's difficult because the base image is focal and we update it during the prepare
[15:18] <cachio> mvo, so, to log into the core-20 image the only way is to add an exit 1 in some place and use -debug
[15:19] <cachio> if you want to log into the focal image used as base you can do the following in spread-images project
[15:20] <cachio> spread -shell google:ubuntu-20.04-64:tasks/google/start-instance
[15:20] <cachio> you need to comment the service account line in the spread.yaml to make that work
[15:22] <mup> PR snapcraft#3167 opened: unit tests: move to pytest <Created by sergiusens> <https://github.com/snapcore/snapcraft/pull/3167>
[15:22] <mvo> cachio: right, I got to this point but even with exit in setup_reflash_magic I get an image that looks different, i.e. something without snaps at this point
[15:22] <mvo> cachio: but in the log of 8845 I see there is core18/lxd installed in the image
[15:22] <mvo> cachio: and I wonder how this is possible :/
[15:24] <cachio> mvo, lxd?
[15:25] <cachio> mvo, you see that in snap changes output right?
[15:25] <mvo> cachio: yeah, if you look at https://github.com/snapcore/snapd/pull/8845/checks?check_run_id=757318867 and look at line 365
[15:25] <mup> PR #8845: [RFC] many: add "system.service.snapd-autoimport.disable" setting <Needs Samuele review> <Created by mvo5> <https://github.com/snapcore/snapd/pull/8845>
[15:25] <mvo> cachio: aha, I can even link to it https://github.com/snapcore/snapd/pull/8845/checks?check_run_id=757318867#step:5:365
[15:26] <cachio> mvo, yes, let me check the base image
[15:26] <mvo> cachio: thanks, please check if we have any base image with that; if so I would love to log into one to debug this further :)
[15:28] <cachio> mvo, for ubuntu core 20 we are using the image ubuntu-2004-64-virt-enabled as base
[15:28] <cachio> not sure why we are not using the regular one
[15:28] <cachio> can't remember
[15:29] <mvo> cachio: can you check on this base image if it has snaps?
[15:29] <cachio> mvo, starting the image
[15:29] <cachio> almost started
[15:29] <mvo> cachio: thank you
[15:29] <cachio> but takes few minutes
[15:30] <cachio> mvo, core18 and lxd installed
[15:31] <cachio> mvo, let me check why
[15:31] <mvo> cachio: oh, nice. let's keep it this way for now, I want to do some debugging
[15:33] <mvo> cachio: so here is something crazy - if I set up an "exit 1" in setup_reflash_magic and run spread google:ubuntu-core-20-64:tests/main/smoke I get a shell that does not contain snaps. am I just confused?
[15:36] <cachio> mvo, hehe, the snaps should not be there for sure, and they are not installed while the images are updated
[15:36] <zyga> ijohnson, mborzecki: can you please look at https://github.com/snapcore/snapd/pull/8848 and think if we have a similar problem anywhere in non-test code?
[15:36] <mup> PR #8848: tests: wait after creating partitions with sfdisk <Test Robustness> <Created by zyga> <https://github.com/snapcore/snapd/pull/8848>
[15:37] <cachio> mvo, I think those are coming from the base image
[15:37] <mup> PR snapd#8848 opened: tests: wait after creating partitions with sfdisk <Test Robustness> <Created by zyga> <https://github.com/snapcore/snapd/pull/8848>
[15:37] <zyga> mvo: could you be getting a different region
[15:37] <zyga> and a different image in that region?
[15:37] <mvo> zyga: ohhh, yes
[15:37] <zyga> (somehow)
[15:37] <mvo> cachio: is that possible? what zyga said, that we get different images for different regions that might be slightly different? so when I run from my local machine I get a different result than github actions?
[15:38] <mvo> cachio: fwiw, I really want to run with the snaps because I would love to debug why this fails
[15:38] <cachio> mvo, ah, ok, makes sense
[15:39] <cachio> mvo, in parallel I need to check why are those snaps installed
[15:39] <cachio> google:ubuntu-20.04-64-base .../tasks/google/start-instance# snap list
[15:39] <cachio> Name              Version   Rev    Tracking         Publisher          Notes
[15:39] <cachio> core18            20200427  1754   latest/stable    canonical✓         base
[15:39] <cachio> google-cloud-sdk  296.0.0   135    latest/stable/…  google-cloud-sdk✓  classic
[15:39] <cachio> lxd               4.2       15457  latest/stable/…  canonical✓         -
[15:39] <mvo> cachio: can you find out somehow ? i.e. is there a setting somewhere in the website for gce etc?
[15:39] <cachio> snapd             2.45      7777   latest/stable    canonical✓         snapd
[15:39] <mvo> cachio: nice! yeah, that's what I see in the logs
[15:40] <cachio> mvo, I just started an image which is not ours
[15:40] <cachio> it is from cloud team
[15:40] <mvo> cachio: anyway, in a meeting let's talk a bit more later
[15:40] <cachio> mvo, sure, I am having lunch now
[15:40] <kyrofa> jdstrand, can you help explain this denial? Log: apparmor="DENIED" operation="bind" profile="snap.nextcloud.occ" pid=19769 comm="loolwsd" family="unix" sock_type="stream" protocol=0 requested_mask="bind" denied_mask="bind" addr="@6C6F6F6C7773642D74634C4838366E3700000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"
[15:40] <mvo> cachio: can I modify spread.yaml somehow to get that image? after lunch is fine :)
[15:42] <jdstrand> kyrofa: aa-decode 6C6F6F6C7773642D74634C4838366E3700000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
[15:42] <jdstrand> Decoded: loolwsd-tcLH86n7
[15:42] <jdstrand> kyrofa: that abstract socket is not snap-specific
[15:42] <cachio> mvo, yes
[15:42] <zyga> kyrofa: aa-decode is your friend
[15:42] <cachio> mvo, just use -> image: ubuntu-os-cloud/ubuntu-2004-lts
[15:43] <kyrofa> Huh, I've never used aa-decode before
[15:43] <zyga> kyrofa: first time for everything
[15:43] <jdstrand> kyrofa: it should match: @snap.@{SNAP_INSTANCE_NAME}.**
[15:43] <jdstrand> it isn't a tool that you normally need, but when you do, it is handy :)
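aa-decode is just hex-decoding the addr= field. When it isn't installed, the same decode can be done with plain POSIX shell; decode_hex is a made-up helper here, shown on the first bytes of the address above:

```shell
#!/bin/sh
# Decode an apparmor abstract-socket address (hex) to text, the way
# aa-decode does; pure POSIX printf/arithmetic, no extra tools.
decode_hex() {
    h=$1
    while [ -n "$h" ]; do
        pair=$(printf '%.2s' "$h")   # next two hex digits
        h=${h#??}
        # convert the byte to octal so plain printf can emit it
        printf "\\$(printf '%03o' "$((0x$pair))")"
    done
    echo
}

decode_hex 6C6F6F6C7773642D74634C4838366E37
# prints: loolwsd-tcLH86n7
```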
[15:44] <kyrofa> jdstrand, what is SNAP_INSTANCE_NAME in this context?
[15:45] <jdstrand> kyrofa: 'nextcloud'
[15:45] <kyrofa> Ah okay
[15:45] <jdstrand> ie, they should use @snap.nextcloud.anything-they-want
[15:46] <jdstrand> if they want to support parallel installs, they should look at $SNAP_INSTANCE_NAME from the environment to obtain 'nextcloud'
[15:47] <jdstrand> the snap probably isn't parallel-installable now for a bunch of reasons though
[15:47] <jdstrand> (eg, port binding)
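The naming rule jdstrand describes can be sketched in shell: derive the abstract socket name from `$SNAP_INSTANCE_NAME` so it matches the policy pattern `@snap.@{SNAP_INSTANCE_NAME}.**`. The `loolwsd` suffix here is a hypothetical choice, not something the policy mandates:

```shell
# Fall back to "nextcloud" when not running inside the snap environment;
# the leading @ below is just the conventional notation for the abstract
# socket namespace (a leading NUL byte in the actual sockaddr).
sock_name="snap.${SNAP_INSTANCE_NAME:-nextcloud}.loolwsd"
echo "@${sock_name}"   # e.g. @snap.nextcloud.loolwsd
```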
[15:48] <kyrofa> jdstrand, no it's not, for reasons like this: https://forum.snapcraft.io/t/run-configure-hook-of-nextcloud-1-snap-run-hook-configure-error-error-running-snapctl-unknown-service-nextcloud-apache/11422
[15:48]  * cachio lunch
[15:50] <kyrofa> But yeah, ports would be problematic as well
[15:50] <cachio> mvo, I need to edit our scripts to clean preinstalled snaps
[15:50] <mvo> cachio: let's talk after lunch/meeting
[15:52]  * zyga looks at a conflict and sighs
[15:54] <alazred> zyga: The manjaro-arm team has already updated their snapd package to add the snapd.apparmor.service. Should be alright now! Thanks again for your help!!
[15:54] <zyga> alazred: wooot :)
[15:54] <zyga> alazred: how is the arm laptop?
[15:57] <alazred> zyga: Feels good, still some quirks to work around since it's new hardware. But it is still a good piece of hardware! I've been playing around with it for a week and so far I'm really pleased with it !
[16:02] <alazred> zyga: The only really negative thing so far are the speakers... They are useless. you need to be in an almost silent room to hear anything. Other than that pretty good.
[16:02] <zyga> yeah, speakers are often neglected in laptops
[16:03] <alazred> those are probably the worst ;)  but really for the price it's hard to complain
[16:04] <ogra> who uses them anyway
[16:06] <zyga> speakers?
[16:07] <mvo> cachio: I changed the image for ubuntu-core-20-64 to ubuntu-os-cloud/ubuntu-2004-lts and still no snaps installed. so it looks like I still get a different image under some circumstances :/
[16:08] <zyga> mvo: echo the root password and IP address and run a PR
[16:08] <zyga> mvo: then get in
[16:08] <zyga> and check cloud init data
[16:08] <zyga> and maybe the image
[16:09] <zyga> you may get cloud region info there too
[16:09] <zyga> mvo: or jokes aside
[16:09] <zyga> spread tests are invoked from a machine we can ssh to
[16:09] <zyga> so ...
[16:09] <zyga> just saying
[16:09] <zyga> if you need I can help
[16:09] <mvo> zyga: cloud-init data is a good idea, let me try this now
[16:09] <zyga> mvo: if you want I can just give you shell access
[16:09] <zyga> and you can spawn spread by hand
[16:09] <zyga> just like tests do
[16:09] <zyga> and debug away
[16:10] <mvo> zyga: are you in an image with snaps pre-installed?
[16:10] <zyga> mvo: I didn't try but if the theory is that location matters
[16:10] <zyga> well, I can give you location
[16:11] <kyrofa> jdstrand, regarding that abstract socket denial, I wanted to make sure you knew that snappy-debug didn't give me any advice
[16:11] <kyrofa> It just gave me the log
[16:12] <mvo> zyga: let me first try from my location and check the cloud-init data, in any case, we should not get totally different images so I really want to understand this
[16:12] <kyrofa> In case that's something you wanted to support
[16:12] <zyga> ok
[16:12] <zyga> yeah, I'm +10 on getting to the bottom of it
[16:14] <jdstrand> kyrofa: there is a bug on that
[16:15] <jdstrand> err, a checklist item in a card
[16:15] <jdstrand> but thanks
[16:16] <jdstrand> oh, actually, I fixed that in edge already
[16:16] <jdstrand> kyrofa: ^
[16:17] <jdstrand> I need to do some 2.45 policy updates for snappy-debug, when I do, the fix will be in stable
[16:17] <kyrofa> jdstrand, nice!
[16:18] <jdstrand> kyrofa: actually, I think it needs a small tweak for the encoded path
[16:19] <jdstrand> kyrofa: but that fix will be in stable
[16:19] <jdstrand> when I push it
[16:22] <mvo> zyga: meh, no information in /run/cloud-init about the image booted it seems
[16:22] <zyga> hmmm
[16:23] <zyga> nothing? when I was looking at the /home/ubuntu mystery I did see some data (more than I could understand)
[16:23] <zyga> maybe something in journald?
[16:27]  * zyga limps out of bed to get some tea
[16:34] <mvo> cachio, zyga things are starting to make slightly more sense now - so the ubuntu-20.04-64 image has lxd/core18 installed. we have code in prepare-restore.sh to purge snapd. However as evident in 8845 this code is not always run or does not always work. so I think that is the issue, I'm on it, no need to change anything on the image - I strongly suspect the purge fails under some circumstances
[16:35] <ijohnson> mvo if the image you booted was in GCE it should definitely have /run/cloud-init ?
[16:35] <zyga> re
[16:35] <mvo> ijohnson: sorry, I was not clear. it does have /run/cloud-init just no information what the image name is
[16:35] <ijohnson> ah ok
[16:35] <zyga> mvo: could it be that the sequence matters
[16:35] <zyga> and depending on which suite runs first
[16:36] <zyga> mvo: try looking at the log you saw from the failure
[16:36] <zyga> what was the first suite on that machine?
[16:37] <mvo> zyga: we purge as part of --prepare-project so that should be ok
[16:37] <zyga> I see
[16:37] <zyga> hmmmmmmm
[16:37] <zyga> that is weird then
[16:37] <mvo> zyga: if it's an incomplete purge that explains also why the mount units are not there anymore
[16:40]  * zyga returns to resolving conflicts
[16:40] <pedronis> pstolowski: should I re-review #8812 ?
[16:40] <mup> PR #8812: o/snapstate: service-control task handler (4/N) <Needs Samuele review> <Services ⚙️> <Created by stolowski> <https://github.com/snapcore/snapd/pull/8812>
[16:42] <mup> PR snapd#8847 closed: tests: fail in setup_reflash_magic() if there are snaps already <Test Robustness> <Created by mvo5> <Closed by mvo5> <https://github.com/snapcore/snapd/pull/8847>
[16:46] <cachio> mvo, ah, ok
[16:46] <cachio> well, just let me know if we need to clean up the image before publish it
[16:48] <mvo> cachio: please keep it for now
[16:48] <cachio> mvo, sure, np
[16:48] <mvo> cachio: I pushed a new PR that hopefully helps figuring out what is going on
[16:49] <mvo> cachio: pushed 8849 that hopefully catches this in the future
[16:50] <zyga> mvo: do you know if the wait in 8848 needs to be reproduced in any production code?
[16:52] <mup> PR snapd#8848 closed: tests: wait after creating partitions with sfdisk <Test Robustness> <Created by zyga> <Merged by zyga> <https://github.com/snapcore/snapd/pull/8848>
[16:52] <mup> PR snapd#8849 opened: tests: fail in setup_reflash_magic() if there is snapd state left <Test Robustness> <Created by mvo5> <https://github.com/snapcore/snapd/pull/8849>
[17:13] <mvo> zyga: I don't think we need this wait
[17:13] <zyga> mvo: which one?
[17:14] <mvo> zyga: the snap wait seed.loaded in setup_reflash_magic
[17:14] <zyga> mmm
[17:14] <mvo> zyga: I think that's not needed, snapd is purged earlier so it will never seed
[17:14] <mvo> zyga: in this code we also don't need to install core, we can just download it :)
[17:14] <zyga> heh, thanks for looking at this with fresh eyes
[17:14] <mvo> zyga: but let's only change it after we found the root cause of the other issue
[17:14]  * mvo really needs dinner now
[17:14] <zyga> I did knee-jerk reactions it seems
[17:30] <zyga> jdstrand: hey
[17:30] <zyga> I'm going to EOW soon
[17:30] <zyga> I'd love a +1/-1 on this test tweak
[17:30] <zyga> https://github.com/snapcore/snapd/pull/8803
[17:30] <mup> PR #8803: tests: port interfaces-many-core-provided to tests.session <Test Robustness> <Created by zyga> <https://github.com/snapcore/snapd/pull/8803>
[17:30] <zyga> so that I can end the week with no dbus-daemon leaking :)
[17:30] <zyga> it's a +2 PR but you need to look before it lands or doesn't land
[17:33] <pstolowski> pedronis: yes
[17:45]  * cachio appt with kinesiologist
[18:24] <jdstrand> zyga: I commented such that you should be unblocked. I did not approve since I didn't pore over it
[18:26] <zyga> jdstrand: thank you :)
[18:26] <zyga> it's +2 so that's good
[18:32] <mup> PR snapd#8803 closed: tests: port interfaces-many-core-provided to tests.session <Test Robustness> <Created by zyga> <Merged by zyga> <https://github.com/snapcore/snapd/pull/8803>
[19:26] <hellsworth> hey folks is there a uc20 image that is not signed that will work in kvm?
[19:27] <roadmr> why unsigned?
[19:27] <hellsworth> well maybe i just need the signature :)
[19:27] <roadmr> 🖊
[19:28] <hellsworth> which of these *.gpg files would have been used to sign the image?
[19:28] <hellsworth> md5sums, sha1sums, or sha256sums
[19:28] <roadmr> none :)
[19:29] <roadmr> hellsworth: image signatures are contained in an assertion if I'm not mistaken, those are signed with a key that lives in the assertion service
[19:29] <hellsworth> hmm ok i just wanna boot uc20 in kvm..
[19:30] <hellsworth> i have a lovely qemu command that worked just fine for uc18 and not for uc20
[19:30] <roadmr> ohh I see
[19:30] <hellsworth> any advice would be helpful
[19:30] <roadmr> but you do have an uc20 image which you didn't build yourself?
[19:30] <hellsworth> correct
[19:30] <roadmr> that likely contains the correct signature
[19:30] <roadmr> why isn't it working? what do you see?
[19:35] <hellsworth> ok here is what i see: https://drive.google.com/file/d/1LLXq8kBPaNAqPc26ldAheBL-SH938936/view?usp=sharing
[19:36] <hellsworth> and my command to launch it is: https://paste.ubuntu.com/p/PD6pwtsZm8/
[19:38] <hellsworth> although i'm happy to use any method to boot a uc20 vm in qemu
[19:42] <hellsworth> maybe the answer is to just not use core20 in kvm and rely on the rpi..
[19:43] <roadmr> hellsworth: ohh I see, that doesn't look like a *snap* signature problem :/
[19:43] <roadmr> hellsworth: is it possible the uc20 image is somehow broken? is it a daily? try using an older one?
[19:43] <hellsworth> i have
[19:43] <hellsworth> i've actually tried several images from the last month
[19:44] <hellsworth> all had the same problem
[19:44] <hellsworth> so i figured it was something *I* was doing
[19:44] <ijohnson> hey hellsworth
[19:45] <ijohnson> so you want to boot uc20 with qemu without tpm yeah ?
[19:45] <hellsworth> correct
[19:46] <ijohnson> hellsworth: the issue I see with your qemu cmd you posted is that you need to specify -bios to use uefi
[19:46] <roadmr> at last someone who knows what they're doing unlike me :)
[19:46] <hellsworth> roadmr: thanks so much for helping and learning with me though :)
[19:46] <hellsworth> okey dokey ijohnson i can toss that option in there..
[19:46] <roadmr> let me know if it works :)
[19:47] <ijohnson> install the ovmf pkg, then use OVMF_VARS.ms.fd from there
[19:47] <hellsworth> sorry but what's OVMF_VARS.ms.fd?
[19:48] <hellsworth> (ovmf is already installed)
[19:48] <ijohnson> hellsworth: that's the uefi vars you need to specify on the cmdline
[19:48] <ijohnson> sorry let me just show you the full cmdline
[19:48] <hellsworth> thanks :)
[19:48] <ijohnson> https://www.irccloud.com/pastebin/ximYskET/
[19:48] <ijohnson> so since you don't have tpm drop the
[19:48] <ijohnson>     -chardev socket,id=chrtpm,path=/var/snap/swtpm-mvo/current/swtpm-sock -tpmdev emulator,id=tpm0,chardev=chrtpm -device tpm-tis,tpmdev=tpm0 \
[19:48] <ijohnson> line
[19:49] <ijohnson> oh sorry also you will want to copy the OVMF_VARS.ms.fd from the /usr/share/OVMF to somewhere you can write to it
[19:49] <ijohnson> I just copy it to the current dir and use $(pwd)/...
[19:49] <ijohnson> does that all make sense?
[19:50] <hellsworth> oh yeah it does
[19:50] <hellsworth> thanks!
[19:50] <ijohnson> great let me know how it goes for you
[19:50] <ijohnson> hellsworth: also see https://docs.ubuntu.com/core/en/releases/uc20?_ga=2.193753581.1178879775.1591618991-2086332007.1562287587
[19:51] <hellsworth> /usr/share/OVMF/OVMF_VARS.ms.fd is a binary file so writing to it doesn't seem to be a thing
[19:52] <ijohnson> hellsworth: no you don't write to it
[19:52] <ijohnson> Qemu writes to it
[19:53] <cmatsuoka> I think I have a somewhat simpler qemu command line somewhere, let me see...
[19:53] <ijohnson> To store uefi vars
[19:53] <hellsworth> ijohnson: ah ok re qemu needing write perms
[19:53] <ijohnson> Right that's why you copy it
[19:53]  * ijohnson -> errands biab 
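ijohnson's recipe above (copy `OVMF_VARS.ms.fd` somewhere writable, point qemu at the OVMF firmware, and drop the TPM chardev line) can be sketched roughly as follows. The image filename, memory size, and port forward are assumptions for illustration, not taken from the pastes:

```shell
# qemu needs write access to the UEFI vars file, so copy it out of /usr/share
cp /usr/share/OVMF/OVMF_VARS.ms.fd .

# Boot a UC20 image without a TPM; OVMF_CODE is read-only firmware,
# the copied OVMF_VARS stores the UEFI variables qemu writes.
qemu-system-x86_64 -enable-kvm -smp 2 -m 2048 \
    -drive if=pflash,format=raw,readonly=on,file=/usr/share/OVMF/OVMF_CODE.ms.fd \
    -drive if=pflash,format=raw,file="$PWD/OVMF_VARS.ms.fd" \
    -drive file=pc-amd64-20.img,if=virtio,format=raw \
    -netdev user,id=net0,hostfwd=tcp::8022-:22 \
    -device virtio-net-pci,netdev=net0
```

With the `hostfwd` above you can then `ssh -p 8022 <user>@localhost` into the booted VM.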
[19:56] <cmatsuoka> hellsworth: if you don't need encryption, this one also should work: https://pastebin.ubuntu.com/p/PZXjnvNDJ3/
[19:58] <hellsworth> ok neat thanks cmatsuoka ! i'm actually trying the command that is in the docs under Testing UC20 https://docs.ubuntu.com/core/en/releases/uc20
[19:59] <hellsworth> good that the documented command is working :)
[19:59] <hellsworth> roadmr: the answers to my questions seemed to be here https://docs.ubuntu.com/core/en/releases/uc20
[20:00] <roadmr> \o/
[20:01] <cmatsuoka> hellsworth: nice, I think the line from the uc20 page is the same command with parameters in a different order
[20:01] <hellsworth> well it uses qemu-system-x86_64 instead of kvm directly
[20:01] <cmatsuoka> ah ok, yes, that part is different
[20:12] <hellsworth> so uc20 doesn't have dpkg or apt.. how do i install stuff?
[20:12] <cmatsuoka> aha
[20:12] <cmatsuoka> snap install
[20:12] <hellsworth> mm
[20:12] <hellsworth> right
[20:12] <hellsworth> of course
[20:13] <roadmr> that's the whole point, hellsworth !!!!! 😄
[20:13] <hellsworth> haha yes yes :)
[20:13] <hellsworth> it's just my brain's defaults :)
[20:13] <roadmr> whatever you do don't install crapsnap
[20:13] <roadmr> (I don't think I even have anything published on that one)
[20:14] <hellsworth> i'm just looking to install my locally built network-manager snap to test.. so this should be fine
[20:14] <roadmr> \o/
[20:14]  * cmatsuoka tries crapsnap
[20:14] <hellsworth> there is a crapsnap in stable and edge..
[20:15] <cmatsuoka> stable is good enough for this particular snap, I guess
[20:16] <cmatsuoka> oh no docker
[20:17] <hellsworth> it's interesting to me that the uc20 vm has networking because i'm obviously able to ssh into it. but it doesn't have the network-manager snap installed... if i install a new network-manager snap, then how do i know that *it* is being used/tested
[20:18] <roadmr> hellsworth: did you ask cwayne_'s team? they do just that kind of thing :)
[20:18] <hellsworth> hmm no.. what room is best to find that team in?
[20:21] <cmatsuoka> cachio: have you seen this error before? https://github.com/snapcore/snapd/pull/8824/checks?check_run_id=759081039
[20:21] <mup> PR #8824: many: move encryption and installer from snap-boostrap to gadget <UC20> <Created by cmatsuoka> <https://github.com/snapcore/snapd/pull/8824>
[20:21] <cachio> cmatsuoka, checking
[20:22] <cachio> cmatsuoka, no
[20:22] <roadmr> hellsworth: try #ce-certification-qa on canonical irc
[20:22] <cachio> I'll try to reproduce it
[20:22] <hellsworth> thanks roadmr
[20:22] <cmatsuoka> cachio: thanks! I'm not sure if it's deterministic or not
[21:10] <kyrofa> roadmr, I'm installing microk8s and I see that it's installing from a track that is not latest. Does that mean I can point latest at whatever I want now as a snap publisher? Any chance you know where the docs are for that?
[21:11] <roadmr> kyrofa: probably using "default" tracks
[21:12] <roadmr> kyrofa: yep, 1.18 has been marked as the default track, so trackless installs will install from (and follow) that track
[21:13] <roadmr> kyrofa: https://forum.snapcraft.io/t/behaviour-change-sticky-and-default-tracks/14970 for now I don't know if there's any more documentation
[21:13] <kyrofa> roadmr, how do you mark a track as default?
[21:13] <kyrofa> roadmr, ah, thank you
[21:13] <roadmr> kyrofa: it's a newish feature so if you do use it and have any comments/bugs please do let us know ;)
[21:17] <kyrofa> Will do :)
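The default-track behaviour roadmr describes can be checked from the command line on any machine with snapd; a hedged sketch, with `1.18` being the default track mentioned above:

```shell
# A trackless install follows the publisher's default track (here 1.18),
# not latest/stable:
snap install microk8s

# The "tracking:" field shows which channel the install will follow
snap info microk8s | grep -i tracking
```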
[21:57] <cachio> cmatsuoka, I couldn't reproduce the error
[21:57] <cachio> cmatsuoka, I'll try again
[21:57] <cmatsuoka> cachio: I suspected it was random, I'll re-run the test