[00:24] Is there some documentation about the future of snapd & snapcraft? I would like to help, if I can, but need more information before I change something.
[00:28] sdhd-sascha: for snapd at least, there's various things on the forum with the backlog tag, but you might be best served by talking with mvo in the EU morning, as mvo would know best what would be something for you to work on; we have a good number of features in flight, all being worked on by somebody, and mvo would know best what would be open
[06:14] morning
[07:22] zyga: hey, are you around? got a mystery for you https://github.com/wekan/wekan-snap/issues/103#issuecomment-572211692
[07:28] mvo: hey
[07:30] hey mborzecki ! good morning
[07:32] good morning
[07:32] mborzecki: looking
[07:32] hey zyga
[07:33] zyga: can't find any other explanation other than /tmp not existing yet, but that doesn't make much sense
[07:33] mborzecki: not having read all of the thread yet, do you know what /tmp is on the host?
[07:33] is it a special filesystem?
[07:34] zyga: not in my centos7 install
[07:37] mborzecki: small note, private tmp is at /tmp/snap.$name/tmp
[07:39] mborzecki: I wonder what changed that caused it to break
[07:39] mborzecki: did we roll out snapd updates there?
[07:40] zyga: as I understand it, the first report was that it broke after the refresh
[07:41] of what?
[07:41] of the snap or snapd?
[07:43] zyga: I understood the snap was refreshed, but I can double check
[07:43] ok
[07:43] hmmm
[07:43] immediately I cannot think of a reason
[07:43] unless I remember my unix wrong
[07:43] you socket() to make a socket
[07:44] and you bind() it to create a file in the FS for AF_UNIX sockets that are not abstract
[07:44] I cannot even imagine how that can cause ENOENT to be returned
[07:45] zyga: afaik bind(/foo/bar/baz) can get you ENOENT if /foo/bar does not exist
[07:45] sure, but for /tmp?
[07:45] that's super weird
[07:46] ENOENT: A component in the directory prefix of the socket pathname does not exist.
[07:46] I would do this:
[07:46] create a small python snap
[07:46] that just tries the same
[07:46] ask the reporter to snap pack
[07:46] and snap install it
[07:46] and see what we get
[07:46] I would also collect 1) kernel version 2) any virtualization/container system in use
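A minimal sketch of the reproducer zyga proposes here: a small Python program, packable as a snap, that binds AF_UNIX sockets under /tmp. Per bind(2), ENOENT can only mean a component in the directory prefix of the socket path does not exist. The paths below are illustrative, not taken from the original report:

```python
import os
import socket

# Inside a strictly-confined snap, /tmp resolves to the per-snap private
# tmp (/tmp/snap.$name/tmp on the host); paths here are illustrative.
for path in ("/tmp/test.sock", "/tmp/no-such-dir/test.sock"):
    try:
        os.unlink(path)  # avoid EADDRINUSE left over from a previous run
    except FileNotFoundError:
        pass
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.bind(path)
        print("bind(%s): ok" % path)
    except FileNotFoundError as err:  # ENOENT: missing directory prefix
        print("bind(%s): %s" % (path, err))
    finally:
        s.close()
```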
[07:51] good morning
[07:52] hey sdhd-sascha
[07:52] :-)
[07:54] zyga: how do you guys handle different timezones in messages?
[07:54] sdhd-sascha: in which messages?
[07:54] I don't want to make people wake up
[07:55] you mean on IRC?
[07:55] I think everyone just handles that differently
[07:55] yes
[07:55] it's not like there's an IRC alarm clock next to the bed :)
[07:55] in my last position, I had my phone loud every second week, because I could get emergency calls
[07:56] zyga: yeah :-)
[07:56] so, the phone is silent for most people?
[07:58] sdhd-sascha: I don't get IRC notifications on my phone
[07:58] cannot speak for how others handle that
[07:58] zyga: I have silent notifications on the phone, but didn't have time to watch.
[07:59] afk
[07:59] zyga: my last company was a medical laboratory, which works 24/7
[08:00] ijohnson: thank you :-)
[08:01] morning
[08:08] hey pawel
[08:13] PR snapd#7966 closed: tests: first core20 test fixes
[08:15] pstolowski: hey
[08:17] hey pstolowski
[08:18] zyga: heh, got a snap with an app like this https://paste.ubuntu.com/p/xWjB2wc4m5/ declared as a service, can't reproduce across 50+ installs
[08:19] Hello, we are having problems accessing api.snapcraft.io, so we can't update or install any packages the way we are used to. forum.snapcraft.io works sometimes and sometimes not. Pinging snapcraft.io doesn't work for us either.
[08:20] We have this problem on all our ubuntu machines, any help on how to debug this? Could we have been blacklisted?
[08:22] mvo: can you take a look at https://github.com/snapcore/snapd/pull/7967 ? it's a simple PR with some additional tests for the snapstate backend and the snapd snap scenario
[08:22] PR #7967: overlord/snapstate: improve snapd snap backend link unit tests
[08:23] re
[08:23] mvo: opened a separate review so that the snapd-on-core one will be smaller
[08:26] mvo: if you have time, it would be nice to talk.
[08:27] kristian_: hm, when did this start? maybe bloodearnest knows about connectivity issues with the store, I'm not aware of anything right now.
[08:27] mborzecki: oh, nice. I'll have a look now
[08:27] PR snapd#7772 closed: wrappers: write and undo snapd services on core
[08:29] mborzecki: you said you opened a separate review - what number is that?
[08:30] sdhd-sascha: hey, I think that can be arranged
[08:30] mvo: I meant 7967 as the separate review
[08:31] mvo: the last bit is the snapstate changes, will open that once 7967 lands (and 7772 just landed)
[08:31] mvo: yeah :-) but it is in no hurry since I still have a few things to do
[08:31] sdhd-sascha: ok - see /msg
[08:31] mborzecki: nice
[08:32] mvo: hi, thanks for reviewing Ian's PR, of course we need some parts of the early boot stuff after that change... we should discuss after the standup, I just fear we need to think through the complexities of the kernel info we put in the modeenv
[08:34] #7940 needs a 2nd review
[08:34] PR #7940: boot: implement SetNextBoot in terms of bootState.setNext
[08:52] PR snapd#7968 opened: tests: enable regression suite on core20
[08:56] pedronis: thanks, let's discuss after the standup then
[09:17] kristian_: mvo: hi, no known issues with store connectivity right now.
[09:18] kristian_: can you run mtr api.snapcraft.io?
[09:18] from your affected machines
[09:22] PR snapd#7969 opened: snap: default to "--direct" in `snap known` for 2.43
[09:23] pedronis: the above pr is for the bit of snap known we talked about yesterday, should be the last missing bit before I can release 2.43 :)
[09:23] ok
[09:25] PR snapd#7940 closed: boot: implement SetNextBoot in terms of bootState.setNext
[09:26] pedronis: probably best if Chipaca has a look at 7969 first, maybe my approach is too big of a hammer
[09:26] * Chipaca likes big hammers and cannot lie
[09:31] bloodearnest, yes I can
[09:31] most of them are at 0% loss, some hosts are ???
[09:31] one is constantly at a loss of around 99%
[09:33] mvo: glad it's in the 2.43 branch :)
[09:33] mvo: btw, now that BootParticipant has exactly one method we could probably just have boot.SetNextBoot, but not urgent
[09:40] PR snapd#7967 closed: overlord/snapstate: improve snapd snap backend link unit tests
[09:42] kristian_: right, so you have a bad link in your path to our datacenter, I'm afraid :(
[09:47] bloodearnest, hmm, which one? how can we fix that? is that something that our ISP has to do?
[09:48] kristian_: your ISP, or one that they use or peer with - it's likely there's nothing you can do directly. Can you post the mtr output?
[09:48] kristian_: is there a VPN involved at all in the path?
[09:50] https://dpaste.org/BDvg
[09:51] sorry, it removes the tabs somehow
[09:51] mborzecki: the distros using snap-mgmt don't have the issue in #7922, right?
[09:51] PR #7922: packaging, tests: stop services in prerm
[09:51] bloodearnest, ^
[09:53] pedronis: yes, because snap-mgmt is part of the snapd package and must be run before the package is fully removed (basically in prerm)
[10:10] afk for 15 minutes
[10:10] I should eat something
[10:14] mvo: thank you for the talk :-)
[10:15] PR snapcraft#2856 closed: meta: convert Application's `adapter` from string to enum
[10:17] sdhd-sascha: my pleasure!
[10:18] PR snapcraft#2857 closed: meta: enable Snap to be fully initialized with __init__ parameters
[10:34] Back now
[10:34] thanks Chipaca for the review
[10:50] * Chipaca goes for coffee
[10:50] kristian_: looks like there may be an issue with a london backbone, I'm seeing ~40% loss on one link
[10:50] nothing we can do about it but wait for it to be fixed
[10:52] (our DC is in London, hence the effect)
[10:54] kristian_: your report has short hostnames (-w for long), but it looks like Level3 are having some issues, as my problem link is with them also
[11:04] mvo: in 7624 the merge conflict is really weird - I will look into it
[11:04] sdhd-sascha: aha, I can fix that for you
[11:05] * Chipaca really goes for coffee
[11:06] sdhd-sascha: I was also wondering if for 7624 we should start with making "--direct" the default and then do a followup with the fix for the resume in indirect mode (does that make sense?)
[11:06] mvo: well, it's okay. For me, I need to understand the changes first ;-)
[11:06] sdhd-sascha: ok, just let me know if I can help you with that one
[11:07] mvo: I'm sure I will figure it out. But not as fast as you would ;-)
[11:08] mvo: I just started to merge, without understanding the changes ;-) like: `meld <(git show master:cmd/snap/cmd_download.go) <(git show cmd/snap/cmd_download.go)` ... and so on
[11:09] sdhd-sascha: ok, good luck and let me know if you have questions!
[11:09] sdhd-sascha: any help on this PR is welcome!
[11:09] mvo: thank you - I love to help
[11:13] a second review for 7969 would be great
[11:20] mborzecki: hey
[11:20] got a selinux denial
[11:20] type=AVC msg=audit(01/09/20 11:09:57.766:971) : avc: denied { setgid } for pid=5731 comm=snap-discard-ns capability=setgid scontext=system_u:system_r:snappy_mount_t:s0 tcontext=system_u:system_r:snappy_mount_t:s0 tclass=capability permissive=1
[11:20] capability setgid
[11:20] zyga: discard does setgid now?
[11:20] could this be related to the fact that snap-confine is not setgid root now?
[11:21] and snap-confine calls snap-discard-ns sometimes
[11:21] perhaps ahead of calling it, it should setegid(0)
[11:21] like it does for snap-update-ns
[11:21] yeah
[11:21] mborzecki: my question to you is what does this say
[11:21] that setgid was attempted but denied?
[11:21] or something else
[11:22] zyga: exactly that, setgid was attempted by the snappy_mount_t context, which isn't whitelisted; the command was snap-discard-ns
[11:22] cool, thanks
[11:22] zyga: s-c called s-d-n right?
[11:22] yes
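The fix zyga sketches — snap-confine raising its effective gid back to root only around the helper call, as it already does for snap-update-ns — follows this pattern. snap-confine is C, so this Python version is only an illustration, and run_helper is a hypothetical name:

```python
import os
import subprocess

def run_helper(cmd):
    # Assumes the process started with gid 0 available (a setgid-root
    # binary that dropped its effective gid early, as snap-confine does).
    saved_egid = os.getegid()
    os.setegid(0)  # re-raise the effective gid just for the helper
    try:
        subprocess.run(cmd, check=True)  # e.g. ["snap-discard-ns", ...]
    finally:
        os.setegid(saved_egid)  # drop it again right away
```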
[11:24] bloodearnest, so there's nothing we can do, nor our ISP? It's something on snappy's side?
[11:25] the weird thing is that this is only a problem with our network; if we use a mobile network it works fine
[11:29] * sdhd-sascha lunch
[11:35] fixed and respawned the selinux test
[11:35] now let's commit this
[11:40] PR snapd#7970 opened: snap-repair: use dirs.SnapSeedDir instead of seed.yaml
[11:47] hey, does anybody use a git diff filter, so that git understands the language better? for years, I've wanted to test something like that.
[11:53] are there any issues with snap store infra atm? keep getting "error: cannot refresh "firefox": unexpectedly empty response from the server (try again later)" from snap refresh firefox
[11:57] mvo: #7970 is not enough, we'll fail later in initDeviceInfo
[11:57] PR #7970: snap-repair: use dirs.SnapSeedDir instead of seed.yaml
[11:58] (as I commented in the standup doc)
[11:59] afk for 15-20 minutes
[12:03] pedronis: yeah, I can close it again, it was low-hanging fruit but maybe too low-hanging
[12:04] second reviews for 7968 and 7969 would be great, hopefully pretty easy, both of them
[12:06] PR snapd#7968 closed: tests: enable regression suite on core20
[12:07] mvo, hey
[12:07] cachio: hey
[12:07] cachio: hey, do the updates of the arch images work ok now?
[12:07] cachio: I made some progress on the tests, hopefully I have something to merge for you soon
[12:08] mvo, I have this already open https://paste.ubuntu.com/p/gN6NW7Kng5/
[12:08] cachio: thanks for the merge
[12:08] cachio: yeah, I talked to foundations, it's a known bug, we can ignore it for now
[12:08] mborzecki, I needed to use "pacman-key --refresh-keys" to fix the problem
[12:08] the same one I was using, but now while preparing the image
[12:09] mborzecki, your change also failed
[12:10] mborzecki, now the image is being created well again
[12:10] cachio: 7971 could be an interesting in-between step, we need your PR too, it enables a bunch more, but with 7971 we can hopefully enable ~80% of the main tests
[12:10] PR snapd#7971 opened: tests: enable a lot of the tests of main on uc20
[12:12] mvo, taking a look at 7971, thanks for the PR
[12:12] PR snapd#7970 closed: snap-repair: use dirs.SnapSeedDir instead of seed.yaml
[12:13] cachio: thank you!
[12:13] cachio: I cherry-picked some of your work too, thanks for this! it will unfortunately create conflicts but I can help resolve those
[12:13] mvo, no problem
[12:14] we are approaching 50 open PRs again, yay, 2 pages are within reach :)
[12:18] cachio: hah, glad that it's working now
[12:22] zyga: you mentioned that you run single spread tests locally? what cmdline do you type?
[12:23] sdhd-sascha: typically something akin to
[12:23] SPREAD_DEBUG_EACH=0 spread -debug -v google:ubuntu-16.04-64:tests/main/snap-run
[12:23] for example
[12:23] I pick the OS and the test to match the change I'm doing
[12:24] you won't be able to use the google backend yourself, as that requires a key to spawn machines in gcloud
[12:24] you can replace that with qemu:
[12:24] and fetch or make a compatible image
[12:24] local testing benefits tremendously from caching, otherwise it's almost entirely network bound
[12:24] IO and CPU are not a significant factor in my experience
[12:24] you also need about 20-40GB of free space
[12:24] for qemu -snapshot
[12:25] zyga: thank you - I understand :-)
[12:26] sdhd-sascha: local testing also benefits from -reuse and -resend
[12:26] as that saves some cost per run
[12:27] sdhd-sascha: -reuse keeps the vm for another pass
[12:27] sdhd-sascha: -resend sends the updated tree
[12:27] kristian_: the problem is neither on your side nor on snappy's side - it's in the middle, I'm afraid. The mobile network likely uses a completely different path, and thus is unaffected. As are most folks, or else we'd be seeing alerts and more issues
[12:28] zyga: updated tree? did you mean the git tree?
[12:28] sdhd-sascha: the updated tree, even without git commits
[12:28] as you edit and change stuff
[12:29] unless you use -resend you will not get any changes
[12:29] ah, ok :-)
[12:29] assuming you use -reuse :)
[12:53] mvo: I didn't see anything about "--indirect". I just took the PR and merged the conflicted files. The rest was automerged by git. The tests are currently running locally. A manual black-box test works with and without the "--direct" option.
[12:53] The merged source is here: https://github.com/sd-hd/snapd/tree/snap-donwload-via-snapd-2
[12:53] I don't know how to get the source into the PR (I think I didn't have the permissions)
[12:53] What about this test "TestDownloadViaSnapd"? Looks to me like an integration test. Should the test server not be `mocked` or something? I don't know how to mock in golang yet.
[12:57] PR snapd#7922 closed: packaging, tests: stop services in prerm
[12:58] sdhd-sascha: we generally prefer using httptest.NewServer liberally vs trying to mock http stuff
[12:58] pedronis: thank you :-)
[12:59] how can I launch them locally?
[12:59] sdhd-sascha: launch what?
[13:00] wait, .. I read about "httptest.NewServer"
[13:00] I ask about launching because I get a real http query on snapd if I run the test, which contains this call `RedirectClientToTestServer`
[13:01] ah, something is wrong then
[13:01] that's the purpose of s.RedirectClientToTestServer
[13:01] not to use a real snapd
[13:02] thank you.
[13:03] you can look at how it is implemented in main_test.go
[13:03] it is using httptest.NewServer, but that's probably not the issue
[13:03] more the config bit, if redirecting is not working
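The pattern pedronis recommends — start a real local HTTP server per test and point the client at it, rather than mocking the HTTP layer — is what httptest.NewServer plus the suite's RedirectClientToTestServer helper do in snapd's Go tests. A rough Python analog of the same idea, purely for illustration (FakeSnapdHandler and the canned response are made up):

```python
import http.server
import threading
import urllib.request

class FakeSnapdHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # canned response instead of talking to a real snapd
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(b'{"type": "sync", "result": []}')

    def log_message(self, *args):
        pass  # keep test output quiet

server = http.server.HTTPServer(("127.0.0.1", 0), FakeSnapdHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# the test redirects the client here instead of to the real daemon
base = "http://127.0.0.1:%d" % server.server_address[1]
print(urllib.request.urlopen(base + "/v2/snaps").read())
server.shutdown()
```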
[13:17] PR snapd#7972 opened: overlord/snapstate, wrappers: undo of snapd on core
[13:17] pstolowski: answered and/or addressed your comments in #7934
[13:17] PR #7934: o/ifacestate,o/devicestatate: merge gadget-connect logic into auto-connect
[13:17] I tried a new alarm clock this morning which slowly got brighter like a sunrise, which was great til the light turned off... that's not how the sun works
[13:17] pedronis: ty, looking
[13:25] ijohnson: :-D
[13:28] PR snapcraft#2848 closed: common: generate run scripts which can execute independently
[13:28] hmmmm
[13:35] pedronis: thanks for the comment under 7972, I was kind of expecting this; the other version added a new method to the snapstate backend iface but it was equally non-transparent
[13:35] mborzecki: as long as it was magical it was ok, but now if we have to ship state 3 levels down, the appeal starts to diminish
[13:36] pedronis: maybe we can quickly chat about that before/after the standup?
[13:36] mborzecki: maybe before, we have something UC20 to discuss after already
[13:37] (afaik)
[13:37] pedronis: ok, let me grab some coffee and I'll join the standup HO
[13:37] lunch
[13:37] and debugging
[13:37] I broke up my patches
[13:38] and now something fails :) oh well
=== ricab is now known as ricab|lunch
[13:51] sdhd-sascha: cool, thank you, I have a meeting now, then I'll have a look
[13:52] mvo: just this second I force-pushed the test
[13:52] mvo: but didn't look if all cases are handled ...
[13:58] sdhd-sascha: quite possible, I remember that test was very tricky
[13:59] sdhd-sascha: what I meant with "--indirect" was that in this PR maybe we should make "--direct" the default (that's what we have today) and instead offer --indirect as an option to go via snapd. then we can fix all the missing bits in --indirect (resume of download is missing today) and once that is finished flip the default again.
[14:02] ah, understood
[14:06] sdhd-sascha: great, thank you!
[14:07] mvo, hi, about https://bugs.launchpad.net/snapd/+bug/1817276, did you see my last comment?
[14:07] Bug #1817276: snapfuse use a lot of CPU inside containers
[14:07] mvo: I can swap direct and indirect, if I should?
[14:07] mvo, I happen to have a running maas here on my laptop which was idle (I was doing other things) and suddenly squashfuse is using 100% cpu
[14:08] ackk: I haven't, let me see (in a meeting right now, so will probably be a bit slow)
[14:08] it's still going, laptop fan is going crazy
[14:08] sdhd-sascha: yeah, that would be great
[14:09] and I found the part that made it fail, yay!
[14:11] ackk: can you see where the snapfuse that is running and eating so much cpu comes from? i.e. which snapfuse binary path is used?
[14:12] mvo, sigh, it's dead now
[14:12] ackk: it's strange, we definitely need to debug this some more (on our side)
[14:12] ackk: oh no :/
[14:12] ackk: sorry for the trouble there :(
[14:12] mvo, no probl
[14:12] I can try again the scenario from my comment (this one was a different case)
[14:13] mvo, I *think* the reason it went crazy is that maas went crazy respawning processes, because it seems at times when I refresh the snap, services from the old version don't get killed, so the new ones fail to start
[14:13] mvo, (which might be a separate issue)
[14:14] mvo, I had the snap installed with "try" from an unpacked snap, then I switched with refresh --amend to get the one from the store
[14:22] ijohnson: reviewed
[14:24] zyga: thanks, will look after SU
[14:31] mvo: just pushed into my repo, the default direct case.
[14:31] mvo: the integration test needs fixes next
[14:34] mvo, fwiw I couldn't reproduce the issue from my last comment now...
[14:38] zyga: thank you. -resend and -reuse work great :-)
[14:39] sdhd-sascha: some extra hints to look at: look at spread.yaml for proxy settings; setting up a proxy for the classic packaging system (e.g. apt-cacher-ng) saves tons and tons of bandwidth and time
[14:39] sdhd-sascha: you can go further and optimize time for big snaps like core and core18 by applying some tricks as well
[14:45] zyga: did you have a custom backend_* list for apt-cacher?
[14:45] backend?
[14:45] no, I think not
[14:45] just a stock value
[14:45] it works well for rpm packages too
[14:45] ok, thank you
[14:47] * zyga sees slow traffic to the store
[14:49] bloodearnest, sorry, I just don't understand - should I contact my ISP regarding this issue? Or is there absolutely nothing we can do?
[14:55] installing core --edge takes forever now
[14:56] is the store in trouble?
[14:56] status page looks fine
[14:59] type=AVC msg=audit(01/09/20 14:58:04.383:746) : avc: denied { setgid } for pid=27854 comm=snap-discard-ns capability=setgid scontext=system_u:system_r:snappy_mount_t:s0 tcontext=system_u:system_r:snappy_mount_t:s0 tclass=capability permissive=1
[14:59] this won't go away :/
[14:59] hmm
[14:59] need to think about why
[14:59] I bet it's the lock
[15:01] ackk: thanks for the updates! still in meetings :(
[15:09] mvo: all existing tests are now green and direct is the default. I commented in the PR
=== ricab|lunch is now known as ricab
[15:11] sdhd-sascha: nice, that sounds great! I'm still in meetings :/ but I will look as soon as I can. feel free to open a new PR based on the existing one with your updates though, that means people like Chipaca can review while I sit here in meetings
[15:11] green tests? eww
[15:11] mvo: wait - ... I took your repo which was listed in the PR.
[15:12] Chipaca: it has passed - locally
[15:12] * zyga tries an idea, runs spread and goes for a tea break
[15:12] :)
[15:14] * sdhd-sascha goes to work on his own ubuntu-core and to figure out how to split sway into reusable parts (maybe some more snapcraft-extensions)
[15:28] * ijohnson takes a quick break
[15:42] 7969 needs a second review (pretty please :)
[15:46] #7934 is also ready for a 2nd review, it's not simple though
[15:46] PR #7934: o/ifacestate,o/devicestatate: merge gadget-connect logic into auto-connect
[15:53] PR snapd#7969 closed: snap: default to "--direct" in `snap known` for 2.43
[15:57] zyga: so the issue is that black is complaining about the unused variable in that python PR I have
[15:57] zyga: is there a "pythonic" way to just have a variable exist without being used?
[15:58] maybe `if var: pass` or something?
[15:59] cachio: hey, did you or anybody else look at the dbus failures on uc20 (i.e. netplan)?
[16:00] ijohnson, sure, on master?
[16:00] sdhd-sascha: hey, just had a quick look at your sd-hd/snapd:snap-download-via-snapd-2 changes, looks very reasonable; it seems like it has a base branch that is based on something other than snapcore/snapd, which seems to confuse github, but the commit looks fine, I can cherry-pick them into the open PR 7624
[16:00] sdhd-sascha: thanks for working on this!
[16:00] PR #7624: snap: make `snap download` download via snapd if available
[16:00] cachio: no, I think on your (or maybe mvo's) PR to enable the full test suite on uc20
[16:00] cachio: so not on master
[16:01] (yet)
[16:01] ijohnson, in mvo's branch, the netplan test is disabled
[16:01] cachio: ok cool
[16:01] on mine it's not, but it is not gonna be merged
[16:01] mvo: I can rebase it for you ;-)
[16:01] cachio: so I take it no-one else has looked into it yet?
[16:01] I'll take a look again today at that test
[16:02] ijohnson, yes please
[16:02] cachio: ok, I will look today in the PM
[16:02] I am still trying to fix tumbleweed
[16:02] ijohnson, thanks!!!
[16:03] mvo: wait - I took it from your repo and merged it with snapd:master?
[16:03] * sdhd-sascha my wife is home - afk
[16:03] ijohnson: is it really black that's complaining, rather than something like flake8?
[16:03] cjwatson: hmm, actually I'm not sure what is complaining, let me look
[16:04] I thought I had configured black but maybe it's something else
[16:04] ijohnson: if it's flake8 then see http://flake8.pycqa.org/en/latest/user/violations.html#in-line-ignoring-errors perhaps
[16:09] I guess it is flake8 that is complaining about the variables
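For reference, the flake8 code for an assigned-but-never-used local variable is F841, and the page cjwatson links describes in-line waivers. A tiny illustration (do_call and handler are made-up stand-ins):

```python
def do_call():
    # made-up stand-in for whatever produces the unwanted value
    return 42

def handler():
    # if only the side effect matters, skip the assignment entirely
    do_call()
    # otherwise waive flake8's F841 ("local variable is assigned to
    # but never used") for this one line, per the linked page
    result = do_call()  # noqa: F841

handler()
```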
[16:10] ijohnson: 7971 should be ready, it's general flakiness that holds it back
[16:10] ijohnson: but the netplan failure is strange, I just had no time yet to debug it
[16:11] mvo: I looked at the netplan error, I think it's something more structurally wrong with dbus-activated services on uc20
[16:11] could be something needs to be added to writable, or some permissions are set wrong
[16:11] not sure yet
[16:11] ijohnson: aha, that makes sense. we can ask foundations for help here, I guess
[16:12] mvo: I will look at it a bit later today, but in the meantime could I get a quick review on https://github.com/snapcore/snapd/pull/7973 pretty please :-) ?
[16:12] PR snapd#7973 opened: boot.go: split makebootable family into its own file
[16:12] PR #7973: boot.go: split makebootable family into its own file
[16:12] mvo: can somebody else look into this? I thought ijohnson already has a lot of UC20 things on his plate
[16:12] pedronis: yeah, totally, he should not look into this, sorry if I gave that impression
[16:13] (cc ijohnson -^)
[16:13] ok, I will keep on uc20
[16:16] popey: you're better at figuring out this stuff than I am: the last two comments on https://forum.snapcraft.io/t/6093 seem to be sensible answers but don't have much to do with the problem being discussed. Suspect future spammers. WDYT?
[16:17] ijohnson: reviewed, thanks, couple of nitpicks
[16:18] ack, working on it now
[16:19] Chipaca: I do wish I was an admin and not just a mod on that forum. I don't have access to the same admin tools that help me spot these things as I do on the ubuntu discourse
[16:19] popey: I'd give you admin if I could :)
[16:19] * Chipaca is also just a moderator
[16:20] in fact...
[16:22] PR snapcraft#2860 opened: backport fixes for 3.9
[16:23] * Chipaca goes for a walk
[16:27] PR snapd#7960 closed: tests: use unbuffered python output for daemons, misc formatting
[16:32] ijohnson: btw, given there is already a baseBootSuite, it's probably not too hard to have a separate test file too
[16:33] pedronis: do you want to try and split that out now before making further changes?
[16:33] ijohnson: as you prefer really, it's not crucial atm
[16:33] just noticed
[16:40] PR snapd#7974 opened: many: run black formatter on all python files
[16:40] zyga: ok, this is the last I'm going to do with the python stuff for a while, but that PR is just the formatter changes, no substantial code changes
[16:41] * ijohnson switches back to uc20
=== msalvatore_ is now known as msalvatore
[16:41] PR snapd#7975 opened: release: 2.43
[17:19] pedronis: I agree, I think it's a good idea to split the makebootable tests off too, I just created a PR stacked on top of the other one
[17:19] opened as 7976
[17:20] PR snapd#7976 opened: boot: split makebootable tests into their own file
[17:20] * zyga is doing homework with the kids and will return to resume work in about an hour
[17:20] 2020-01-09 16:36:08 Successful tasks: 5582
[17:20] 2020-01-09 16:36:08 Aborted tasks: 0
[17:20] 2020-01-09 16:36:08 Failed tasks: 2
[17:20] - google:fedora-30-64:tests/main/selinux-clean
[17:20] - google:fedora-31-64:tests/main/selinux-clean
[17:21] PR snapd#7977 opened: snap: add (hidden) `snap download --indirect` option to download via snapd
[17:22] sdhd-sascha: just fyi, I cherry-picked your --indirect things and pushed as 7977, thanks again for your help here
[17:22] * mvo wonders how much Chipaca will hate the idea of snap download --indirect :)
[17:24] mvo: I don't hate it
[17:25] zyga: got logs?
[17:31] Chipaca: excellent :)
[17:56] PR snapd#7978 opened: data/selinux, test/main/selinux-clean: update the test to cover more scenarios
[18:09] mvo: nice :-)
[18:10] mvo: do you work on the MAAS snap?
[18:35] sdhd-sascha: I don't work on that, ackk does
[18:35] sdhd-sascha: and yeah, nice indeed, hopefully that can land soon
[18:36] ok. I only want to know if it's possible to mix an `apt install` with a `snap'ed maas`
[18:36] if so - then I would add a `snap maas region` here
[18:37] I mean `maas-rack`
[18:37] PR snapd#7971 closed: tests: enable a lot of the tests of main on uc20
[18:48] PR snapd#7979 opened: boot: drop NameAndRevision, use snap.PlaceInfo instead
=== ijohnson is now known as ijohnson|lunch
=== ijohnson|lunch is now known as ijohnson
[19:48] PR snapcraft#2861 opened: meta: remove Application's `prepend_command_chain`
[19:51] * zyga returns to work on the selinux policy bit
[20:02] PR snapd#7625 closed: cmd/snap-confine: stop being setgid root
[20:02] PR snapd#7980 opened: packaging,snap-confine: stop being setgid root
[22:02] Bug #1859084 opened: network-control interface seems to be broken on the Raspberry Pi 3 running Ubuntu Core 18
[23:19] PR snapcraft#2862 opened: python plugin: remove bzr workaround
[23:37] PR snapcraft#2863 opened: spread tests: use source-depth: 1 for plainbox tests
[23:52] PR snapd#7981 opened: snap-bootstrap: create encrypted partition
[23:57] PR snapd#7723 closed: snap-bootstrap: create encrypted partition <⛔ Blocked>