[00:42] PR snapd#7545 opened: cmd/model: don't show model with display-name inline w/ opts
[05:17] morning
[06:19] mvo: morning
[06:19] PR snapd#7540 closed: interfaces/seccomp: query apparmor sandbox helper rather than aggregate info
[06:22] hey mborzecki !
[06:22] mborzecki: how are tests looking today?
[06:24] mvo: not worse than usual
[06:25] ok, just wondering as I see a few reds from the night
=== tomreyn_ is now known as tomreyn
[06:30] mborzecki: the timeout test failure is peculiar - do you have any idea why it happens? do we have more than a 52s wait under certain conditions with our retry strategy?
[06:31] mvo: no clue yet, i can reproduce it with spread, but it doesn't make sense, there's a snap find executed a couple of lines earlier and that works fine
[06:34] mborzecki: confusing!
[06:34] mborzecki: can you reproduce it by e.g. blocking the store with iptables and then running snap find locally?
[06:34] mborzecki: I wonder if we use a different retry strategy for some command or something crazy like this
[06:35] mvo: but then, across all the restarts, it happened only on opensuse
[06:39] mvo: hmm daemon.go:316: DEBUG: pid=9079;uid=0;socket=/run/snapd.socket; GET /v2/find?scope=wide&section=featured 1m0.45857103s 200
[06:41] mborzecki: oh, interesting!
[06:42] Oct 02 06:36:47 oct020627-523151 snapd[8948]: retry.go:61: DEBUG: The retry loop for
[06:42] https://api.snapcraft.io/api/v1/snaps/search?confinement=strict%2Cclassic&fields=anon_download_url%2Carchitecture%2Cchannel%2Cdownload_sha3_384%2Csummary%2Cdescription%2Cbinary_filesize%2Cdownload_url%2Clast_updated%2Cpackage_name%2Cprices%2Cpublisher%2Cratings_average%2Crevision%2Csnap_id%2Clicense%2Cbase%2Cmedia%2Csupport_url%2Ccontact%2Ctitle%2Ccontent%2Cversion%2Corigin%2Cdeveloper_id%2Cdevelop
[06:42] er_name%2Cdeveloper_validation%2Cprivate%2Cconfinement%2Ccommon_ids&scope=wide&section=featured finished after 1 retries, elapsed time=1m0.45638716s, status: 200
[06:43] mvo: hm so we don't really set a timeout for the store request context
[06:43] tbh beats me why this consistently fails on opensuse
[06:44] Gooooood morning
[06:45] zyga: hey
[06:45] This feels like a good day
[06:45] I'm taking Bit for a walk but I will be back shortly
[06:45] Today is a cgroup day
[06:47] mvo: i'll see if i can improve the error message maybe, 'context deadline exceeded' isn't really nice ux
[06:47] mvo: and perhaps find should not use a timeout then
[06:50] mborzecki: yeah, the ux improvements would be more than welcome
[06:50] mborzecki: do we use no deadline when we do the store find api call in the daemon? or just a different one?
[07:02] mvo: hmmm, so we use httputil and set a http.Client.Timeout which is set in store.New() to 10s, yet the request somehow takes 1m
[07:04] mborzecki: that's confusing
=== pstolowski|afk is now known as pstolowski
[07:04] morning
[07:04] mborzecki: also there was no retry, right?
[07:05] mvo: no retry, it finished after the first attempt which was successful
[07:05] pstolowski: hey
[07:05] mborzecki: confusing++
[07:05] mborzecki: makes me wonder if our timeout is actually working
[07:27] mborzecki: reviewing your schedule fix pr
[07:33] pstolowski: thanks! better grab a coffee :P
[07:34] mborzecki: yes, i'm on a 2nd coffee already ;)
[07:35] mvo: trying with this snippet using httputil.Client https://paste.ubuntu.com/p/Xcc7bfxtKf/ timeout seems to work
[07:36] mborzecki: even more puzzling
[07:36] mborzecki: we use bundled deps on opensuse, right? it can't be some strange bug in some external go package packaged in opensuse?
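The paste linked at [07:35] isn't preserved in this log. A minimal standalone sketch of that kind of check, using plain net/http rather than snapd's httputil wrapper and a hypothetical slow server standing in for the store, might look like this:

    package main

    import (
        "fmt"
        "net/http"
        "net/http/httptest"
        "time"
    )

    func main() {
        // Hypothetical stand-in for a store endpoint that answers slowly.
        slow := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            time.Sleep(3 * time.Second)
        }))
        defer slow.Close()

        // http.Client.Timeout bounds the whole request: connecting, any
        // redirects, and reading the response body. With the 10s value that
        // store.New() reportedly sets, a request lasting 1m should be
        // impossible.
        client := &http.Client{Timeout: 1 * time.Second}
        _, err := client.Get(slow.URL)
        // err mentions "Client.Timeout exceeded while awaiting headers"
        fmt.Println(err)
    }

If a request nonetheless runs for a minute, either this client isn't the one making the call or its timeout is being overridden somewhere, which is what the conversation below goes on to suspect.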
[07:42] mvo: so crazy thought, the search results are super long, and we run with SNAPD_HTTP_DEBUG=7, what if journald/systemd blocks reading from stderr?
[07:44] mborzecki: could be! an interesting idea. it would be quite a bug though
[07:45] mvo: hi, I made some high-level comments again in #7536, I didn't look at the details of the gadget code
[07:45] PR #7536: gadget: accept system-seed role and ubuntu-data label
[07:48] pedronis: thank you! yeah, it matches my understanding.
[07:50] google:opensuse-15.1-64 .../tests/main/searching# snap find --section=featured
[07:50] error: cannot list snaps: cannot communicate with server: Get http://localhost/v2/find?scope=wide&section=featured: context deadline exceeded
[07:50] mvo: ^^ heh
[07:51] mvo: so, was able to trigger this by executing snap find a number of times in succession
[07:51] mborzecki: but *just* on opensuse?
[07:52] mvo: well, i have a debug shell on opensuse open now :P
[08:18] mborzecki: some comments/questions
[08:20] Bug #1647256 changed: snap create-user cannot add a new user to an existing ubuntu core device
[08:24] mvo, pedronis, mborzecki: https://forum.snapcraft.io/t/enabling-new-snapd-features-on-next-boot/13493
[08:25] this is a required intermediate step for refresh app awareness _and_ for parallel instances and perhaps also for parallel installs for classic
[08:27] I'd like to start implementing this as I really need it to have the ability to do some of the cgroup operations reliably
[08:50] zyga: so are you saying that some feature, even after being set, will be on only after a reboot?
[08:52] yes, yes he is
[08:52] zyga: also, is it really a reboot or a restart of snapd?
[08:58] pedronis: AIUI it's a reboot because things need to be set up before apps / services are started
[09:00] ehh, go errors are so bad
[09:00] mborzecki: -1
[09:04] also our client error handling is kind of bad too, especially the retry thing, but if we start looking at the errors, trying to find out which one is temporary or not, we're again into go-errors-are-bad territory
[09:05] mborzecki: client as in snapd/client/ ? or as in snapd/store/ http client usage?
[09:05] Chipaca: snapd/client
[09:05] mborzecki: ah. I'll be looking at the temporary vs not thing as part of download at some point
[09:05] well, that was the plan, but if you're there already
[09:07] pedronis: it really is about a reboot
[09:07] mborzecki: basically the retry thing should look at externalities (bah, the existence of the socket :) ) to know whether to actually retry or not
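For context on the "temporary or not" problem: the stock Go move is to probe the error chain for net.Error and trust its hints. A sketch of the approach mborzecki is wary of (errors.As needs Go 1.13, which had just been released at the time):

    package client

    import (
        "errors"
        "net"
    )

    // shouldRetry digs through the error chain for a net.Error and trusts
    // its Temporary/Timeout methods. The hints are coarse and every wrapper
    // in between has to cooperate - the "go-errors-are-bad territory" above.
    func shouldRetry(err error) bool {
        var ne net.Error
        if errors.As(err, &ne) {
            return ne.Temporary() || ne.Timeout()
        }
        return false
    }

Chipaca's alternative sidesteps the error entirely: check an externality, such as whether the snapd socket exists, before deciding to retry.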
[09:10] PR snapd#7546 opened: daemon: add a 'prune' debug action
[09:11] zyga: you need to explain why because I don't know it, that topic says a lot of things except why exactly
[09:12] it's very hard to judge if it's a good solution or not given the problem isn't clear
[09:12] it sounds very complicated for sure, that I can say
[09:13] pedronis: as I said in the post, "The feature only really works reliably if enabled before anything using said feature is used". We need this to reliably track cgroups for all processes using the feature. We also need it to establish a tracking cgroup so that we can get release notification handlers set up.
[09:14] zyga: please write down the details of the things we need to have; anyway, is it related to the version of snap-confine? or only install-time stuff? or only boot-time stuff?
[09:14] pedronis: for parallel instances I think we need a new mount unit for /snap or we will have transition issues as well.
[09:15] zyga: parallel instances for classic? in general?
[09:15] zyga: as a principle it's better to check for precise things than for something correlated to those things
[09:15] pedronis: it is really about system-level machinations we perform, either we do them or we don't at all - then the model is simple. If we sometimes do and sometimes don't because snapd was updated and apps are already running then there's all kinds of messy complexity that this solution avoids
[09:16] zyga: this is not useful, you are talking about general principles
[09:16] pedronis: it involves systemd units, snap-confine behavior (applied consistently) and snapd behavior which relies on it
[09:16] I need the details of the individual problems
[09:16] in the forum topic
[09:16] otherwise we are not going anywhere here
[09:18] pedronis: there are multiple distinct ones, an existing one is where refresh app awareness misbehaves if enabled without rebooting because it doesn't know about processes that were started before the tracking was established
[09:18] zyga: but tracking involves snap-confine code, no?
[09:18] pedronis: the /snap mount propagation changes must happen before any systemd mount unit is started so it cannot really be enabled mid-way once the system is up and running
[09:19] pedronis: yes, it involves snap-confine code
[09:19] zyga: so I don't think the solution is appropriate
[09:19] pedronis: why?
[09:19] zyga: because I can switch around snap-confine versions without reboots
[09:19] pedronis: note, it involves many other components as well, not just snap-confine
[09:20] pedronis: so?
[09:20] so the tracking gets broken
[09:20] pedronis: how?
[09:20] pedronis: I think you are trying to find a bug in the system
[09:20] pedronis: the bug is already there
[09:20] pedronis: this allows us to fix the bug once we want to _enable_ experimental features that are currently disabled when unset
[09:20] pedronis: this allows us to _reliably_ make _that_ specific change
[09:21] zyga: I disagree on the reliable part
[09:21] pedronis: can you explain please?
[09:22] zyga: please write in the forum info of the form: feature X is reliable if mount unit Z looks like this, all currently running apps for snaps were started by a snap-confine with tracking code, etc.
[09:23] then we can discuss details
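For a sense of what "the tracking was established" means here: with a named cgroup v1 hierarchy (the approach of PR #7547, opened later this day), a process that was already running when the hierarchy was set up sits at the hierarchy root instead of under a per-snap path. A rough sketch; the hierarchy name "snapd" is a guess for illustration, not necessarily what snapd ended up using:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // trackingPath returns this process's path in the named cgroup v1
    // hierarchy "name=<name>". Lines in /proc/self/cgroup have the form
    // "ID:controllers:path", e.g. "7:name=snapd:/snap.foo.bar".
    func trackingPath(name string) (string, bool) {
        f, err := os.Open("/proc/self/cgroup")
        if err != nil {
            return "", false
        }
        defer f.Close()
        s := bufio.NewScanner(f)
        for s.Scan() {
            fields := strings.SplitN(s.Text(), ":", 3)
            if len(fields) == 3 && fields[1] == "name="+name {
                return fields[2], true
            }
        }
        return "", false
    }

    func main() {
        if path, ok := trackingPath("snapd"); ok && path != "/" {
            fmt.Println("tracked under:", path)
        } else {
            // Processes already running when tracking was set up end up
            // at "/" - zyga's argument for enabling the feature at boot.
            fmt.Println("not attributable to a snap")
        }
    }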
[09:26] pedronis: context switch! we currently sub core for core16, do we have plans of doing the converse someday?
[09:26] Chipaca: I was asking myself the same question looking at your code yesterday
[09:26] :-)
[09:27] Chipaca: in principle maybe, but I wouldn't write code for that now, you can put a TODO somewhere though
[09:27] pedronis: the first generalization of baseUsedBy took an otherBases ...string, but I deemed it over-engineered for what we used it for
[09:27] yeah i gotcha
[09:31] pedronis, Chipaca: note that if we want to substitute core16 for core we need new logic
[09:32] in snap-confine at least
[09:32] zyga: in snapstate as well
[09:32] zyga: yes, I imagine we need various code everywhere
[09:32] this also amplifies the effect of a bug that needs design resolution on the forum
[09:32] (about snap tools)
[09:32] that's why I said it doesn't make sense to add support for it just in Chipaca's new code
[09:33] yeah, I think that should be done only after the core18 tooling situation is stable
[09:34] pedronis: as to your question, I'm responding to the forum thread but I will create a separate thread for the tracking features that in my eyes justify the need for such a system
[09:34] pedronis: I'll give you a link when I have that in written form
[09:34] pedronis: thank you for looking :)
[09:35] mvo: I just noticed your status update in yellow
[09:35] mvo: do you know if that was core18?
[09:39] zyga: that is a very good question - cachio will know, all I have is https://paste.ubuntu.com/p/WpNftfG5G2/ and that it's not a regression
[09:39] * zyga looks
[09:39] hmmmmm
[09:40] heh
[09:40] pedronis: zyga: core installed in a core18 system, then install a snap with base: core16, then install core16, then go to remove core - how far do you need to run?
[09:40] the bug is: we never ran system-shutdown
[09:40] mvo: ^
[09:40] mvo: please report that for tracking
[09:40] or it will be lost in the rain
[09:41] mvo: probably the glue logic for shutdown is missing
[09:41] mvo: I forgot how we inject system-shutdown into the initramfs
[09:41] anyway, something for later
[09:41] not for today
[09:41] today I really want to start this new feature work
[09:41] mvo: I updated #7462 to the Paris decisions
[09:41] mvo: what's that pastebin from?
[09:41] PR #7462: asserts: introduce explicit support for grade for Core 20 models
[09:42] mvo: that's missing things :-(
[09:49] * mvo is in a meeting
[09:51] * Chipaca hugs mvo for protecting us all from meetings
[09:53] Chipaca: pastebin is from cachio
[09:58] zyga, Chipaca: do you already know what's missing ?
[10:01] mvo: I know what didn't happen, but I don't know what's missing without more context
[10:01] Chipaca: ok
[10:01] mvo: there's an involved little dance that needs to happen for the system shutdown helper to be called
[10:01] it hasn't been called
[10:01] so the dance didn't happen, or didn't work
[10:02] or _something_ :)
[10:02] Chipaca: ok
[10:03] Chipaca: still in a meeting so can't poke at this right now :/
[10:03] mvo: time is immaterial, or of the essence
[10:10] Chipaca: pushed a small refactor to #7166 to actually make sense of the errors returned in the client. can you take another look?
[10:10] PR #7166: client: add doTimeout to http.Client{Timeout}
[10:11] mvo: a hunch, I can confirm in 10 minutes
[10:14] mborzecki: lgtm
[10:15] jamesh: thanks for the hint about the cache, i've tweaked the mount namespace of the snap and mounted a clean tmpfs over /var/cache/fontconfig, the fonts appear correctly now (although at the cost of a longer start time)
[10:15] mvo: zyga: core18 seems to not have /lib/systemd/system/snapd.system-shutdown.service
[10:15] that's the service that does the dance
[10:15] why is it not there?
[10:16] it was there before, I remember checking for this in core18
[10:16] Chipaca: thanks!
[10:17] mborzecki: interestingly the snaps using gnome-3-26-1604 are running with a newer fontconfig than those using gnome-3-28-1804
[10:17] It is a bit annoying to see that 2.14 is going to be different yet again though
[10:17] jamesh: hm, perhaps updating gnome-3-28-1804 would be enough?
[10:19] mborzecki: the 3-26 platform snap was built with a backport of new desktop libraries to 16.04, while 3-28 is just packaging what was in vanilla 18.04
[10:19] it's a bit harder to replace random libraries there
[10:19] mvo: zyga: oh, it's at /etc/systemd/system/snapd.system-shutdown.service in core18
[10:20] mvo: zyga: strange, but ok
[10:20] mvo: zyga: and if the pastebin is indeed from core18, I've just confirmed that core18 on amd64 does do the shutdown dance
[10:22] mvo: zyga: http://r.chipaca.com/core18-shutdown.png
[10:22] Chipaca: sorry, https://github.com/snapcore/snapd/pull/7445/files#r330474755
[10:22] PR #7445: overlord/snapstate/policy, etc: introduce policy, move canRemove to it
[10:23] Chipaca: nice, thank you! so we need cachio to help with details, I think this was on a pi
[10:24] Chipaca: but that should not make a difference :/
[10:24] mvo: maybe it's a fake core image, and whatever does the dance to set up the snapd.system-shutdown.service unit is missing?
[10:25] because that unit is in /lib in core, so that setup wouldn't be necessary for a core system
[10:25] Chipaca: that's interesting
[10:25] Chipaca: it would be fun if the following were true
[10:25] the fact that it's in /etc in core18 means something is adding it
[10:25] Chipaca: system-shutdown is broken by our extra mount points set up to execute tests on core systems
[10:25] seems like a strange decision
[10:26] Chipaca: but I think we need to get sergio to just tell us what was executed
[10:26] until then we know too little
[10:34] speak of the devil
[10:34] cachio: good morning!
[10:35] pedronis: which "other request"?
[10:35] Chipaca: to enumerate the keys being set
[10:35] ah
[10:35] pedronis: yeah, it's weird that you don't get to see that
[10:35] Chipaca, good morning!
[10:36] cachio: we've been puzzling over https://paste.ubuntu.com/p/WpNftfG5G2/
[10:36] cachio: what is the system running there? because it's missing something :) maybe
[10:37] Chipaca, here you have more details https://paste.ubuntu.com/p/WmVCD8SvTz/
[10:37] it is a pi3 running core18
[10:37] that one is more interesting
[10:37] Failed unmounting /var/lib/snapd.
[10:38] cachio: that is expected
[10:38] cachio: i think
[10:38] cachio: but that pastebin is different from the other in a crucial way
[10:38] snapd system-shutdown helper: started.
[10:38] ^ those lines
[10:40] cachio: that is: systemd _can't_ unmount everything cleanly because it's all very circular; only something completely static and mounted outside of /writable can do that. So the system-shutdown helper does that.
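The circularity, roughly. This is the usual Ubuntu Core layout, sketched from general knowledge of those images rather than from this particular boot log:

    /writable                            physical, writable partition
      └─ system-data/var/lib/snapd/snaps/core18_<rev>.snap
           └─ /                          rootfs: that squashfs, loop-mounted

systemd runs from /, and / is backed by a file inside /writable, so systemd can never cleanly unmount /writable from within; a statically linked helper that has pivoted out of the rootfs (into RAM) has to do the final unmounts.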
[10:40] Chipaca, the short pastebin is what I got from the screen doing copy&paste
[10:40] cachio: and the long one?
[10:40] for the long one I used the screen tool to log it
[10:40] with -L
[10:40] cachio: but it's not the same run, is it?
[10:40] i mean, the output is different
[10:40] Chipaca, no
[10:40] ok
[10:40] different runs
[10:41] cachio: so, the long one is fine AFAIK
[10:41] cachio: the short one is not
[10:41] cachio: what is the difference?
[10:41] Chipaca, the short one is after running the tests
[10:41] Chipaca, the long one is with a fresh image
[10:42] zyga: is that what you meant about the test bind mounts breaking things?
[10:42] Chipaca, is this expected? [FAILED] Failed unmounting /writable.
[10:42] cachio: yes, that is expected
[10:42] cachio: note further down: snapd system-shutdown helper: - was able to unmount writable cleanly
[10:43] Chipaca: yes, I suspect our test preparation breaks this mechanism
[10:44] Chipaca, it would be cool if we could teach systemd to omit that /writable message ... users notice the red more than the later message of the shutdown helper
[10:44] (orthogonal to your discussion indeed)
[10:45] ogra: I looked into that in the core timeline and didn't find a way
[10:45] ogra: there might be a way now but i haven't re-checked
[10:45] ah, k
[10:46] (perhaps we could omit the writable mount unit from the start ... ? after all we don't really need it, it is just the silly systemd policy of creating units for everything in fstab that creates it)
[10:47] do we even need it in fstab?
[10:48] good question
[10:49] (the initrd scripts generate fstab and add writable there ... someone would have to test if we could omit it)
[11:05] ogra, Chipaca: systemd doesn't need fstab, it understands the mount table directly
[11:06] fstab is a way to convey extra options
[11:06] ideally we would change things a little so that we can boot with just a systemd-created mount table
[11:06] but that ship has sailed AFAIK
[11:06] until perhaps core20
[11:06] mvo, xnox: is the experiment to boot with systemd in the initrd still ongoing?
[11:07] zyga: yes
[11:07] xnox: is it promising?
[11:08] zyga: however, systemd does require -.mount, or something like / declared in fstab.
[11:08] zyga: hence e.g. our lxd containers declare that in fstab
[11:08] xnox: _or_ a specific declaration in GPT
[11:08] xnox: then you can boot without /etc
[11:09] zyga: the -.mount unit needs to exist; whether the gpt-generator or the fstab-generator created it doesn't matter, that is true. or -.mount can be shipped in /usr/lib
[11:09] xnox, the problem we have is a loopback-mounted / ... where on shutdown systemd tries to pull the carpet out from underneath the rootfs by trying to unmount /writable
[11:09] xnox: yeah, agreed
[11:10] ogra: well, use finalrd to pivot into a shutdown initrd which then has everything in ram to kill all the things, and unmount all the things, cleanly. despite layering.
[11:10] ogra: or declare LazyUnmount=yes on /writable
[11:10] mm
[11:10] xnox, on shutdown you go back into the initrd nowadays?
[11:11] the initrd is removed on boot (as in removed from memory) but I may be mistaken
[11:11] how would you declare LazyUnmount=yes ? the mount unit is created by a generator ...
[11:12] so that generator would have to know that /writable is a special thing while creating the unit [11:12] ogra: I think we don't need lazy, we need to describe how we actually booted in the first place -- I think then systemd should be able to undo that [11:12] (but that sounds like a possible solution actually) [11:12] but anyway, I have things to do, this is in good hands [11:12] zyga, not sure because what we do is actually very hackish [11:13] ogra: let's just make ubunt-core cloud native: let's boot core in a small container created by a regular system ;) [11:15] s/regular system/initrd/ ... [11:15] :) [11:15] zyga: ogra: not back into initrd, but pivot_root to /run/initramfs/ something that looks like initrd, is in ram, but has /shutdown instead of /init [11:15] $ apt install finalrd [11:15] yeah [11:15] $ sudo finalrd [11:15] xnox: this matches my memory [11:15] and checkout what's in /run/initramfs/ [11:16] xnox: zyga: ogra: snapd.system-shutdown.service does the run/initramfs thing [11:16] not initramfs but whatever [11:16] yep [11:16] it sets up things to be pivoted into [11:16] that bit is fine [11:16] xnox, oooh, thats sexy ... ! whose baby is that, yours ? [11:16] Chipaca: yeah, but as discussed with mvo need to check if that does everything / enought / right [11:16] (yesterday on a different call with shutdown issues) [11:17] xnox: i hadn't heard that, but ok [11:17] maybe LazyUnmount is all that's needed for the angry red to go away [11:17] * Chipaca tries [11:17] yeah, but how to get it into the unit ? [11:18] you'd have to hack the generator and special-case /writable [11:18] Chipaca can create /run/systemd/system/writable.mount.d/lazy.conf with [Mount] LazyUnmount=yes [11:18] ogra: it's a unit.... and supports drop-ins..... like everything [11:18] oh, indeed ! [11:19] yep am already trying that [11:19] in /etc not /run but w/e [11:20] xnox: also also why is bash-completion not in core18 :-( [11:20] the question is ... does that at least call sync to make sure ram is flushed to disk ? so we dont end up with fs corruption [11:20] ogra: system-shutdown helper does the actual unmounting [11:21] Chipaca, i know but in case of Lazy... would we still need the shutdown helper if systemd calls sync ? [11:22] ogra: systemd can't unmount /writable [11:22] it _can't_ [11:22] systemd is _running_ from /writable [11:22] from a loopback filesystem mounted from /writable to / [11:22] well, i know :) [11:23] but its fine to not unmount if you are able to flush everything to disk [11:24] adding LazyUnmount=yes to /writable leads to system-shutdown helper not being able to find it [11:25] so that one is a no [11:25] ehhh [11:25] maybe i should let other people worry about this [11:25] * Chipaca has other stuff to worry about [11:27] Chipaca: what is your original issue? maybe i can take over it? [11:28] xnox: nothing serious: core18 (as core) prints a nasty WARNING about not being able to unmount /writable [11:28] Chipaca: on shutdown, right? [11:28] xnox: that is worrisome for users [11:28] Chipaca: i think that is fixable. [11:28] shutdown/halt/reboot/etc [11:29] gotcha [11:29] xnox: the system-shutdown helper then kicks in and makes sure everything is alright [11:29] OTOH, and the point where I would have to sink time into this, is whether this LazyUnmount thing actually does the right thing somehow? [11:29] dunno [11:29] Chipaca: which is same as on live media today, when booted with casper. 
[11:20] xnox: also also why is bash-completion not in core18 :-(
[11:20] the question is ... does that at least call sync to make sure ram is flushed to disk ? so we don't end up with fs corruption
[11:20] ogra: the system-shutdown helper does the actual unmounting
[11:21] Chipaca, i know but in case of Lazy... would we still need the shutdown helper if systemd calls sync ?
[11:22] ogra: systemd can't unmount /writable
[11:22] it _can't_
[11:22] systemd is _running_ from /writable
[11:22] from a loopback filesystem mounted from /writable to /
[11:22] well, i know :)
[11:23] but it's fine to not unmount if you are able to flush everything to disk
[11:24] adding LazyUnmount=yes to /writable leads to the system-shutdown helper not being able to find it
[11:25] so that one is a no
[11:25] ehhh
[11:25] maybe i should let other people worry about this
[11:25] * Chipaca has other stuff to worry about
[11:27] Chipaca: what is your original issue? maybe i can take over it?
[11:28] xnox: nothing serious: core18 (as core) prints a nasty WARNING about not being able to unmount /writable
[11:28] Chipaca: on shutdown, right?
[11:28] xnox: that is worrisome for users
[11:28] Chipaca: i think that is fixable.
[11:28] shutdown/halt/reboot/etc
[11:29] gotcha
[11:29] xnox: the system-shutdown helper then kicks in and makes sure everything is alright
[11:29] OTOH, and the point where I would have to sink time into this, is whether this LazyUnmount thing actually does the right thing somehow?
[11:29] dunno
[11:29] Chipaca: which is the same as on live media today, when booted with casper. There are similar warnings about not being able to unmount /cdrom, which is like the same thing.
[11:29] if the shutdown helper is no longer needed, huzzah :)
[11:30] xnox: yeah
[11:30] Chipaca: the other alternative i was looking at is to declare [Unit] DefaultDependencies=no => such that the initial run of unmounts doesn't try to unmount writable at all.
[11:30] but the later helper undoes it all
[11:32] xnox: that does seem to work
[11:33] at least wrt not printing the warning
[11:33] the system-shutdown helper logging sometimes doesn't make it to the terminal, for some reason, which is weird
[11:33] cachio: ^^^ that's relevant to your pastebin
[11:33] sometimes things don't appear where i expect them to be :-|
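xnox's alternative from [11:30], again as a drop-in sketch; the file name here is invented. DefaultDependencies=no drops the default shutdown ordering and conflicts for writable.mount, so systemd's initial unmount pass leaves it alone and the system-shutdown helper handles it later:

    # /etc/systemd/system/writable.mount.d/no-default-deps.conf
    [Unit]
    DefaultDependencies=no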
[11:45] Chipaca, i'll try to run the tests 1 by 1 and reboot to see if at some point we see that
[11:49] pstolowski: I reviewed #7518 , a couple of things look a bit off
[11:49] PR #7518: cmd/snap: sort tasks in snap debug timings output
[11:50] pedronis: ty
[12:07] lunch, you say?
[12:08] * Chipaca bows to the inevitable
=== ricab is now known as ricab|lunch
[13:17] PR snapd#7166 closed: client: add doTimeout to http.Client{Timeout}
[13:30] PR snapd#7527 closed: interfaces/input: support confined snaps accessing keyboard and mouse directly
[13:36] "take up running", they said. "it's healthy", they said.
[13:37] Chipaca: who said that? they're dumb
[13:38] that was my knee talking, you may disregard
[14:11] PR snapd#7546 closed: daemon: add a 'prune' debug action
=== ricab|lunch is now known as ricab
[14:48] cachio: about the 'snap logs' issue, i think the test is buggy
[14:48] cachio: it assumes the only lines in the log come from the unit, but they can also be from systemd
[14:48] cachio: and the systemd ones happen 'at random' (not really random, but from the PoV of the test they are unpredictable)
[14:56] * Chipaca adds support for refresh.metered=hodl
[15:00] hodl !
[15:01] hodor
[15:01] i submitted a form to register the "maas-cli" snap in snapcraft.io (being that it's reserved), but I have not received a response and it's been a few days
[15:01] did I go about that the correct way?
[15:02] roadmr: is that a you? ^
[15:02] * roadmr ducks behind the nearest shrub
[15:03] Chipaca, blake_r : let me check the queue
[15:03] just use Chipaca's hodl to hide behind
[15:05] blake_r: I see blake-maas-cli which is approved
[15:06] roadmr: yeah the form required me to use that name
[15:06] roadmr: but I really wanted maas-cli
[15:06] roadmr: if you could have it owned by Canonical, that would be good as well ;-)
[15:06] roadmr: as the "maas" snap is going to use a content interface to the maas-cli snap
[15:07] blake_r: I can't rename a snap; can you please request maas-cli again? if it asks you to file a dispute, please do so, then I can approve it and give you the name
[15:07] i tried that, but the form hit a validation error
[15:07] let me try again
[15:07] blake_r: try this form: https://dashboard.snapcraft.io/register-snap/
[15:08] roadmr: hmm worked that time
[15:08] roadmr: should be there
[15:08] 💪
[15:08] roadmr: can the blake-maas-cli be removed?
[15:09] blake_r: I can revoke it
[15:09] roadmr: can you also do "meta-maas-blake"?
[15:09] revoke that one? sure
[15:09] roadmr: sure, just to keep my snapcraft account clean!
[15:09] roadmr: thank you
[15:09] roadmr: now how do I go about getting maas-cli to be owned by Canonical?
[15:09] roadmr: do I need to push a snap first?
[15:10] blake_r: we transfer it to the canonical account; we typically only do that once there's a snap there, preferably on stable. Fine to keep under your account while the snap is still developer-grade
[15:10] blake_r: (to be clear, you ask us to transfer it, preferably via snap-store-admins@lists.canonical.com)
[15:10] please :)
[15:11] roadmr: will do
[15:11] thanks :)
[15:11] roadmr: thank you!
[15:12] pedronis_: running tests with the new cgroup idea
[15:13] blake_r: meta-maas-blake is in some weird state, it'll take me a bit to fix it.
[15:15] Chipaca, ok, so I'll update the test
[15:15] Chipaca, thanks for the info
[15:20] blake_r: I think I fixed meta-maas-blake
[15:20] roadmr: yeah I see it's gone, thanks!
[15:23] PR snapd#7547 opened: many: use a dedicated named cgroup hierarchy for tracking
[15:25] who knows about the snap store proxy these days?
[15:25] zyga: nice, surprisingly small
[15:26] it has some cleanups that I can fork off and do separately
[15:26] still working on it though :)
[15:26] PR snapd#7548 opened: tests: update snap logs to match for "test-snapd-service" instead of "running"
[15:27] I have an update to my snap, I incremented the version in the yaml, built a new snap and pushed it. But the version on the listing page did not increment. What is the correct way to do this ?
[15:27] oh, github has added multi-line comments!
[15:28] joeubuntu: what is the listing page?
[15:28] hey zyga https://snapcraft.io/teamtime
[15:28] joeubuntu: did you release the pushed revision?
[15:30] snap info teamtime shows a revision from today in stable
[15:30] Chipaca maybe :)
[15:30] ogra: yes but it's revision 1
[15:30] * zyga is hungry for real lunch
[15:30] ogra, but if I install it, it pushes the old version.
[15:30] that breakfast-for-lunch was not a good idea
[15:30] joeubuntu: the revision in stable is revision 1
[15:30] joeubuntu: that's not an update of anything :)
[15:31] :-)
[15:31] Chipaca , how would I increment that? I pushed a 2.0 up
[15:31] joeubuntu: release
[15:31] zyga, try brunch-for-dinner instead ... way better ...
[15:31] joeubuntu: snapcraft list-revisions teamtime
[15:31] joeubuntu: identify the revision you want to be in stable
[15:32] joeubuntu: then, snapcraft release teamtime <revision> stable
=== pedronis_ is now known as pedronis
[15:32] joeubuntu: there are flags on push to do this in a single step
[15:32] i.e. snapcraft push --release= the.snap
[15:32] cool. it worked, thanks Chipaca !
[15:33] shocking :)
[15:33] great, thanks for the help.
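The flow Chipaca walks joeubuntu through, collected in one place. The revision number and snap file name below are illustrative guesses; note that snapcraft release takes the revision between the snap name and the channel:

    $ snapcraft list-revisions teamtime      # find the revision you just pushed
    $ snapcraft release teamtime 2 stable    # release that revision to stable
    # or push and release in a single step:
    $ snapcraft push --release=stable teamtime_2.0_amd64.snap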
[15:33] joeubuntu: this was all a marketing ploy wasn't it
[15:33] I think looking at my revision history you'd see it took me at least 4 tries :)
[15:35] Chipaca, after running the snapd-failover test the reboot is not triggering the snapd system-shutdown helper anymore
[15:36] Chipaca, do you know if there is any way to restore it
[15:36] so I don't need to reflash the sd card
[15:43] cachio: zyga was the one with insight into it not working i think
[15:43] wrt mounts?
[15:51] * zyga runs
[15:51] I didn't look at specifics
[15:51] just said it felt like something we would break
[15:51] ;-)
[15:51] it may also
[15:51] indicate
[15:51] that there's a real bug
[15:51] unrelated to the test itself
[15:56] fixed a bunch of silly stuff, trying again
[15:56] I think the refresh-app-awareness test will now pass
[15:56] the management scripts may need tweaking to unmount that though
[15:56] * zyga adds a todo
[16:01] Chipaca or mvo am I good to merge #7545 now? I fixed the issue with the spread test and it's green with 3 approvals
[16:01] PR #7545: cmd/model: don't show model with display-name inline w/ opts
[16:02] zyga, after running any test the reboot is not triggering the snapd system-shutdown helper anymore
[16:02] PR snapd#7545 closed: cmd/model: don't show model with display-name inline w/ opts
[16:02] but if I restart the board again
[16:02] ijohnson: I have not looked and I'm in a meeting but if it has 3 +1s - sounds like it
[16:02] ah well pedronis merged it, thanks!
[16:03] then the snapd system-shutdown helper is triggered
[16:03] cachio: probably something nukes the shutdown state
[16:03] zyga, yes
[16:03] I'll continue with this after lunch
[16:03] * cachio lunch
=== pstolowski is now known as pstolowski|afk
[16:29] and now it passes, cool
[16:29] ok, wrapping up soon
[16:30] need to adjust the mount-ns test to reflect this
[16:30] but I'm super positive this is the way forward
=== ricab_ is now known as ricab
[16:39] mvo: https://github.com/snapcore/snapd/pull/7547 pushed
[16:39] PR #7547: many: use a dedicated named cgroup hierarchy for tracking
[16:39] it should pass all tests (we'll see what happens)
[16:39] I'll work with maciek on it tomorrow
[16:39] probably chop it into a smaller part to focus on the essentials
[16:39] and add some more tests
[16:39] next up: the release agent :)
[16:40] this is so cool
[16:46] pedronis: ^ if you want to take a peek
[16:46] pedronis: I added a TODO to the pull request
[16:48] * zyga EODs
[16:48] enjoy your evenings :)
=== ijohnson is now known as ijohnson|lunch
[18:12] i got a content interface question
[18:12] so i created a content interface where i want to expose the maas-cli executable to the maas snap
[18:12] the interface works correctly
[18:12] but when the maas snap calls the maas-cli binary none of the SNAP_ variables are set for maas-cli, they are from the maas snap instead
[18:13] any ideas on how I can get the environment to be the maas-cli SNAP when calling the binary from the maas snap?
[18:14] blake_r: sounds like a question for the forum
[18:14] blake_r: also note _most_ (but not all!) of the snapd core team is in eu timezones
[18:14] blake_r: but that question in particular is best answered by our tame polish horde
[18:15] probably
[18:15] anyway, i'm not here, i'm having dinner
[18:15] yeah your messages are invisible ;-)
[18:15] enjoy dinner
[18:32] blake_r: hey
[18:33] blake_r: can you take a step back
[18:33] blake_r: and tell me what you want to achieve
[18:33] the goal is to keep the "maas-cli" in its own snap
[18:33] when installing the "maas" snap it will also install "maas-cli" using a content interface
[18:33] blake_r: while you can change or just ignore SNAP_ environment variables, it won't change what happens at runtime; the program will run with the permissions of the snap it is used from, not the snap it is coming from
[18:34] zyga: is it possible to allow a strict snap to call another strict snap's executable? from the system root, per se
[18:35] so like /snap/bin/maas to call /snap/bin/maas-cli ?
[18:37] yes, there are snaps doing that (the wine content snap comes to mind)
[18:38] okay I will give it a try
[18:38] that might be all that is needed, if that is possible
[18:43] https://paste.ubuntu.com/p/qCFGsVPtxb/
[18:43] ogra: doesn't seem possible
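For reference, the shape of a content interface that shares an executable tree looks roughly like this. The snap, slot, and path names are made up to match blake_r's description, and, per zyga's point above, the shared bits still run with the calling snap's permissions and environment:

    # snapcraft.yaml of the (hypothetical) maas-cli snap - the producer:
    slots:
      maas-cli-bits:
        interface: content
        content: maas-cli
        read:
          - $SNAP/bin

    # snapcraft.yaml of the maas snap - the consumer:
    plugs:
      maas-cli-bits:
        interface: content
        content: maas-cli
        target: $SNAP/maas-cli
        default-provider: maas-cli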
[18:47] rbasak: I guess I'm way too late but I was wondering if you might have a look at the snapd sru in the unapproved queue?
[18:49] re
[18:49] blake_r: no, not today
[18:49] blake_r: you cannot do that with the usual semantics of "that other snap is running"
[18:49] blake_r: you can at most share the bits and run them as yourself
[18:49] zyga: okay, will try to get it to work by just importing the python library
[18:49] blake_r: if you need to share things more you need an API and you need to make requests to a service
[18:49] zyga: i think that will work, as that is all that is really needed
[18:51] perhaps try talking to @mmtrt on the forum ... https://forum.snapcraft.io/t/auto-connections-of-wine-base-stable-wine-base-devel-and-wine-base-staging/11229
[18:52] he provides snaps that layer one on top of each other to make wine applications work via content sharing
[18:52] like an onion :)
[18:53] (might also make you cry ... not sure :) )
=== ijohnson|lunch is now known as ijohnson
[20:53] PR core18#140 opened: run-snapd-from-snap: check for snapd.service existing too
[21:26] * cachio EOD