[03:36] hmm, there's something very wrong with auto-connected interfaces here
[03:36] "snap install lxd" on a completely clean system (no core snap yet) leads to none of our interfaces being connected
[03:36] removing the snap and reinstalling it then gets things connected properly
[03:47] https://forum.snapcraft.io/t/auto-connected-interfaces-missing-on-initial-snap-install/4850
[03:48] this is obviously rather critical
[04:15] confirmed to affect all snaps on initial install (when no core snap is present)
[04:56] morning
[04:58] mborzecki: good morning, looks like you're the first snap person around today, you may want to take a look at https://forum.snapcraft.io/t/auto-connected-interfaces-missing-on-initial-snap-install/4850
[05:01] stgraber: looking
[05:16] stgraber: working locally with latest snapd, even if i purge everything and start with clean state, i'm trying cloud image now
[05:17] mborzecki: 16.04 cloud image should be affected, that's effectively what MAAS uses
[05:18] mborzecki: it may or may not be related to re-exec, so latest snapd on your system may get you a different result, not sure
[05:20] stgraber: yeah, something is off: https://paste.ubuntu.com/p/6P4JpMhgPf/
[05:21] that's on cloud image
[05:21] yup, matches what I'm seeing here and what our users have been reporting
[05:23] and that's my host: https://paste.ubuntu.com/p/hDzWMgY2zb/
[05:48] stgraber: left a note, i think the approach needs to be discussed with mvo/pstolowski/pedronis
[05:48] stgraber: refreshing the core snap first should work around the problem
[05:49] mvo: speaking of mvo :P
[05:49] mvo: morning
[05:50] mborzecki: goooood morning
[05:50] mborzecki: what did I miss :)
[05:50] ?
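The workaround mentioned above, spelled out as a console sketch (untested here; the idea is to get the core snap installed, and snapd re-executing into it, before installing the snap you actually care about):

```
# hypothetical sequence on an affected clean 16.04 system
$ sudo snap install core        # get the core snap and the snapd restart out of the way
$ sudo snap install lxd         # this task list is now built by the new snapd
$ snap interfaces | grep lxd    # verify the expected auto-connections are present
```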
[05:51] mvo: stgraber reported a problem with auto-connect https://forum.snapcraft.io/t/auto-connected-interfaces-missing-on-initial-snap-install/4850
[05:51] mvo: i looked around and my observation is that we're missing the 'auto-connect' task because the task list is created using the old 'snapd'
[05:53] mborzecki: yeah, I was just reading the forum
[05:53] mborzecki: I think your analysis makes sense, that is most likely the issue
[05:54] mvo: i looked in state.json and there's no auto-connect when old snapd creates the task list
[05:54] mborzecki: yeah, this makes sense. so it's the race when you don't have a core snap and it restarts itself
[05:55] mvo: do you think we could somehow patch it when core updates?
[05:57] mborzecki: yeah, I think we need to add compat code into the "old" task (setup-profiles)
[05:58] mborzecki: I mean, something like (in setup-profiles): scan the task list and if the task is missing either inject it or run it
[05:58] mvo: otoh, maybe we should rewrite the whole pending task list if core is updated in the process
[05:58] mborzecki: we will need pawel here
[05:59] mborzecki: yeah, it's a bit of a generic problem
[06:01] PR snapd#4972 opened: cmd/snap-mgmt: remove timers, udev rules, dbus policy files
[06:20] mborzecki: I followed up in the forum, let's discuss with pawel once he is here
[06:20] mvo: ok
[06:30] good morning
[06:31] * zyga catches up with IRC
[06:31] stgraber: ack,
[06:32] oh, good analysis indeed mborzecki
[06:32] I would love if we could formalize the reexec protocol limitations somewhere
[06:32] zyga: hey, morning
[06:32] so we don't run into this willy nilly again
[06:32] mvo: do you consider this a release blocker?
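mborzecki's state.json check can be done mechanically. A minimal sketch, assuming only that tasks in the state carry a "kind" field — the JSON below is a trimmed illustrative mock, not snapd's exact schema; on a real system the file to inspect is /var/lib/snapd/state.json:

```shell
# Build a mock state fragment whose install task list lacks the "auto-connect"
# kind, mimicking what the old snapd writes (layout is illustrative only).
state_file=$(mktemp)
cat > "$state_file" <<'EOF'
{"tasks":{"1":{"kind":"download-snap"},"2":{"kind":"setup-profiles"},"3":{"kind":"link-snap"}}}
EOF

# On a broken install the grep below finds nothing.
if grep -q '"kind":"auto-connect"' "$state_file"; then
    result="present"
else
    result="missing"
fi
echo "auto-connect task: $result"
rm -f "$state_file"
```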
[06:41] zyga: I think we've satisfied everyone's requirements as far as my internet check PR goes
[06:41] zyga: also, could you please change leap version in: https://github.com/canonical-docs/snappy-docs/pull/380
[06:41] PR canonical-docs/snappy-docs#380: Use address --refresh on openSUSE
[06:42] from 42.2 to 42.3
[06:42] I would have submitted my own, but then it would conflict with yours
[06:42] Caelum: yes, I will do that now
[06:42] thank you
[06:43] Caelum: I prepared an update that enables apparmor but I need to send a mail to the suse security team and discuss some topics
[06:43] it's pretty complex how all those things interact
[06:43] nice
[06:43] the goal is to land an apparmor-enabled version into the system:snappy and start the discussion with the security team on how to add snapd to factory proper
[06:44] that would be awesome
[06:44] zyga: good morning! yes, I think this is a blocker
[06:45] zyga: I wonder if we need to re-think re-exec, maybe something like: if there is a new core, always *only* refresh that then re-exec and do the rest
[06:45] mvo: yes, I think this makes a lot of sense
[06:46] mvo: or introduce a formal protocol where snapd "new" can re-interpret partially finished tasks
[06:46] mvo: but whatever we do needs to work all the way back
[06:46] or we would always have to adjust old versions
[06:46] or provide updates/backports
[06:47] I think doing this on the "new" side would be conceptually harder but easier to deploy
[06:47] we could also do both sides so in the future it is easier in general
[06:47] Caelum: PR updated
[06:49] mvo: in this case, do we have enough information to synthesize new tasks on snapd startup?
[06:49] mvo: cook old snapd state, start overlord, look at state;
[06:49] zyga: awesome thank you!
[06:49] mvo: the new state should include the desired set of tasks
[06:49] zyga: I think so, in this case I think we can just insert an auto-connect task
[06:49] Caelum: I'll poke people to merge it soon
[06:49] Caelum: thank *you* :-)
[06:50] yes, that's my thinking
[06:50] zyga: and then we need to discuss a more general approach
[06:50] it should be simple enough for this one-off case
[06:50] zyga: it's definitely making things very complicated (re-exec)
[06:50] zyga: yeah
[06:50] but yes, definitely a topic for discussion
[06:50] mvo: it's so much simpler to test and evaluate in distros where we don't support it
[06:50] mvo: so I agree totally
[06:50] zyga: I want to wait for pawel but hopefully it's easy, we have code for task injection already
[06:50] mvo: alternative, make shallow snapd
[06:51] mvo: that doesn't do stuff
[06:51] just reexecs
[06:51] sounds good, I'll get back to fstab
[06:51] zyga: yeah, but even with the shallow one we will need to deal with upgrades
[06:51] zyga: i.e. when to re-exec on upgrades
[06:52] zyga: but yeah, a much bigger topic that does not fit well into irc :)
[06:52] shallow one would not contain any logic apart from "fetch core in super generic way, re-exec"
[06:52] zyga: sure, that would fix step 0. but what if we have full snapd and refresh to full snapd+1 with new tasks
[06:53] ohh
[06:53] yes
[07:12] morning
[07:15] moin moin
[07:15] o/ pstolowski
[07:17] hey pawel
[07:17] we have some important work for you
[07:17] pstolowski: thanks for your reply in the forum!
[07:18] zyga: the auto-connect issue?
[07:19] indeed
[07:19] mvo: https://github.com/snapcore/snapd/pull/4973
[07:19] PR #4973: osutil: fix fstab parser to allow for # in field values
[07:19] I'll make a backport after this lands
[07:19] PR snapd#4973 opened: osutil: fix fstab parser to allow for # in field values
[07:20] zyga: ta
[07:24] mborzecki: https://github.com/snapcore/snapd/pull/4938 updated
[07:24] PR #4938: release-tools: add repack-debian-tarball.sh
[07:29] 4969 needs a second review
[07:32] zyga: yeah, i find `find .. -exec` utterly confusing syntax wise and replace it with `| xargs` where possible (also xargs has the nice -P switch which i abuse when possible)
[07:37] mborzecki: but find ... -exec works for any number of files; xargs will hit the kernel argument size limit eventually
[07:40] pstolowski: 4968 + 1 but add a test please
[07:42] zyga: will do, thanks
=== pbek_ is now known as pbek
[07:43] mvo is https://github.com/snapcore/snapd/pull/4951 a 2.32 backport candidate
[07:43] PR #4951: interfaces/desktop-legacy: allow access to gnome-shell screenshot/screencast
[07:44] it needs gustavo review
[07:44] zyga: it is, but blocked right now
[07:46] it seems we need a new test that is not about upgrades but about starting installing from stable deb
[07:48] pedronis: indeed
[07:57] PR snapd#4974 opened: ifacestate: injectTasks helper
[07:58] mvo zyga pedronis can you please take a look at ^ ? it's a helper that I initially meant to propose with interface hooks, but it's going to be useful now for the auto-connect fix
[07:59] ack
[07:59] * zyga sees a curious error:
[07:59] moin moin
[07:59] https://www.irccloud.com/pastebin/pVrMuty3/
[07:59] Chipaca: hey
[08:00] pstolowski: did you see zyga's comment (somewhere, on the forum maybe?) about making sure core is installed before doing anything further? wouldn't this solve the autoconnect thing?
[08:00] Chipaca: this is about core
[08:02] Chipaca: it's installed first, it's just that the next install is not set up properly
[08:02] pedronis: because the tasks are created by the old snapd?
[08:02] yes
[08:03] the other issue is a bit different
[08:03] k
[08:03] is about trying to run hooks while core is not active
[08:03] that needs to wait in some form
=== fnordahl_ is now known as fnordahl
[08:04] Chipaca: to be clear, I should probably write this on the forum, but first doesn't mean a lot in our world, we either need to setup dependencies or wait
[08:05] you can always start an install and a different change at the same time
[08:05] mvo: https://github.com/snapcore/snapd/pull/4973 updated
[08:05] PR #4973: osutil: fix fstab parser to allow for # in field values
[08:05] zyga: ta!
[08:05] Chipaca: https://github.com/snapcore/snapd/pull/4938 needs your re-review
[08:05] PR #4938: release-tools: add repack-debian-tarball.sh
[08:06] oh
[08:06] zyga: no it doesn't
[08:06] you just did
[08:06] =)
[08:06] thanks :)
[08:06] despite you telling me your comment was *fine* when it wasn't :-)
=== tinwood_ is now known as tinwood
[08:06] sorry ^_^
[08:06] I changed this a few times locally and I forgot what it was then
[08:07] 'twas fine, just confusing (but you got it sorted in the end)
[08:07] with that merged we will just have 6 more scripts from mvo's stash to port ;)
[08:07] pstolowski: are you looking also into writing the spread test? start with current stable deb, and install a snap that needs autoconnections, probably needs the fakestore to use the latest code for snapd
[08:08] zyga: *cough*
[08:08] zyga: I'm looking at this as well now
[08:08] well, ok, 9
[08:11] mvo: I have a small README file to add there next
[08:12] wow, it's really spring this time, +16 in the shade and sunny
[08:12] zyga: thanks, added a small comment
[08:12] mborzecki: it's bound to be 22 next monday
[08:13] I cannot wait for coding sessions in the park :)
[08:14] zyga: mvo: you both make good points about cpuinfo
[08:14] zyga: mvo: note however I am doing what apport does, there
[08:14] hah, can't wait to start bicycling again
[08:15] zyga: mvo: and what it does is fine, i think: if it doesn't look like the x86 ones, it ships the whole file
[08:15] (and the non-x86 ones i've seen so far are saner in this sense)
[08:15] Chipaca: yeah, I think it's fine. also x86 will be the 99%
[08:15] (for now at least)
[08:16] mvo: non-x86 is where a lot more bugs happen though :-) proportionally
[08:18] mvo: did you see intel shares drop 10%
[08:18] mvo: when apple announced macs will move to arm in 2020
[08:19] well, not arm specifically but probably
[08:19] (some people think it might be riscV as well since it is free)
[08:19] and the core design is custom anyway, just instruction set was used
[08:21] pstolowski: can we close https://github.com/snapcore/snapd/pull/4965
[08:21] PR #4965: ifacestate: injectTasks helper
[08:22] zyga: yes, done
[08:23] PR snapd#4965 closed: ifacestate: injectTasks helper
[08:23] zyga: pushed an update to #4972
[08:23] PR #4972: cmd/snap-mgmt: remove timers, udev rules, dbus policy files
[08:24] siigh
[08:24] is it the 90s again
[08:24] except there's no NeXT
[08:24] mborzecki: thanks, looking
[08:25] apple going off and doing its own CPUs, not doing anything really new, fans still being fans
[08:25] mborzecki: ack
[08:25] thank you :)
[08:25] mvo: that's 2.32.3 right?
[08:25] mborzecki: did you shellcheck everything new?
[08:25] Chipaca: it's the 90s but we have linux
[08:26] zyga: i had linux in the 90s
[08:26] mborzecki: yes
[08:26] but linux now sucks less ;-)
[08:26] zyga: yeah, it will complain about -exec and recommend -execdir (but that's expected)
[08:26] zyga: i've been on linux since 1994, and i switched because it sucked less
[08:27] mborzecki, zyga hm, if shellcheck does not like it, what's the advantage over find|xargs?
[08:27] mborzecki: interesting, I didn't know about execdir
[08:27] can we just use execdir then
[08:27] it looks very sane
[08:27] force push?
[08:27] mvo: xargs has size limits
[08:27] (trying to keep the commit count low)
[08:27] there's -n
[08:27] though
[08:27] and you can use + form to make fewer calls to rm even :)
[08:28] mborzecki: we can squash it on merge but I'm fine with force push too
[08:28] heh, -n, + bikeshedding :P
[08:28] yeah, this feels like we are deep in bikeshed land :)
[08:29] it's shell ;)
[08:29] it's the perfect candidate
[08:29] zyga: what's that about size limits?
[08:30] PR snapd#4975 opened: osutil: fix fstab parser to allow for # in field values (2.32)
[08:31] Chipaca: xargs wasn't smart and could run into issues with the size of cmdline
[08:31] that's why I traditionally don't like using it
[08:32] mvo: hi! do you know why core18 wasn't pushed to the store, even though it was built successfully for amd64? does it wait for all architectures to build?
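For the -exec/-execdir/xargs tangent: find's `-exec ... +` terminator batches arguments much like xargs does, and both split into multiple invocations to stay under the kernel's argument-size limit. A runnable sketch with throwaway files:

```shell
# Create a scratch directory with a few files and remove them two ways.
workdir=$(mktemp -d)
touch "$workdir/a.timer" "$workdir/b.timer" "$workdir/c.rules"

# -exec ... + batches paths into as few rm invocations as possible,
# splitting automatically if the argument list would exceed system limits.
find "$workdir" -name '*.timer' -exec rm -f {} +

# The xargs equivalent; xargs likewise splits on the size limit,
# and -n additionally caps how many arguments go to each invocation.
find "$workdir" -name '*.rules' -print0 | xargs -0 -n 50 rm -f

remaining=$(find "$workdir" -type f | wc -l)
echo "files left: $remaining"
rm -rf "$workdir"
```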
[08:32] https://www.irccloud.com/pastebin/srWqXthX/
[08:32] Chipaca: ^
[08:33] zyga: Chipaca: iirc xargs will call the command a number of times if it hits a limit, but i'm not sure if that applies to all versions in the wild
[08:33] (or anything else than gnu)
[08:33] BjornT_: let me check
[08:33] zyga: ah
[08:34] BjornT_: I manually published it now, but let me try to figure out why it wasn't done so automatically
[08:35] mborzecki: that's intrinsic to xargs, yes, but see zyga's pastebin
[08:35] mvo: can we override gustavo's -1 since the issue is fixed now
[08:35] https://github.com/snapcore/snapd/pull/4930/files
[08:35] PR #4930: skip test that requires internet when not present
[08:35] Chipaca: ack
[08:35] how do I use adt-buildvm-ubuntu-cloud to get a bionic image for spread?
[08:35] in xenial
[08:36] (this is from man xargs)
[08:36] (it looks for a .disk1.img that's not there)
[08:36] zyga: looking
[08:36] Chipaca: adt-buildvm-ubuntu-cloud -r bionic does not work?
[08:37] mvo: thanks! i did confirm that the built core18 snap worked together with maas and snapcraft 2.40, btw
[08:37] mvo: it tries “Downloading https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64-disk1.img” and then crashes with a 404
[08:37] 404 -> traceback, because python
[08:37] Chipaca: iirc cloudimg' don't have .disk1.img
[08:37] but 404 nontheless :-)
[08:37] nonetheless*
[08:37] Chipaca: oh, indeed, iirc you need to do: "sudo apt install -t xenail-backports apport"
[08:37] Chipaca: to get the non-ancient version of apport
[08:37] Chipaca: eh, adt
[08:37] _apport_? for cloud images?
[08:37] Chipaca: silly me
[08:37] ah :-)
[08:37] Chipaca: autopkgtest
[08:38] Chipaca: but let me quickly double check
[08:38] Chipaca: yes, please try that
[08:38] (the version of *autopkgtest* from xenial-backports)
[08:38] E: The value 'xenail-backports' is invalid for APT::Default-Release as such a release is not available in the sources
[08:38] Chipaca: without my typo :(
[08:38] * Chipaca fixes the typo and tries again
[08:38] * Chipaca is copy-pasting commands that start with 'sudo'
[08:39] mvo: I _trusted_ you! /o\
[08:39] Chipaca: sorry for letting you down!
[08:39] I'm used to it by now :-p
[08:39] anyway, that worked, thanks!
[08:39] * mvo weeps a bit in the corner
[08:39] Chipaca: happy that it worked, this should give you an image
[08:40] yep yep, it's doing its thing
[08:40] damn, forgot to start the pomodoro timer
[08:42] mborzecki: for what specifically?
[08:42] mvo: work :P
[08:43] mvo: there's a nice gnome shell extension http://gnomepomodoro.org/
[08:43] mborzecki: do you use a physical one or software? if software, which one
[08:43] mborzecki: heh - you answered already
[08:43] hello
[08:44] I think snap or apparmor broke my lxd install. Is this the correct place to look for help?
[08:44] hey timp
[08:44] yes, what happeend?
[08:44] happened
[08:44] zyga: thanks :)
[08:44] mvo: forces breaks on you, reminds me to do some stretching, squats etc
[08:44] yesterday, my lxd worked fine. Today it does not. (I did an apt upgrade yesterday, that may be related. No snap refresh though)
[08:44] $ lxc list
[08:44] mborzecki: zyga: not sure why I'm looking at this, but freebsd's find also has -execdir (but its xargs doesn't have --show-limits)
[08:44] cat: /proc/self/attr/current: Permission denied
[08:44] /snap/lxd/6492/commands/lxc: 6: exec: aa-exec: Permission denied
[08:44] timp: snaps refresh automatically
[08:44] dmesg shows this:
[08:44] [ 1140.618728] audit: type=1400 audit(1522829695.691:122): apparmor="DENIED" operation="open" profile="snap.lxd.lxc" name="/proc/8365/attr/current" pid=8365 comm="cat" requested_mask="r" denied_mask="r" fsuid=1000 ouid=1000
[08:45] [ 1140.619047] audit: type=1400 audit(1522829695.691:123): apparmor="DENIED" operation="exec" profile="snap.lxd.lxc" name="/usr/sbin/aa-exec" pid=8352 comm="lxc" requested_mask="x" denied_mask="x" fsuid=1000 ouid=0
[08:45] timp: can you check if "snap changes" has anything about lxd?
[08:45] yes, it did:
[08:45] 93 Done 2018-04-04T07:44:00Z 2018-04-04T07:45:55Z Auto-refresh snaps "lxd", "aws-cli", "core"
[08:45] timp: and can you paste "snap interfaces | grep lxd"
[08:45] timp: ok
[08:45] :lxd-support lxd
[08:45] :network lxd,nextcloud-client,simplenote,telegram-sergiusens
[08:45] :system-observe lxd
[08:45] lxd:lxd
[08:46] hmm all looks good here
[08:46] let me look deeper
[08:47] thanks
[08:47] isn't there a discussion about it in the forum
[08:47] I can 'cat /proc/self/attr/current', so that permission denied is weird. I don't know how AppArmor works though.
[08:47] oh, perhaps
[08:48] is this relevant: curl https://assertions.staging.ubuntu.com/v1/assertions/account-key?account-id=canonical
[08:48] mborzecki: what version of systemd do you have over there? (on arch i mean :-) )
[08:48] sorry
[08:48] is this relevant: https://forum.snapcraft.io/t/2-0-lxd-snap-fails-on-sytems-with-partial-apparmor-support/4707
[08:48] Chipaca: 238
[08:48] ahh
[08:48] timp: can you paste "snap version"
[08:49] snap 2.32.1
[08:49] snapd 2.32.1
[08:49] series 16
[08:49] ubuntu 16.04
[08:49] kernel 4.13.0-37-generic
[08:49] I'm on xenial
[08:50] timp: thank you
[08:50] so this is not a partial apparmor support issue
[08:50] ok, I'll look at what I was meant to before
[08:50] what do you mean with partial apparmor support issue?
[08:50] zyga: mvo: force pushed to 4972
[08:50] timp: debian and ubuntu have different sets of supported apparmor features because ubuntu still carries a patch with non-upstreamed extensions
[08:51] timp: can you please pastebin /var/lib/snapd/apparmor/profiles/snap.lxd.lxc
[08:52] * zyga -> brb
[08:53] zyga: sure, http://paste.ubuntu.com/p/hrrQFkXYSx/
[08:54] https://forum.snapcraft.io/t/snapped-lxd-has-stopped-working-aa-exec-doesnt-exist-in-the-snap/2356 seems similar, but there's no solution there
[08:55] hmm, this is unfortunate. I was doing all my work in containers so now I'm blocked.
[08:57] so,
[08:57] $ snap revert lxd
[08:57] lxd reverted to 3.0.0
[08:57] tim@tim-XPS-13-9350:~$ lxc list
[08:57] that fixes it for now.
[08:57] looks like the lxd update broke stuff
[08:57] timp, you might want to ask on #lxc-dev as well
[08:58] re
[08:58] timp: looking
[08:59] timp: it looks like the profile is wrong
[08:59] it doesn't contain contributions from lxd-support
[09:00] hmm, 'snap refresh' tells me that there are no updates for lxd, even though I just did snap revert lxd
[09:00] can you snap disconnect lxd:lxd-support
[09:00] and then re-connect it
[09:00] and see if the profile is different
[09:00] (copy the current profile out)
[09:01] note that after 'snap revert lxd', it is working again. And I cannot reproduce the problem any more.
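timp's diagnosis above, collected into one transcript-style sketch (commands, paths, and the reverted revision are as reported in the exchange):

```
$ snap interfaces | grep lxd                         # connections looked fine in state
$ cat /var/lib/snapd/apparmor/profiles/snap.lxd.lxc  # ...but the profile had no lxd-support snippets
$ sudo snap revert lxd
lxd reverted to 3.0.0
$ lxc list                                           # working again on the previous revision
```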
[09:01] right because you are on a different revision
[09:01] you can look at the same file again
[09:01] and the header will say @{SNAP_REVISION} = ...
[09:01] snap.lxd.lxc?
[09:01] yes
[09:02] in the pastebin it was 6492
[09:02] this is with the working version: http://paste.ubuntu.com/p/vgJcv7dPRm/
[09:02] hmm, also 6492?
[09:03] right, this one is correct
[09:03] the previous one looked like basic, unconnected (no interfaces) application
[09:03] ah, so what I did is 'snap revert lxd'. And then 'snap refresh lxd'. Now I again have 6492, and it still works.
[09:03] pstolowski: ^ maybe some other bug?
[09:04] it looks like a bug in snapd, more than a bug in lxd IMO
[09:04] hmm hmm
[09:04] can you add this to a forum thread or a bug report
[09:04] I don't want to lose it here
[09:04] on LP?
[09:04] yes, on snapd
[09:04] ok
[09:04] this is a much more tested path than the snap install lxd (without anything)
[09:05] though
[09:06] timp: please include: snap version; snap changes; snap interfaces (from what you did, not from what it is now when it works)
[09:06] and the two profiles you made
[09:06] (the pastebins)
[09:06] I think it shows that we didn't really include the lxd-support snippets at all
[09:07] pstolowski: the bug on the forum feels related now
[09:07] the reporter there doesn't have lxd-support
[09:07] mvo: didn't you publish the latest one? the one you published is still on glibc 2.26.
[09:07] mvo: this is the one i used when i confirmed it worked with maas: https://code.launchpad.net/~mvo/+snap/core18/+build/182036/+files/core18_very-unstable_amd64.snap
[09:08] * zyga afk again
[09:11] zyga, pedronis: thanks for the help. I reported the bug here: https://bugs.launchpad.net/snapd/+bug/1761115
[09:11] Bug #1761115: After lxd snap upgrade, it lxc stopped working
[09:11] timp: thank you
[09:11] BjornT_: indeed, it was an older one
[09:13] BjornT_: pushed a new one to edge,
[09:14] BjornT_: and I think wgrant helped me fix the auto-upload issue, so hopefully this will be fixed now (testing this right now)
[09:15] mvo, hi, do you know why I'm getting this error when trying to switch to the channel? https://pastebin.canonical.com/p/KywXTYgC6y/
[09:16] mvo: nice, thanks
[09:20] ackk: this looks like snapd crashed or something, what do you see in "journalctl -u snapd"? anything that indicates a panic?
[09:21] mvo, no panics, but repeated errors like:
[09:21] Jan 08 15:11:47 maas systemd[1]: snapd.service: Failed to set invocation ID on control group /system.slice/snapd.service, ignoring: Operation not permitted
[09:21] Jan 08 15:11:47 maas systemd[1]: snapd.service: Failed to reset devices.list: Operation not permitted
[09:21] mvo, fwiw I can install/remove snaps
[09:21] it's just this operation that seems to fail
[09:21] mvo, removing the base and reinstalling from the store works
[09:22] ackk: ok, I will try to reproduce this, the EOF looks suspicious
[09:23] mvo, btw it seems snapd doesn't check if you have snaps that depend on a base you remove?
[09:23] I just removed core18 and maas which depends on it was installed
[09:24] ackk: mvo: yeah that EOF usually means snapd went away mid-request (typically because of a crash)
[09:24] Chipaca, but snapd seems to work fine for everything else
[09:24] ackk: systemd'll restart it unless it does it too often
[09:25] and yes, snapd does not do proper dependency management (yet)
[09:25] Chipaca, weird, I thought I saw an error when trying to remove a snap before
[09:25] ackk: it'll block core, gadget and kernel removes
[09:26] oh I see
[09:26] we don't have that check for bases in use
[09:26] basically things you can't come back from :)
[09:26] pedronis: nor for default providers
[09:26] afaik
[09:26] well default providers are a bit of different issue
[09:26] we probably should at least warn / prompt
[09:26] they are optional
[09:27] pedronis: about as optional as bases
[09:27] not that we know what that means
[09:27] Chipaca: well,
[09:27] pedronis: try running gnome-calculator without gnome-16.04-whatevs
[09:27] it depends
[09:27] Chipaca: they are installed as if they were optional
[09:27] yes
[09:27] and we don't have, afaik, a way of expressing optionalness
[09:27] optionality
[09:28] optionionness
[09:28] which mostly means we don't make a fuss
[09:28] if they don't install
[09:28] the snap using them might
[09:29] zyga: ack. i'm working on auto-connect PR atm, I can look at this in a bit
[09:29] ok
[09:30] pstolowski: are you looking into the spread test for that? afaict it's likely a very untested area atm
[09:32] pstolowski, mvo: https://forum.snapcraft.io/t/auto-connected-interfaces-missing-on-initial-snap-install/4850/16
[09:32] it's not just that we didn't connect
[09:32] we didn't even _load_ interfaces somehow
[09:32] this is very very nasty
[09:32] pedronis: not sure how to spread-test this particular case yet (other than by removing autoconnect task with jq, but that's not a good test). for regular auto-connect with current snapd we do have spread test(s)
[09:33] pstolowski: we need to start from the deb in the archive
[09:34] pedronis: we can, but that's a moving piece no?
[09:34] pstolowski: it is
[09:34] but not different than what we do in the current upgrade test
[09:35] let me see
[09:35] as I wrote probably a bit annoying because it will need the fakestore
[09:35] because we hardcode how we get core
[09:42] * Chipaca -> break
[09:43] pedronis: i think in the existing tests we install whatever version of snapd we have in distro (or whatever we build locally); now we would need an old version that doesn't have the feature
[09:44] zyga: yes, this looks rather bad..
[09:45] pstolowski: xenial has 2.29
[09:46] pedronis: ah, indeed, and it will never get updated, you're right
[09:46] well, it will
[09:47] hmm
[09:47] but is good enough for testing the current problem
[09:47] as it's happening
[09:48] ok, fair enough
[09:48] BjornT_, ackk, wgrant auto-upload of core18 works now, thanks for reporting and helping with the fix
[09:48] would be great to have a test that stays valid for a longer time
[09:52] mvo, great, thank you!
[09:57] Chipaca: https://github.com/snapcore/snapd/pull/4973 needs a 2nd review for the release
[09:57] PR #4973: osutil: fix fstab parser to allow for # in field values
[09:57] Chipaca: fstab, wanna take it?
[09:58] pstolowski: well, we have unit tests too
[09:59] mvo: can I remove the snappy.upstream part when we merge the other script that makes the input tarball
[09:59] that branch is green and it doesn't hurt much
[10:00] pstolowski: also hopefully at some point soon we will get epochs and testing jumping so much between revisions will be a bit less needed
[10:00] zyga: ok, fwiw, I am working on git-buildpackage right now
[10:01] Thanks!
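The spread scenario pedronis sketches would look roughly like this (an outline only, not a working test; as noted above the way core is fetched is hardcoded today, so the fakestore plumbing is the open question):

```
# start from a clean 16.04 image with the archive's snapd deb (2.29 era)
$ sudo apt install snapd
# serve a core snap built from the current tree via the fakestore, then:
$ sudo snap install lxd            # first snap install on a clean system
$ snap interfaces | grep lxd       # auto-connections must be present
```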
[10:01] PR snapd#4938 closed: release-tools: add repack-debian-tarball.sh
[10:02] PR snapd#4972 closed: cmd/snap-mgmt: remove timers, udev rules, dbus policy files
[10:02] mborzecki: can you please provide a backport for this branch for 2.32 ^
[10:04] zyga: looking
[10:08] PR snapd#4976 opened: cmd/snap-mgmt: remove timers, udev rules, dbus policy files (2.32)
[10:08] mborzecki: wrong target branch
[10:09] zyga: should be fixed now
[10:16] PR snapd#4977 opened: debian: add gbp.conf script to build snapd via `gbp buildpackage`
=== Beret- is now known as Beret
[10:18] mvo: what's the status of your review of #4900, got half through it yesterday and then new emergencies today?
[10:18] PR #4900: many: use the new install/refresh API by switching snapstate to use store.SnapAction
[10:24] pedronis: yes, continuing now
[10:29] is the store having a slow ceph day again?
=== oSoMoN_ is now known as oSoMoN
[10:42] not that I know of, a little bit of load with the release yesterday
[10:48] mvo: can I merge the backport https://github.com/snapcore/snapd/pull/4975
[10:48] PR #4975: osutil: fix fstab parser to allow for # in field values (2.32)
[10:48] PR snapd#4973 closed: osutil: fix fstab parser to allow for # in field values
[10:48] the master branch had 2 +1s and I just merged it
[10:48] pedronis: can you please do a 2nd review on release-critical https://github.com/snapcore/snapd/pull/4969 fix from mvo
[10:48] PR #4969: interfaces: make system-key more robust against invalid fstab entries
[10:50] jdstrand: hey, can you please enqueue https://github.com/snapcore/snapd/pull/4545 for re-review
[10:50] PR #4545: interfaces/x11: allow X11 slot implementations
[10:52] * zyga needs to plan an errand today :/
[10:52] are we doing the standup earlier? (now?)
[10:52] mvo: when is the standup today
[10:53] 2pm we agreed
[10:53] ah, that's good then
[10:53] I can do the errand after that
[10:53] thanks
[11:24] mvo, hey
[11:25] hey cachio
[11:25] what time are we making the standup today?
[11:25] cachio: in 35min
[11:25] cachio: hope that works for you
[11:25] cachio: if not its ok to skip
[11:26] mvo it doesn't work today
[11:26] mvo, I have to go to my son's school
[11:27] are we making all wednesday at this time?
[11:27] every wednesday?
[11:27] zyga: mvo: pushed updates to #4942
[11:27] PR #4942: cmd/snap: user session application autostart v3
[11:27] thanks
[11:27] mborzecki: ta
[11:27] standup is @ 2pm?
[11:28] cachio: no problem, we need to discuss tomorrow/next week what time to pick then
[11:28] yes
[11:28] ack
[11:28] cachio: we also need a time that works for gustavo
[11:28] cachio: and for you obviously
[11:29] mvo, it works for me but not today
[11:29] but I'll block the calendar
[11:30] PR snapd#4930 closed: skip test that requires internet when not present
[11:30] cachio: ok, thank you
[11:31] mvo, is it gonna be just on wednesdays? or every day?
[11:32] cachio: only (some) wednesdays
[11:32] mvo,
[11:32] ok
[11:33] mvo, when is the new sru coming?
[11:39] pstolowski: https://github.com/snapcore/snapd/pull/4968 is broken on tests FYI
[11:39] PR #4968: RFC: ifacemgr: remove stale connections on startup
[11:40] PR snapd#4975 closed: osutil: fix fstab parser to allow for # in field values (2.32)
[11:40] zyga: yep i know, i had to park this for a while due to autoconnect
[11:40] ack
[11:40] thanks
[11:41] zyga: i have fixes and will also add new test
[11:45] zyga (cc timp): that was the exact issue I saw yesterday. apparmor denial on attr/current and aa-exec. no interfaces listed for lxd with snap interfaces. I had to stop/start snapd to get them back then do 'sudo apparmor_parser -r /var/lib/snapd/apparmor/profiles/snap.lxd.*
[11:45] jdstrand: thank you for confirming that, I wonder what could be the cause
[11:45] jdstrand: which core channel were you on then?
[11:45] I was suspecting some window when snapd restarts while there's an unmounted lxd snap
[11:46] jdstrand: it feels like the place when we read snap.Info is flaky in that the error is non-fatal
[11:46] and we carry on knowing *nothing* about that snap
[11:51] zyga (cc timp): oh I forgot. I did: sudo systemctl stop snapd ; sudo systemctl start snapd # now interfaces show up) ; sudo snap disconnect lxd:lxd-support ; sudo snap connect lxd:lxd-support # now the interface works
[11:51] * cachio afk
[11:51] yeah, that's that
[11:52] ie, I tried apparmor_parser -r /var/lib/snapd/apparmor/profiles/snap.lxd.* before the disconnect/connect and realized it didn't have the policy in place
[11:52] zyga: I'm on beta still
[11:52] 16-2.32.2
[11:53] yep
[11:53] that all hints to missing snap info
[11:53] and missing interfaces
[11:55] zyga: it may not be related, but lxd-support is one of the few interfaces with an installation constraint
[11:57] oh, what is that constraint?
[11:57] are you referring to the policy?
[11:58] zyga: base declaration
[11:59] zyga: allow-installation: false
[11:59] right
[12:00] zyga: so a snap from the store must have a snap declaration to install the snap (--dangerous installs do not)
[12:00] I dono't know if it is related
[12:00] don't
[12:00] jdstrand: can you look at journalctl snapd.service
[12:00] there's a log if we drop an interface
[12:00] if it was that we'd have a trace
[12:00] but it is the only snap I know that has this
[12:01] that'd be a nice hint to what the issue is
[12:02] it may be that we did read it but then skipped it
[12:05] zyga: I experienced the problem yesterday. That put's us at something happening between 18:00 on Apr 2 through 10:25 Apr 3. I've pasted since Mar27 snapd start up through today here: https://paste.ubuntu.com/p/ySSrxXRgyK/
[12:05] puts*
[12:05] thank you
[12:06] I'll review it after standup
[12:06] zyga: there is some weird stuff in there
[12:06] woah
[12:06] Apr 03 10:25:29 localhost snapd[27535]: 2018/04/03 10:25:29.156759 helpers.go:214: cannot connect plug "lxd-support" from snap "lxd", no such plug
[12:06] Mar 27 12:44:27 localhost snapd[2848]: 2018/03/27 12:44:27.304321 stateengine.go:101: state ensure error: cannot refresh snap-declaration for "lxd": Get https://api.snapcraft.io/api/v1/snaps/assertions/snap-declaration/16/J60k4JY0HppjwOjW8dZdYc8obXKxujRu?max-format=2: dial tcp: lookup api.snapcraft.io on 127.0.0.53:53: server misbehaving
[12:06] pstolowski: ^ we cannot merge your fix for stale connections before we get to the bottom of this
[12:06] or we'll mess up real state
[12:06] thank you
[12:06] zyga: note that the snap declaration wasn't pulled down days before
[12:07] I don't know if later fetches worked out ok
[12:08] at 10:25 on Apr 3, that is when I stopped/started lxd manually
[12:08] actually, maybe that was 11:37
[12:09] * jdstrand checks irc backscroll
[12:09] ah, it was 11:37
[12:10] the thing at 10:25 was not me. 2708 Done 2018-04-03T15:24:34Z 2018-04-03T15:26:04Z Auto-refresh snaps "chromium", "lxd", "core"
[12:11] that is in UTC and corresponds to the 10:25 restart
[12:15] jdstrand: did you see any snaps listed as broken at that time?
[12:16] jdstrand: we are discussing this issue now and we understand how it happens mechanically (we should be able to reproduce that) but we have no idea (yet) why it happens [12:19] zyga: no [12:20] https://forum.snapcraft.io/t/snapped-lxd-has-stopped-working-aa-exec-doesnt-exist-in-the-snap/2356/27 [12:20] I lined up all the times [12:20] it looks like at 10:24 *both* core and lxd were updated [12:20] refreshed* [12:20] yes, it's something we suspect [12:21] then at 11:33 I noticed things went awry [12:21] if snapd starts when snap (or snaps) are unmounted we would hit this [12:21] it was just lxd for you, right? [12:21] and 11:37 I stopped/started manually and 11:39 disconnected/connected [12:21] (only a subset of the refreshed snaps could be affected) [12:21] zyga: it was what I noticed. it seems chromium could've been affected (it was refreshed at the same time) [12:22] but wasn't? [12:22] I don't use it regularly [12:22] * jdstrand uses firefox [12:22] so I don't know [12:22] we need to look at ordering and at things like ensuring that when systemd says "it's mounted" it really really is [12:22] thank you [12:22] I can say the firefox snap wasn't misbehaving [12:26] I have a core system which booted yesterday but today hangs [12:27] https://usercontent.irccloud-cdn.com/file/d1gWsl07/IMG_20180404_132619.jpg [12:27] Where should I file this? (can I revert back to previous core [given I can't boot it]) ? [12:32] * kalikiana lunch [12:33] mvo: ^ do you know where I should file this? [12:33] zyga: fyi, I added a few more comments [12:35] zyga: snap switch && snap refresh ? [12:35] popey: what happens if you power-cycle it? it always hangs at this point?
[12:35] jdstrand: we have a way to loop through this code to hit the issue iff it is a timing bug [12:35] mvo: hangs at the same point [12:35] mborzecki: yes, on a pair of core + lxd [12:35] popey: the forum is a good place for a report - also here, because OMG and we want to know about it [12:35] popey: what is strange is that there is only a kernel message, nothing from userspace AFAICT [12:35] popey: which indicates a kernel problem, can you get to the grub prompt? [12:36] popey: also even if it's a kernel problem the rollback magic should have saved us/you :( [12:36] popey: so … [12:36] oh man, 4th boot now it's moving [12:36] popey: *cough* [12:36] popey: magic! :) :/ :( [12:36] :S [12:36] logger.Noticef("cannot retrieve info for snap %q: %s", snapName, err) [12:36] jdstrand: can you grep in your full journal for any hint of that [12:36] popey: moving in what direction? successful boot? or just hangs differently? [12:36] it's successfully booted now [12:37] mvo: if we had support for hardware watchdog we would be able to recover such issues automatically (what popey ran into) [12:37] it totally hung 3 times, but this time it went through, no idea why.
[12:38] jdstrand: I also totally missed this [12:38] Mar 27 12:44:27 localhost snapd[2848]: 2018/03/27 12:44:27.303347 autorefresh.go:327: Cannot prepare auto-refresh change: cannot refresh snap-declaration for "lxd": Get https://api.snapcraft.io/api/v1/snaps/assertions/snap-declaration/16/J60k4JY0HppjwOjW8dZdYc8obXKxujRu?max-format=2: dial tcp: lookup api.snapcraft.io on 127.0.0.53:53: server misbehaving [12:38] if there's no assertion, we should not even attempt to continue refreshing [12:38] zyga: ohhhh [12:38] zyga: that is an interesting hint [12:38] Apr 03 10:25:29 localhost snapd[27535]: 2018/04/03 10:25:29.156224 helpers.go:214: cannot connect plug "gsettings" from snap "chromium", no such plug [12:39] but it seems the issue did affect both chromium and lxd [12:39] so unclear if this is just the assertion refresh [12:39] Apr 03 10:25:29 localhost snapd[27535]: 2018/04/03 10:25:29.156232 helpers.go:214: cannot connect plug "home" from snap "chromium", no such plug [12:39] for instance home is not gated [12:39] so not having an assertion would not change anything [12:42] I will try to reproduce this manually now [12:43] mvo: I just confirmed that logger.Noticef does *not* work [12:43] probably because at that stage, logger is just not set up yet [12:43] Chipaca: ^ [12:44] zyga: ?
[12:44] mvo: I stopped snapd, stopped a mount unit of a random snap, started snapd [12:44] what was logged was [12:44] Apr 04 14:43:28 fyke snapd[32408]: 2018/04/04 14:43:28.481566 helpers.go:106: invalid snap version: cannot be empty [12:44] zyga: logger is set up in snapd/main.go's init() [12:45] ha [12:45] zyga: so I call shenanigans :-) [12:45] in fact it's the first line of that init :-) [12:45] of course, that init runs after all other inits, because of the deps tree [12:45] but that's still pretty honking early :-) [12:45] grepping for that message on my system I see a case of snapd starting and not having snaps mounted [12:46] Chipaca: I will patch it to just print [12:46] but I suspect something is wrong there [12:47] jdstrand: when you get a mo. can I have an exception for https://dashboard.snapcraft.io/snaps/emoj/revisions/19/ ? [12:47] (it doesn't need a desktop file, but needs x11 interface) [12:53] mvo: I applied feedback or answered in 4900 [12:54] ah [12:54] mvo: we say "cannot read snap %q: ..." and assign that to info.Broken [12:54] pedronis: thanks, I'm on the remaining bits now [12:54] we don't return an error [13:04] slightly funny to see this: https://github.com/snapcore/snapd/pull/4900#discussion_r179108664 [13:04] PR #4900: many: use the new install/refresh API by switching snapstate to use store.SnapAction [13:06] Apr 04 13:04:33 ubuntu snapd[1037]: 2018/04/04 13:04:33.934552 store.go:1459: DEBUG: Hashsum error on download: sha3-384 mismatch for "core": got 86d2a911e843c48c8f49c459b5b7259a5ca786ad4e05b1f98a353082aee6dc79c0cc08ca9c09a3c603675a3acead29d9 but expected 7b354270492a85a54b44ad78c000e6a61ca38a49d5ab57d79a2e9d5eca20db9af89a1186b7b14d78ae232ec2f4824372 [13:06] that's unexpected [13:06] zyga: maybe it should be moved lower?
I'm open to input, in a meeting now though [13:06] pedronis: I was just adding similar logging [13:08] off to pick up the kids [13:09] Issue snapcraft#2048 opened: Thank you [13:11] pedronis, mvo : does this sound sensible/doable for autoconnect spread test: 1. snapd installed from distro deb; 2. snap download core; 3. unsquashfs core, inject new snapd, squash back; 4. snap download lxd; 5. point fake store at both modified core & lxd (plus do assertion magic which i'm not yet clear about); 6. snap install lxd? [13:14] pstolowski: in a meeting right now (and so is pedronis) sounds sane [13:14] PR snapd#4978 opened: overlord,interfaces: be more vocal about broken snaps and read errors [13:15] mvo: extra logging I'd like to add to this release: https://github.com/snapcore/snapd/pull/4978 [13:15] PR #4978: overlord,interfaces: be more vocal about broken snaps and read errors [13:15] also refuse adding broken snaps to repo [13:16] pstolowski: ^ [13:16] I think you are the only one to review now :) [13:16] pstolowski: we already built that core that way at some point [13:18] pedronis: i know, I just need to prevent it from getting mounted immediately (it's update_core_snap_for_classic_reexec) [13:19] * zyga -> walk [13:23] zyga: looks sane, quick questions: why "[stderr]"? where does Stderr go? just journal? [13:24] re [13:24] Yes [13:24] To error part of it [13:29] The [stderr] is to see if we are doing something wrong with the logger [13:30] Bug #1761187 opened: ~/.snap/ not available for root on some systems [13:39] zyga: journalctl|grep 'cannot retrieve info for snap' returns nothing [13:39] Ack [13:40] I will set up a loop [13:40] Just afk now [13:42] Bug #1761187 changed: ~/.snap/ not available for root on some systems [13:43] mup: you just said that [13:43] Chipaca: I apologize. I'm a program with a limited vocabulary. [13:47] lol [13:48] lol [13:49] mup: Chipaca is making fun of you! [13:49] pstolowski: I really wish I understood what you're trying to do.
[13:49] :) [13:50] yeah me too [14:08] Chipaca: can you look #4978 [14:08] PR #4978: overlord,interfaces: be more vocal about broken snaps and read errors [14:09] zyga: I think that code is logging too much, CurrentInfo will call readInfoAnyway [14:12] I can trim some, yeah [14:15] mborzecki: 4976 fails, I guess you know about this already(?) [14:17] zyga: any new data on the issue? [14:18] mvo: re, I was afk (kids/lunch/dog) [14:18] mvo: I did some digging and it seems we don't even return an error [14:18] mvo: we return snap info with Broken field set [14:18] mvo: I provided a patch that adds verbosity and bars broken snaps from entering the repository [14:19] zyga: ok, I see the PR [14:19] zyga: thank you! [14:19] mvo: I haven't set up a loop that would reproduce the refresh cycle issue yet [14:19] pstolowski: I think we can start with lxd, lxd is a bit strange in the sense that it is both our typical case, but a bit fragile on its own [14:19] mvo: I will set up the loop soon, if I hit a case when it reproduces we will have some confirmation [14:19] zyga: ta [14:19] mvo: while that runs I will review tasks we do on refresh to look for issues [14:20] zyga: why the logger.Noticef() and the fmt.Fprintf() ? is the notice not sufficient? [14:20] pedronis: ok, that's what i'm basing my spread test on [14:20] mvo: maybe, I think so but wanted to double check [14:20] mvo: I will drop the fmt.Fprintf calls if logger works [14:20] better safe than sorry [14:27] zyga: we don't read snaps that early, if there's some logging there should be the rest [14:27] popey: ok [14:28] jdstrand: thanks [14:36] grah, grah, grah, everything is terrible [14:36] pedronis: looking [14:37] Chipaca: also 4969 if you have spare cycles for reviews [14:38] mvo: I do [14:39] for #1761193 I'm tempted to use 'cat' to read the auth.json file...
:-/ [14:39] Bug #1761193: ~/.snap/ not available for root on some systems === Eleventh_Doctor is now known as Pharaoh_Atem [14:39] as in ‘if running under sudo, do 'su -c "cat the/file" $SUDO_USER'’ [14:40] the alternative is to leave goroutines stuck, which might not be so terrible [14:40] as this is in client, they're short-lived [14:44] FYI this test always fails on my laptop [14:44] ... value *errors.errorString = &errors.errorString{s:"cannot obtain bus name 'io.snapcraft.Launcher'"} ("cannot obtain bus name 'io.snapcraft.Launcher'") [14:44] FAIL: cmd_userd_test.go:62: userdSuite.TestUserd [14:45] pedronis, mvo: updated https://github.com/snapcore/snapd/pull/4978/files [14:45] PR #4978: overlord,interfaces: be more vocal about broken snaps and read errors [14:46] (force pushed for cherry pick) [14:46] dropped fprintfs [14:46] and one log [14:46] ta! [14:47] at the very least we will (maybe) get better logs next time [14:48] mvo: is .3 scheduled for today? [14:49] zyga: depends on when we get the fixes, but yes, would love to do it today [14:49] zyga: the "exec: aa-exec: Permission denied" is also a bit alarming [14:51] kyrofa: hey! I hope you are well :) I'm really puzzled about bug #1761127 and can't even find a workaround [14:51] Bug #1761127: On Travis (not a real vte), releasing to a branch name during snapcraft push prints a stacktrace [14:52] is the libGL problem on Bionic (possibly related to nVidia binary drivers) with snaps being worked upon? : libGL error: No matching fbConfigs or visuals found [14:53] I think it's because of the move to a unified loader? *looks for the right deb* .. `libglvnd0` [14:54] mvo: that is because aa-exec is not allowed by the default template [14:55] mvo: when lxd doesn't have the lxd-support interface at all this would happen [14:55] mvo, zyga the spread test is painful, i'm learning through experimentation and trial-and-error [14:55] diddledan: can you try beta? it should be fixed there [14:55] pstolowski: can I help somehow?
[14:55] mvo: the fedora issue should be gone now [14:55] zyga: right, I can reproduce this here on 17.10 by purging snapd and then installing it [14:56] zyga: and then snap install lxd [14:56] mvo: oh? can you be more specific [14:56] ah [14:56] pstolowski: yeah, it's hard [14:56] but isn't that the auto-connect issue [14:56] or are you saying that you can reproduce by purging, installing lxd and then see lxd without any interface at all [14:56] (as said by snap interfaces) [14:56] pstolowski: we have other tests using the fakestore but probably not exactly how you need it [14:57] zyga: ok, the beta fixes a game I've packaged (used for quick test) but the solus-runtime-based steam still fails [14:57] * diddledan wonders where ikey got to [14:57] pedronis: yes, and there are differences, so not easy to tell what's crucial and what not; if you can spot what might be missing still https://pastebin.ubuntu.com/p/6Jd6rRZJGk/ then this could save me some iterations [14:57] diddledan: that is probably something for ikey and jdstrand [14:58] zyga: I did "apt purge snapd; apt install snapd/artful-updates; snap install lxd; lxc": /snap/lxd/6492/commands/lxc: 6: exec: aa-exec: Permission denied [14:58] zyga: lxd-support is not connected [14:58] mvo: does it exist? [14:58] pstolowski: mvo: I fear we have a big issue [14:58] with the tests [14:58] zyga: it does [14:58] if it exists, that's the bug that is well understood now that I believe pawel is fixing [14:58] ok [14:58] pedronis: what exactly? [14:58] * diddledan pokenprods jdstrand :-p [14:58] in our case the bug is that the plug is gone, as evidenced on the forum...
[14:58] zyga: ok, if it's the same bug it's all good [14:58] pstolowski: mvo: we want to use the deb but also the fakestore, but by definition that's not possible [14:59] PR snapd#4979 opened: overlord,interfaces: be more vocal about broken snaps and read errors (2.32) [14:59] pedronis: uhhh, indeed [14:59] pedronis: oh why [14:59] pstolowski: mvo: the official deb will never trust the fakestore [14:59] (that's a feature) [14:59] but is annoying here [14:59] mvo: I sent a backport of the logging PR === phoenix_firebrd is now known as phoenix_firebrd_ [15:00] and will try to reproduce this in a loop [15:00] pedronis: yeah, oh well [15:00] zyga: ta! [15:00] pstolowski: can you push what you have? without the spread test? [15:01] pstolowski, pedronis I think we understand the issue, so getting this into edge might be good as a first pass. [15:02] mvo: pstolowski: at most we can write a daily test that uses the deb and the snap in edge [15:02] there's still value to that [15:03] but cannot use what's on master [15:03] mvo: i can, let me just finish unit test [15:03] pstolowski: sure [15:04] PR snapd#4974 closed: ifacestate: injectTasks helper [15:04] pstolowski: once that has landed we can test locally by modifying release/2.29 to install the edge core by default and running it locally. then we do "snap install lxd" (with no core yet) and we get the (fixed) core from edge and the re-exec should work [15:04] pstolowski: it's not great but as pedronis said we have no way currently to test this for real [15:04] Hey there didrocks! I'll take a look this morning, see if we can figure it out [15:05] mvo: sounds good [15:06] kyrofa: thanks! As it only happens on Travis, it's a little bit painful it seems. FTR, I tried with snapcraft release as well and as a simple edge/test in https://travis-ci.org/ubuntu/communitheme-snap-helpers/builds/362179292, but same issue.
Same command with local "docker run" works perfectly [15:06] pstolowski: sorry for making you chase impossible tests [15:06] (but basically, our Travis instructions don't work on non-branch releases) [15:07] mvo: to write a daily we would need at least to have an envvar to change which channel we get core from if it's not there [15:07] pedronis: np [15:07] didrocks, looks like you're successfully pushing, but then the store is giving us some error that we don't know how to parse [15:08] didrocks, how did you install snapcraft? [15:08] PR snapd#4980 opened: Revert "spread.yaml: switch Fedora 27 tests to manual (#4962)" [15:08] Oh, the docker image [15:08] kyrofa: docker run snapcore/snapcraft [15:08] yep [15:08] kyrofa: so, push is working, but release to a branchname doesn't [15:09] release to "edge" works [15:09] "edge/test" doesn't [15:09] didrocks, but it works locally? [15:09] How weird [15:09] (but works locally, with a docker image) [15:09] yeah :/ [15:09] I think it's a vte vs non real vte thing… [15:09] didrocks, how are you authenticating? [15:09] kyrofa: encrypted file creds, decrypted on build [15:10] as I said, replacing "edge/test" by "edge" only works, so, cred issues are out [15:10] didrocks, i.e. using export-login? [15:10] Or your own real creds? [15:10] snapcraft enable-ci travis created an encrypted file and pushed env var to travis [15:10] didrocks, ah, then it is a creds issue [15:10] oh? [15:11] why edge would work when edge/test doesn't?
[15:11] didrocks, enable-ci travis creates credentials that can only push to edge [15:11] ahhhhhh [15:11] didrocks, you want export-login [15:11] Caelum: the instructions are merged now [15:11] kyrofa: let me look at the instructions for this [15:11] didrocks, use snapcraft export-login [15:11] didrocks, you can tune your creds how you like [15:11] kyrofa: I think there is still the "it should tell you without printing a stacktrace" bug ;) [15:12] didrocks, oh totally, that we can fix [15:12] didrocks, snapcraft doesn't know what the creds can and can't do, so we rely on the store to tell us. But we get all sorts of different error formats from them :P [15:12] fun ;) [15:13] kyrofa: ok, I think I need to export that in a file, then, encrypt it before pushing [15:13] shouldn't --channels=edge include --channels=edge/* ? [15:13] didrocks, to the best of my knowledge, the store ACLs don't support wildcards [15:13] cprov may know better [15:14] so, basically, I need to give unrestricted access with my creds [15:14] to create branch-on-the-go [15:14] didrocks, you can at least limit it to the specific snap [15:14] (basically, based on the PR name) [15:14] yeah [15:14] didrocks, also limit it only to uploads [15:14] is the list of acls somewhere? [15:15] (the help only mentions the acls option) [15:15] didrocks, https://dashboard.snapcraft.io/docs/api/macaroon.html#post--dev-api-acl- [15:15] awesome! [15:15] I've been meaning to get those into snapcraft itself [15:15] cprov very helpfully documented them for us [15:15] kyrofa: thanks a lot, I would never have figured out this restriction with this stacktrace! ;) [15:15] at least, I'm unblocked now [15:16] didrocks, my pleasure! Should be easy to reproduce, I'll triage the bug and we'll get it fixed, thanks for the thorough report as always :) [15:16] kyrofa: yw :-) [15:19] didrocks, by the way, are you building snaps per PR or something? [15:20] How do you plan on dealing with forks?
[15:22] didrocks, by the way, if you don't want to encrypt, you can `travis env set SNAP_TOKEN "$(cat my-exported-login)"` and then use `echo "$SNAP_TOKEN" | snapcraft login --with -` [15:23] kyrofa: yes, that's my plan. Basically, only PRs made by core contributors will be built/testable this way [15:23] Gotcha [15:23] another bug getting in the way https://www.irccloud.com/pastebin/Q6aoFA0A/ [15:23] I still need to ensure the build script can at least build on push for people to test [15:23] kyrofa: excellent! I'll use this :) [15:23] bug.sh https://www.irccloud.com/pastebin/bzEWq7pw/ [15:24] not a happy day [15:25] didrocks, you might also like this: https://github.com/travis-ci/dpl/pull/800 [15:25] PR travis-ci/dpl#800: Add snap provider [15:25] zyga: uhhh [15:25] I think we must add some code that waits for core restart with rest of setup [15:26] kyrofa: oh, sounds like that will enable me to reduce a lot what I'm doing right now [15:26] pedronis: ^ not sure what you would suggest for this, I'm looking at the snapd restart code now [15:27] mvo: FYI, I ran bug.sh exactly once :/ === phoenix_firebrd_ is now known as phoenix_firebrd [15:34] mvo: holy cow [15:34] reproduced the bigger bug [15:34] 2nd run :| [15:34] I have lxd without lxd-support now [15:35] lxd-support bug trivially reproduced https://www.irccloud.com/pastebin/rootNBq3/ [15:36] details of the change refreshing core and lxd together [15:36] https://www.irccloud.com/pastebin/AxjIVck6/ [15:37] note that spawn and ready times are more interesting than task order === phoenix_firebrd is now known as phoenix_firebrd_ [15:39] zyga: there's no easy way to achieve that [15:41] zyga: also there are no errors there?
[15:42] how do we get no interfaces [15:42] and no errors [15:42] no, no errors [15:42] because the snap.Info is returned [15:42] just has Broken field set [15:42] :/ [15:42] but that doesn't relate to core [15:42] we never return error if meta/snap.yaml cannot be found [15:42] pedronis: perhaps, I don't know yet [15:42] I ran it together to force a restart [15:43] this was done on my bionic laptop [15:44] zyga: yeah, I also have the lxd without lxd-support issue, it's trivial to reproduce for me as well. but isn't that what pstolowski is fixing right now? [15:45] mvo: no [15:45] mvo: this is from stable core [15:45] mvo: and again, _there is no plug_ [15:45] this is not a connect issue [15:45] this issue is that we have nothing to connect to [15:46] do we think mount has happened but not? [15:46] like we don't wait on mount enough [15:47] zyga: is https://www.irccloud.com/pastebin/raw/rootNBq3 the pastebin? if so, there is ":lxd-support -" in the snap interfaces? or am I confused? [15:47] https://www.irccloud.com/pastebin/F2qfHSP1/ [15:47] this is key [15:47] there's no plug at all [15:47] there must be a plug on the lxd snap [15:48] the slot you pasted is the implicit slot on core [15:48] which is there even if we cannot read core's snap.yaml [15:48] zyga: and you think this happens because we're not waiting for core install? [15:48] no, I don't know why yet [15:48] what's the task that does mount again [15:49] zyga: aha, thanks! that was what I was missing [15:49] more clear evidence of this bug (shorter) https://www.irccloud.com/pastebin/tmigy3Hi/ [15:49] pedronis: it is mount-snap [15:49] zyga: there's no Mount at all in your changes [15:49] afaict [15:50] whaaaa [15:50] is it another bug where old core vs new core tasks are different? [15:51] we didn't change it in a while [15:51] ??? https://www.irccloud.com/pastebin/XaWi8TtG/ [15:51] but maybe we aren't setting up tasks correctly anymore [15:51] why is that conditional?
[15:51] that I don't know [15:51] but in your case the condition should be false [15:51] or so I hope [15:52] but I don't see Mount in your pastebin [15:52] // check if we already have the revision locally (alters tasks) [15:52] revisionIsLocal := snapst.LastIndex(targetRevision) >= 0 [15:52] zyga: 2016 - that's a long time ago [15:52] "locally" is confusing here [15:52] it looks like switch specific artefact [15:52] yes, it means it is already there on disk [15:52] so perhaps red herring somehow [15:52] because they are both mounted [15:53] zyga: yeah, it will already be mounted [15:53] PR snapd#4981 opened: ifacestate: inject autoconnect [15:53] but good hunch [15:53] but if they are mounted [15:53] how can we have no plugs? [15:53] mvo: the PR: https://github.com/snapcore/snapd/pull/4981 [15:53] PR #4981: ifacestate: inject autoconnect [15:54] part of journal after affected snapd restart https://www.irccloud.com/pastebin/Rp3GCuu2/ [15:55] we add snaps first, reload connections next [15:55] whatever could cause this is beyond me now [15:55] though this code doesn't log the fancy broken snaps [15:55] so ... maybe we can merge that PR and rebuild core [15:55] for a low-tech reproducer [15:57] zyga: yeah, I think that will definitely help [15:57] PR snapd#4978 closed: overlord,interfaces: be more vocal about broken snaps and read errors [15:57] PR snapd#4979 closed: overlord,interfaces: be more vocal about broken snaps and read errors (2.32) [15:57] mvo: what do we need now? [15:57] Chipaca: 4845 is probably interesting for you [15:57] mvo: ppa build trigger + core build trigger? [15:57] pstolowski: I don't understand that code, a change can be about many snaps [15:57] Chipaca: (mark asked about when we get version info for c-n-f :) [15:57] zyga: i've just got state.json from the user who reported this originally..
nothing suspicious there and since we're able to reproduce we won't learn anything new from it I guess [15:57] pstolowski: we need to add an autoconnect for each snap, not just for the first [15:57] zyga: yeah, let me do this [15:57] zyga: vendor-sync first [15:57] pstolowski: yeah, I think it's all too easy to reproduce [15:58] mvo: thank you [15:58] pedronis: you're right... [15:58] mvo: I can work on this overnight but I need to break now, I'm on the phone all the time so if you have ideas I can come over [15:58] I just need to manage kids for a while [15:58] zyga: sure [15:58] zyga: thanks for all your help [15:59] zyga: I merge this now and once the new edge is ready I can re-run your script. [16:00] pstolowski: I left a comment in the PR [16:00] mvo: please backport https://github.com/snapcore/snapd/pull/4969 [16:00] PR #4969: interfaces: make system-key more robust against invalid fstab entries [16:01] ohhh shit [16:01] forgot to squash [16:01] PR snapd#4969 closed: interfaces: make system-key more robust against invalid fstab entries [16:01] mvo: +1 [16:01] mvo: sorry, I didn't mean to :( [16:02] PR snapd#4976 closed: cmd/snap-mgmt: remove timers, udev rules, dbus policy files (2.32) [16:02] also there's a strange branch on the main repo: revert-4969-system-key-robustness [16:03] I think that was me clicking on the revert trying to undo [16:03] we can remove it [16:05] zyga: no worries, I can cherry-pick one-by-one, it's not that terrible [16:05] zyga: go and take care of the kids :) [16:05] yeah, I'm going now [16:05] ttyl [16:07] PR snapd#4982 opened: interfaces: make system-key more robust against invalid fstab entries (2.32) [16:11] PR snapd#4983 opened: osutil/sys, client: add sys.RunAsUidGid, use it for auth.json [16:11] mvo: ^ your old friend 'drop privileges' [16:12] p.s.
when are we moving to 1.10 :-) [16:12] Just observation from a phone: it cannot be a mount issue as I had both snaps mounted [16:13] So something else rejected the lxd snap (validation perhaps) [16:13] huh, launchpad giving me 'address unreachable' [16:14] and it's back === phoenix_firebrd_ is now known as phoenix_firebrd [16:40] unless we have grown something that unmounts things [16:46] pedronis: addressed your feedback + made a small addition to injectTasks helper, not strictly a bug, but I think it's good if extraTasks wait for main task - only affected tests at the moment that had very few tasks and no one waited for mainTask [16:51] hi, where should i open a snapd bug? [16:53] cmars: if it's snapd, https://bugs.launchpad.net/snapd/+filebug [16:53] cmars: if it might be snapd, but it might be something else, https://bugs.launchpad.net/snappy/+filebug [16:53] and then we assign it as appropriate [16:53] Chipaca: thanks! [16:53] cmars: what did you break _now_? :-p [16:54] Chipaca: asking for a friend :) [16:54] :-) [16:56] * kalikiana wrapping up for the day [17:09] holy potato, that's a lovely stacktrace [17:09] zyga: i've run your bug.sh and got https://pastebin.ubuntu.com/p/SJTK4qP5Dc/ [17:10] that's another bug [17:10] I also got it [17:10] run it again [17:10] ah, ok [17:13] zyga, pstolowski fwiw, i triggered the new core build, we should have more debug info soon (~15-30min) [17:13] that's very good, thank you [17:13] ty [17:28] pstolowski: I noticed something weird, at the beginning of doSetupProfiles we do addImplicSlot but I don't see that in doAutoconnect [17:30] do we need that or not === grumble is now known as Guest10721 === rumble is now known as grumble [17:31] pstolowski: wasn't the idea at some point that we wanted to move that somewhere else [17:35] PR snapd#4982 closed: interfaces: make system-key more robust against invalid fstab entries (2.32) [17:35] PR snapd#4983 closed: osutil/sys, client: add sys.RunAsUidGid, use it for auth.json 
[17:35] mvo: so, weird - new core, no error logged, still same bug [17:35] plus, for chipaca, I got this error: Apr 04 19:35:14 t470 snapd[23332]: 2018/04/04 19:35:14.803094 stateengine.go:101: state ensure error: cannot decode new commands catalog: got unexpected HTTP status code 403 via GET to "https://api.snapcraft.io/api/v1/snaps/names?confinement=strict%2Cclassic" [17:36] zyga: that's probably fine [17:37] i mean, i don't know why the store is 403'ing that endpoint, but it's not fatal [17:37] Ensure errors should not block further ensure steps [17:37] ah, error is when I go back to stable [17:37] so expected :/ [17:37] * Chipaca had a panic attack for 7 seconds there [17:37] let's try the other way around now [17:38] Chipaca: yes, that 403 is a bit weird [17:38] so [17:38] stable -> edge [17:38] confirmed new core is there [17:38] but no messages [17:39] WTF [17:39] maybe it isn't broken? [17:39] I'll enable debugging [17:42] mvo: 4900 is green, also I removed the strange logging given that zyga's PR added it at a lower level [17:49] Bug #1761253 opened: Installing a network-bind snap along with core fails results in incorrect permissions === sarnold_ is now known as sarnold [17:52] * zyga adds debugging and repacks core [17:55] mvo: I added some debugging and ...
at the time snapd starts there's no lxd snap [17:55] I get it now [17:55] pedronis: it's funny really [17:55] pedronis: the bug is here [17:55] it's disabled [17:55] snaps, err := snapstate.ActiveInfos(m.state) [17:56] we skip inactive snaps [17:56] that's correct [17:56] but when we re-activate them [17:56] we don't do the right thing [17:56] I suppose [17:57] well "correct" [17:57] hmm [17:57] disable removes profiles [17:57] but doesn't remove from repo [17:58] log of the issue https://www.irccloud.com/pastebin/nxyEebVA/ [17:58] so after snapd starts up, ignoring lxd, it proceeds with the 2nd half of lxd setup [17:59] Apr 04 19:54:42 t470 snapd[7142]: 2018/04/04 19:54:42.808109 taskrunner.go:403: DEBUG: Running task 2249 on Do: Make snap "lxd" (6508) available to the system [17:59] I will review that code next [17:59] we setup profiles [17:59] before ? [17:59] but the snap is not active [17:59] pedronis: thank you! [18:00] it's again an issue that active is conflating too many things [18:00] pedronis: addimplicitslots is only relevant for repo.AddSnap call below [18:01] pedronis: where? I don't see it yet [18:01] zyga: we setup-profiles already so the snap has profiles but is not active [18:01] zyga: nice catch!
about the inactive ones [18:01] so when we restart we don't load it [18:01] pedronis: I think in this run lxd was security setup before the restart [18:01] because I don't see any tasks for it [18:01] in the repo [18:01] but it wasn't active yet [18:01] so it has profiles but is not active [18:01] so it wasn't added [18:01] but we don't add it to the repo [18:01] when it is made active [18:02] and no plugs/slots [18:02] and thus no connections [18:02] we should add inactive snaps but don't build their profiles IMO [18:02] pedronis: ^ [18:02] because if core hadn't restarted it would be still in the repo [18:02] just inactive [18:03] well, inactive in the state, in the repo there's no such notion [18:03] well, normally it's already there [18:03] we cannot blindly add it [18:03] does remove remove the snaps from the repo? [18:03] disable removes profiles, enable adds them [18:03] but neither affects the repo [18:03] but that is profiles [18:03] not repo [18:03] also I'm talking about remove [18:03] pedronis: AFAIR it does not, let me lookg [18:04] *look [18:04] pstolowski, pedronis 4981 looks good to me now, WDYT? [18:05] zyga: I wonder why we are seeing this only now though, we should have had this form of problem for a long while [18:06] more parallelism, restart at the "unlucky" time [18:06] that part of parallelism was always there [18:06] just lxd and snapd sending out a new release close to each other?
[18:06] maybe [18:08] anyway afaict we AddSnap when we setup profiles and at start [18:08] so in theory something should be added if it has profiles [18:08] sadly that != active [18:09] pedronis: setup profiles adds the snap to the repo [18:09] and removes any old snap from the repo [18:09] remove profiles removes the snap from the repo [18:09] we have no memory of the setup profiles add though [18:10] only later they get active [18:10] no, because that's just in memory [18:10] that's not the state [18:10] I know [18:10] that's our problem [18:10] the pre-restart remembered it [18:10] yes [18:10] I agree, [18:10] mvo: I think we know what's the issue now but ideas on how to address it are welcome [18:11] pedronis: so what is that state that lxd is in [18:11] it's not active yet [18:11] is not active [18:11] but we setup profiles for it [18:11] but had profiles setup [18:11] what do we call that state [18:11] but that's not a state tracked in state [18:11] pending? [18:11] right, I'm trying to invent a new name [18:11] we don't track that [18:11] but all old snapds won't track that [18:11] so ... [18:11] a bit of a chicken and egg [18:11] removeProfiles [18:12] also remove from repo [18:12] yes [18:12] so active is almost right except when not [18:12] setup ensures we have the latest snap in the repo and generates profiles [18:12] remove ensures we don't know about it and removes profiles [18:12] haha, yes :) [18:13] pedronis: one more super fun fact [18:13] pedronis: has-profiles flag is not per revision [18:13] can we add the same snap again?
[18:13] what happens in that case [18:13] add no, it would clash [18:13] remove and add yes [18:13] wel [18:13] well adding a snap that's empty is a no-op [18:13] we only really track interfaces [18:14] anyway it doesn't help [18:14] because our problem is reloading connections [18:14] part of the logic remembers revisions (via snap info) so that's why we remove/add snap on setup-profiels [18:14] *profiles [18:14] pedronis: I think my suggestion is still correct [18:14] pedronis: load all snaps into repo [18:14] pedronis: but setup security only for those that are active [18:14] this would fix this issue [18:15] (only on snapd startup, that is) [18:15] then we have the right state to reconnect [18:15] and to setup stuff later [18:15] though... if inactive, which is "latest" [18:15] which revision to pick? [18:16] sounds like we need ProfileRevision in SnapState [18:16] we have Current but that's a guess [18:16] if we are in the middle of other ops [18:17] otoh it's probably what we use anyway [18:17] can we add ProfileRevision in a way that will work on refreshes from snapd that have no idea about it [18:17] well we use it only if not Active [18:17] by definition if Active ProfileRevision == Current [18:18] zyga: if you add all snaps to repo and not just active, that will make their slots/plugs available to the system, no? [18:18] pstolowski: we cannot add all, we need to add one of each [18:18] pstolowski: which revision should we add?
[18:18] pstolowski: and besides, it's still a bit broken [18:19] it will also create problems with autoconnect [18:19] zyga: ah, i see what you mean [18:19] we shouldn't autoconnect to disabled snaps [18:19] pstolowski: even if we add "all" then snap interfaces will tell you about snaps that are disabled [18:19] pstolowski: that's not supposed to happen [18:19] pstolowski: what we need to do is ensure that lxd can be installed the same way as if core wasn't restarting [18:19] track something we don't [18:19] zyga: yes, exactly [18:20] (about interfaces whose snaps are disabled) [18:21] kwi 04 20:21:35 t470 snapd[11725]: 2018/04/04 20:21:35.357788 snapstate.go:1696: DEBUG: skipping inactive snap lxd with snap state &{app [0xc420497400 0xc420497480 0xc420497500] false 6492 edge {false false false false false false false false false false false} map[lxc:0xc42031b380] false true 0} [18:22] some explicit debugging to confirm this [18:26] kwi 04 20:26:08 t470 snapd[16117]: 2018/04/04 20:26:08.840805 snapstate.go:1696: DEBUG: skipping inactive snap lxd with snap state &snapstate.SnapState{SnapType:"app", Sequence:[]*snap.SideInfo{(*snap.SideInfo)(0xc4200c7180), (*snap.SideInfo)(0xc4200c7200), (*snap.SideInfo)(0xc4200c7280)}, Active:false, Current:snap.Revision{N:6492}, Channel:"edge", Flags:snapstate.Flags{DevMode:false, JailMode:false, Classic:false, [18:26] TryMode:false, Revert:false, RemoveSnapPath:false, IgnoreValidation:false, Required:false, SkipConfigure:false, Unaliased:false, Amend:false}, Aliases:map[string]*snapstate.AliasTarget{"lxc":(*snapstate.AliasTarget)(0xc420322360)}, AutoAliasesDisabled:false, AliasesPending:true, UserID:0} [18:26] with field names [18:27] zyga: I see two possible fixes, either we do something like ProfileRevision in snapstate and use that plus Active and Current to decide which snaps to add to the repo.
or we teach doLinkSnap to AddSnap if it's not there yet but then it would need to load the missing connections as well [18:27] and it breaks the ifacestate/snapstate separation [18:27] * zyga looks at doLinkSnap code [18:28] it's already big and in the wrong package [18:28] the appeal is not needing more state [18:29] zyga: there's no cheap way to look at disk and be sure if a snap has profiles and for which revision? [18:29] (anyway that's not our usual approach) [18:29] pedronis: no, we'd have to be "creative" but it's terrible [18:29] stgraber: I wonder if your issues are at all related to https://forum.snapcraft.io/t/snapped-lxd-has-stopped-working-aa-exec-permission-denied/2356/34 [18:29] jdstrand: btw: we understand that bug very well now [18:29] it's an unfortunate coincidence but easy to trigger [18:29] not a new bug in any way [18:30] zyga: you are saying that stgraber's issue is a different, new one? [18:30] jdstrand: we understand issues with missing interfaces that cause the lxd issue with aa-exec [18:30] jdstrand: there are probably 3 different bugs [18:31] pedronis: 3? [18:31] zyga: to be clear stgraber's was related to fresh install and seems to be hitting all of a sudden. mine was refresh.
[18:31] jdstrand: yes [18:31] ah, that's another bug perhaps [18:31] counting this one as well: https://forum.snapcraft.io/t/2-0-lxd-snap-fails-on-sytems-with-partial-apparmor-support/4707 [18:31] deb -> snap missing task #1st bug [18:32] that hits fresh installs [18:32] core, lxd refresh -> #2nd bug [18:32] core, lxd refresh (hook cannot talk to snapd) -> #3rd bug [18:32] core, lxd refresh (missing interfaces on lxd) -> #2nd bug [18:32] that's what we know [18:32] ah, so 4 with something to do with aa-exec [18:32] ah, the partial apparmor is another one [18:33] man, fun day :) [18:33] pedronis: reading doLinkSnap now [18:33] zyga: this is the one I was referring to: https://forum.snapcraft.io/t/auto-connected-interfaces-missing-on-initial-snap-install/4850 [18:33] we have a PR to fix #1 [18:33] jdstrand: that's #1 [18:33] ok [18:34] I don't think we need to do anything about https://forum.snapcraft.io/t/2-0-lxd-snap-fails-on-sytems-with-partial-apparmor-support/4707 (I justify why in the topic) [18:34] I kind of feel this is something we need to think with a fresh mind [18:34] that isn't really a snapd thing. the snap can work around it while the kernel fix makes its way out there [18:34] jdstrand: to be honest it seems we discovered #2 and #3 because there were snapd and lxd releases close to each other [18:35] jdstrand: since you are here: where's the non-abstract x11 socket normally? [18:35] they are both preexisting bugs [18:35] yes, it's an old bug [18:35] with very unlucky timing [18:35] zyga: /tmp/.X11-unix/ [18:35] jdstrand: aha [18:35] /tmp/.X11-unix/X0 [18:35] bummer [18:35] indeed [18:36] why not /run? [18:36] man that's so sad :/ [18:36] hysterical raisins [18:36] I wanted to refactor snap-update-ns to run pre pivot [18:36] then we could save that socket [18:36] can we bind mount the socket from hostfs? [18:36] sure [18:36] would that be okay-ish?
[18:36] that is the request [18:36] so x11/desktop/whatever interfaces could add a mount profile [18:37] x11 and unity7, yes [18:37] note that x11 and unity7 are often declared together [18:37] for /run/snapd/hostfs/tmp/.X11-unix/ -> /tmp/.X11-unix/ [18:37] aha, good point, we don't handle duplicates [18:37] well [18:37] we do [18:37] but not how we want it here [18:37] jdstrand: I understand the issue now [18:38] jdstrand: how can I disable the abstract socket for testing? [18:38] jdstrand: to know I fixed it [18:38] jdstrand: it seems simple (1 day work) [18:38] zyga: I think you mean /var/lib/snapd/hostfs/tmp/.X11-unix/, no? [18:38] zyga: otoh, idk. you might ask the desktop team [18:38] ah, yes :) [18:38] sorry, just tired now [18:39] greyback: do you know how to disable the x11 abstract socket on bionic, for testing? [18:39] pedronis: yes, with 2 and 3 I suspected it had to do with the fact the lxd and core were updated within the same refresh cycle, which is why it has been sporadic [18:40] that will be a nice bug to have fixed [18:40] the 3 that is will be nice together :) [18:40] meh [18:40] all 3 fixed will be nice :) [18:42] mvo: around? [18:44] zyga: yes, was just reading backlog [18:45] zyga: so the bug is that snapd restarts and we loose state about lxd we only had in memory? [18:45] yes [18:45] I'm summarizing this in a new thread [18:45] zyga: if so, would it make sense to force core refreshes before everything else ? [18:45] mvo: ok, I got +1 from pedronis on autoconnect fix, but travis failed on device-registration test.. restarted the tests [18:46] pstolowski: thanks! 
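The bind-mount idea floated above — making the host's /tmp/.X11-unix visible inside the snap's namespace via hostfs — would presumably take the shape of an fstab-style entry in the interface's mount profile for snap-update-ns. A purely hypothetical sketch (the exact path and mount options here are assumptions, not what snapd emits):

```
# hypothetical mount profile entry for an x11/unity7-style interface;
# options are illustrative only
/var/lib/snapd/hostfs/tmp/.X11-unix /tmp/.X11-unix none rbind,rw 0 0
```

As noted in the discussion, x11 and unity7 are often declared together, so the duplicate-entry handling would need to tolerate both interfaces contributing the same mount.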
[18:46] mvo: yes but it's still ugly that we don't have the full state [18:46] zyga: on disagreement, just thinking aloud what we can do short term [18:46] zyga: s/on/no/ [18:46] zyga: sorry [18:46] https://forum.snapcraft.io/t/summary-of-core-lxd-refresh-bugs-discovered-today/4869 [18:47] mvo: I think it's a viable fix [18:47] mvo: it's a bit unclear what that means [18:47] pedronis: if core is refreshed: [18:47] force core before anything else [18:47] pedronis: abort other refreshes [18:47] that doesn't make sense [18:47] pedronis: schedule refresh of all snaps after reboot [18:47] (restart) [18:48] so "snap refresh core lxd" becomes "snap refresh core && snap refresh lxd" [18:48] (internally) [18:48] but you can do snap refresh core [18:48] snap refresh lxd [18:48] yes [18:48] and lxd would say "changes in progress (core)" [18:48] I'm saying for short term [18:48] .4 fix [18:48] not forever [18:49] this would fix more than one bug [18:49] it's not a trivial fix [18:49] (it would fix all of them actually) [18:49] if it's not trivial then that's not a good solution [18:49] so just to double check, the state we are losing is generated when the tasks for the refresh(core,lxd) is generated? and then we restart and those bits are gone? [18:49] but I think it could be easier than a proper fix for each one [18:49] it would not help if we make all other tasks wait for core refresh tasks first? [18:49] mvo: the state we lose is the interface repo [18:49] mvo: if snapd hadn't restarted it would be in correct condition [18:50] mvo: not fully, (see the autoconnect task issue) but mostly [18:50] my point is that splitting up autorefresh to behave that way [18:50] is it because lxd may be inactive at this point? because if so if we ensure that all refresh tasks wait for core things would be ok as well, no?
is not that simple [18:50] mvo: the low level issue is that when snapd starts and constructs the repo we don't know that we need to add some revision of lxd (inactive) to the repo [18:50] zyga: have we considered a rollback of the core snap? it's broken all initial installs. [18:51] zyga: so that is because the lxd refresh is half-done at this point, right? [18:51] dpb1: can you provide more data please? [18:51] mvo: yes [18:51] dpb1: to answer your question, we considered a rollback of core and trying not to break it [18:51] dpb1: how is it breaking initial installs? [18:52] I think he is saying because of bug #1 [18:52] Bug #1: Microsoft has a majority market share [18:52] tinarussell> [18:52] [18:52] we should rollback core [18:52] to 31.x [18:52] because all people trying snaps right now will it [18:52] hit it [18:52] dpb1: ah [18:53] dpb1: I understand your point now [18:53] dpb1: yes, I think that's a viable solution [18:53] mvo: ^ [18:53] jdstrand: ^ [18:53] cachio: can you test that a revert of core from 2.32.1 -> 2.31.2 works ok? and if so I think we need to move stable for core back to 2.31.2 [18:53] JamieBennett: ^ [18:53] zyga: ack, I think we need testing but once that is confirmed +1 [18:53] mvo: yes [18:53] that's a very good outcome for 20:53 [18:53] zyga: also we need to ensure the store is prepared for this (same heads up as with a regular new stable) [18:53] and we need .3 with a few more fixes [18:54] zyga: indeed [18:54] or even 2.31.x with some helper fix [18:54] if needed [18:54] and we could do a post mortem to learn from this [18:55] zyga: mvo: as long as we are sure the rollback is safe and will indeed fix the issue, I'm +1 although I believe we have a fix for #1 very close? [18:55] mvo: zyga: reverting core? Did we miss something?
[18:55] JamieBennett: FYI: issues numbered: https://forum.snapcraft.io/t/summary-of-core-lxd-refresh-bugs-discovered-today/4869 [18:55] cwayne: ^ [18:55] zyga: yes, saw that, thanks [18:55] JamieBennett: two out of three are almost done [18:55] zyga: you asked my opinion. I'm not sure if it was meant for me. if it was, what do you want my opinion on? [18:55] mvo: so is it safer to rollback or push the fix? [18:56] JamieBennett: we don't have a fix yet [18:56] JamieBennett: I would (naturally) prefer the fix, but we have one open issue left and we don't know how widespread that is [18:56] jdstrand: just FYI [18:56] mvo: zyga: the issue of waiting for core, is also that it can happen in the other direction too, snap refresh lxd, snap refresh core, it's unlikely but we would need to wait to deactivate core until there is no interesting pending task [18:56] * jdstrand nods [18:57] pedronis: yes, that's a good point [18:57] pedronis: I would prefer a fix that tracks state correctly [18:57] it feels conceptually easier to explain [18:57] than the waiting workaround [18:57] basically it's doable but it escalates quickly a bit into a tangle of cross checks [18:57] and also more correct technically [18:58] yes [18:58] I will look at tracking this in the state, I can iterate locally without pushing anything [18:58] but I probably will bail soon, I need to think and rest [18:58] zyga, pedronis can we have a quick HO to sync up on possible fixes? or is it too late?
(no need to rebuild core to test) [18:59] mvo: sure [18:59] in 2 min in standup [18:59] stgraber: ^ [18:59] stgraber: would be good to test out side on that too [19:00] mvo: yes [19:03] stgraber: *our side [19:05] dpb1: not sure how to test that, there's no flag to "snap install lxd" that I can pass to tell it to install a particular version of the core snap [19:06] wtf https://www.irccloud.com/pastebin/BIrJMfkH/ [19:08] zyga: ah yeah, we've seen that occasionally before, it's a weird one [19:09] stgraber: we understand it now [19:10] cachio: ping [19:11] zyga: oh, that explanation makes sense, I always thought it was our weird mntns setup that was causing that, but the socket not being reachable because of the restart makes sense [19:12] stgraber, you can snap install core manually though [19:12] stgraber: you may have noticed me talking about lxd yesterday. that is one of the 3 issues being discussed above [19:13] ackk: not for this bug, no [19:13] ackk: this bug specifically happens when the core snap is auto-installed when you install your first snap [19:13] ackk: installing core separately avoids the issue :) [19:13] oh :) [19:14] jdstrand: ah, glad that the different issues have been tracked down and untangled now :) [19:15] stgraber: yes. 'lucky' you it is all happening with the lxd snap. the fixes will make things more robust for everyone :) [19:16] yeah, "lucky"...
timing is kinda crap though as we're doing the social media round for LXD 3.0 tomorrow :) [19:17] for now I updated our install instructions in that post to "snap install core && snap install lxd" with a link to the snapd issue on the forum, that should save us from most bug reports :) [19:17] nice [19:21] zyga, hey [19:22] mvo: #4981 merged [19:22] PR #4981: ifacestate: inject autoconnect [19:22] PR snapd#4981 closed: ifacestate: inject autoconnect === pstolowski is now known as pstolowski|afk [19:23] PR snapcraft#2046 closed: lxd: specify arch in lxc image list command [19:27] pstolowski|afk: thanks! [19:28] PR snapd#4984 opened: ifacestate: inject autoconnect (#4981) [19:31] JamieBennett: I updated the thread with more details and links to the PRs that fix the issues [19:31] cachio: did you see the question from mvo [19:31] cachio: we need to test a rollback of core to 2.31.x [19:31] JamieBennett: I don't think we have a rollback decision yet but it is a tempting choice, mvo will discuss with you when he's back [19:32] zyga, sure, I'll do it now [19:32] JamieBennett: and regardless of the rollback we will do 2.32.4 with fixes for issues 2 and 3 [19:32] and maybe for issue 1 [19:32] zyga, any version in specific? [19:32] cachio: latest 2.31.x so I think 2.31.2 [19:33] As above, I’m +1 if we are confident that it will not cause issues [19:33] JamieBennett: we discussed how to solve issue 2 just now and I'm working on it [19:33] JamieBennett: right, mvo said we would only rollback after cachio gives +1 from test POV [19:33] cachio: 4205-4210 is the previous 2.31.2 [19:33] * JamieBennett nods [19:33] JamieBennett: issue 1 is still a bit more complex but we have some ideas [19:34] hey mvo [19:34] zyga: hey [19:34] mvo, from master, right? [19:35] cachio: from current stable 4325..30 to 4205..10 [19:35] cachio: i.e.
2.32.1 back to 2.31.2 [19:35] mvo, ok [19:36] zyga: the fix for "when old snapd starts the process (think: deb) and new one finishes the auto-connect task is missing" has landed in master now, I created a backport and a new edge core with the fix so that this can be tested for real [19:36] mvo: thank you [19:38] mvo: I think we understood the snapctl socket issue [19:39] it's related to the shutdown logic on restart [19:40] I don't know if there are other failure modes for running hooks while snapd is inactive though [19:43] zyga: mvo: afaict snap-confine dies if base current is not set? [19:43] pedronis: yes but it only does so if it needs to construct a new namespace [19:43] pedronis: if the snap is around (like lxd would) it has a namespace to enter [19:44] ok, so there's an issue, but again infrequent (possibly) [19:44] looking at the trace I attached it has one [19:44] DEBUG: sending information about the state of the mount namespace (keep) [19:44] this is critical [19:44] it says that the namespace is in sync with what base snap we expect to use [19:45] so we reuse it [19:46] well that code does: [19:47] // Read the revision of the base snap by looking at the current symlink. [19:47] sc_must_snprintf(fname, sizeof fname, "%s/%s/current", [19:47] SNAP_MOUNT_DIR, base_snap_name); [19:47] if (readlink(fname, base_snap_rev, sizeof base_snap_rev) < 0) { [19:47] so I suppose it would die if current is not there [19:47] yes, that's true [19:48] if current is not there we die earlier [19:48] because /dev and what not won't be there [19:48] just unmount current and see [19:48] so we have a general problem about upgrading bases and their snaps at the same time [19:55] so this one would need conflicts and wait to be solved [19:55] mvo, I ran it manually and I did not see any issue, should I test it in any specific arch or system [19:55] ?
[20:00] mvo: zyga: I added a note about base current vs snap ops in that forum topic, it probably merits its own topic at some point but at least it will not get lost [20:00] thanks [20:02] mvo, https://paste.ubuntu.com/p/5DWzM6Dytf/ [20:03] 2.32.1 -> 2.32.2 -> 2.32.1 worked fine here [20:04] cachio: great, do you think you could verify on a core device as well? might be tricky to get 2.31.2 there but you can publish the core snap so "snap refresh --revision=4205 core" should work [20:05] mvo, sure, let me try it [20:05] cachio: --revision=4206 if you are on amd64 - 4209 for pi2 :) [20:06] ok, let me flash the image and I'll try it [20:08] pedronis: still there? [20:08] I wonder if it is right still (security revision) [20:09] writing the commit message for a WIP patch [20:09] when we refresh, r1 is setup [20:09] but we want r2 [20:09] and r2 is not active yet [20:10] we don't remove security of r1, just setup for r2 [20:10] so we want to store r2 [20:10] and reload r2 [20:10] * zyga actually thinks it is fine again [20:10] just tired [20:10] thank you f:) [20:10] for listening [20:12] zyga: we need to store the revision of the thing we AddSnap [20:13] pedronis: quick WIP patch https://github.com/zyga/snapd/commit/a98a927d1ae472f849ed47f6ad396cdf7d98c636 [20:13] untested [20:14] I didn't move any code around to minimize the diff as it will have to use some private functions that need to become public first [20:15] zyga: we have an issue [20:17] zyga: when we install I don't think SnapState exists in snaps until we do link [20:17] hmm hmm [20:18] and storing it without all the info breaks invariants [20:19] zyga: I fear, we need a separate security-profiles-revision map [20:20] hmm hmm [20:20] we setup profiles before we link [20:20] yea, that's the issue [20:20] we have Candidate [20:21] can we use that? [20:21] ?
[20:21] that is on the task [20:21] no [20:21] ah [20:22] hmm [20:22] but don't we have a deeper issue then [20:22] setupAffectedSnaps has code like [20:22] snapstate.Get(st, affectedSnapName, ...) [20:22] we don't connect to things that are not installed [20:22] if I install a pair of snaps [20:23] would they auto-connect to each other? [20:23] or only after installation? [20:23] *after linking [20:23] only after installation [20:23] ok [20:23] yeah, I see your point [20:23] unless they are default-providers [20:23] oh [20:23] in which case we wait for some things [20:23] we install the provider first? [20:23] (fully) [20:23] not fully [20:24] but we don't do autoconnect [20:24] but link it? [20:24] aha [20:24] well [20:24] I see the problem [20:24] Issue snapcraft#1672 closed: Add pre-pull/pull/post-pull [20:24] PR snapcraft#2045 closed: many: add override-pull scriptlet [20:24] thank you for the quick insight [20:24] until they have done setup-profiles [20:24] anyway, as I said, I think we need a separate map [20:25] it's really an issue of snapstate has some state, and ifacestate needs some different state [20:25] note that the error from that get is non-fatal [20:25] as we had "mysterious issues" [20:25] so maybe another layer of mud needs digging [20:28] but I see we find autoconnect candidates based on the repo [20:28] so there might be stuff there [20:28] that doesn't exist in snaps yet [20:30] but then we go and setup security and that may bite us [20:30] since we get the snaps from the state [20:30] because we had that bug and gustavo wanted us to always look at the state [20:30] maybe that was not right [20:30] and maybe refreshing the repo explicitly like we do is enough [20:30] but all questions need thinking now [20:31] well there's a disagreement between snapstate and ifacestate/repo [20:31] about what's active [20:31] active is misleading [20:31] but yes [20:31] there's disagreement [20:32] some of it is just the nature [20:32] of splitting tasks
between the two [20:32] some of it could be reconciled [20:35] zyga: anyway that Get skips if there's an error [20:35] yes [20:35] so whatever the issue is not explosive [20:35] but not sure if it should, I don't know when that may legally happen [20:35] and has been there since long [20:35] yes, we made it non-explosive [20:35] zyga: I don't think we can tackle that one now [20:37] testing my WIP patch [20:37] I security-revision is stored, now running the script [20:38] it didn't work [20:38] somewhere when we remove profiles [20:38] we reset the revision [20:38] and then restart must happen after that time [20:39] kwi 04 22:38:00 t470 snapd[3451]: 2018/04/04 22:38:00.605460 snapstate.go:1714: DEBUG: skipping inactive snap lxd with snap state &snapstate.SnapState{SnapType:"app", Sequence:[]*snap.SideInfo{(*snap.SideInfo)(0xc4200b5100), (*snap.SideInfo)(0xc4200b5180), (*snap.SideInfo)(0xc4200b5200)}, Active:false, Current:snap.Revision{N:6492}, SecurityProfilesRevision:snap.Revision{N:0}, Channel:"edge", Flags:snapstate.Flags{DevMode:false, [20:39] JailMode:false, Classic:false, TryMode:false, Revert:false, RemoveSnapPath:false, IgnoreValidation:false, Required:false, SkipConfigure:false, Unaliased:false, Amend:false}, Aliases:map[string]*snapstate.AliasTarget{"lxc":(*snapstate.AliasTarget)(0xc4204f5120)}, AutoAliasesDisabled:false, AliasesPending:true, UserID:0} [20:39] SecurityProfilesRevision is zero [20:39] but then we will do setup-profiles [20:40] is setup-profiles not doing enough? 
[20:41] not sure I understand [20:41] the theory was that we did setup-profiles but not link-snap [20:41] but here we didn't do setup-profiles [20:41] otherwise profile revision would be set [20:42] we did, I refreshed lxd manually a few times and used jq to read the profile revision from the state [20:42] I probably don't understand the refresh logic well enough [20:42] and when we unlink the old snap we remove its profiles [20:42] and that resets the revision [20:42] and then if we restart we are in a bad state [20:43] (maybe) [20:43] that's just me guessing now [20:43] but that's always been like that [20:43] also afair we don't do that [20:43] this is interesting [20:43] https://www.irccloud.com/pastebin/t1zwTx1t/ [20:43] we setup profiles for lxd [20:44] (probably don't finish) [20:44] then core restarts [20:44] only Remove and Disable remove-profiles [20:44] so we never set it [20:44] ah, maybe my patch is wrong then [20:44] I patched it so that doRemoveProfiles resets the revision to zero [20:44] maybe it's called in other places [20:44] * zyga looks [20:45] actually, removeSnapSecurity [20:45] which is only called from removeProfilesForSnap [20:45] you should do things close to where [20:45] we do AddSnap [20:46] and RemoveSnap from repo [20:46] yeah [20:48] tweaking [20:49] not that I see other places calling removeSnapSecurity [20:50] quick diff [20:50] https://github.com/snapcore/snapd/compare/master...zyga:fix/reload-repo-inactive-snaps?expand=1 [20:50] ah [20:50] wrong, didn't push [20:51] now it's correct [20:51] well, up-to-date [20:54] mvo, same in pi2 [20:54] it works fine [20:56] zyga: fantastic, so now we can say we have a fully working snapd for opensuse :D [20:57] zyga: it looks reasonable, are we missing something obvious? 
Caelum: we need to add snapd to factory and package gnome-software extensions [20:57] pedronis: testing it now [20:57] (apart from the fact that we cannot use SnapState but that's for install) [20:57] yes [20:57] fingers crossed [20:57] zyga: of course there is always more stuff to do [20:58] hmm hmm hmm [20:58] pedronis: it still didn't store the rev [20:58] on refresh it gets reset to zero [20:58] zyga: the web page didn't get rebuilt yet btw, but I guess that happens in some cron job [20:58] maybe the Get's I added just fail [20:59] Caelum: if it's not refreshed tomorrow I will poke people for a rebuild [20:59] great [20:59] should I start playing with 42.1 in my branch to get ready for 42.2 ? [20:59] zyga: you should probably log from the places you are setting/resetting it [20:59] yes, patching now [21:00] I mean 32.1 [21:00] and you already released 32.2, that's the one you wanted to update to? [21:01] Caelum: 32 is a troubled child :) [21:01] mvo https://paste.ubuntu.com/p/fqBtRBsX33/ [21:01] Caelum: we have some issues to work through [21:01] I think 2.32.4 will be good [21:07] pedronis: nope [21:07] whatever is happening I'm losing the security profiles revision [21:08] I added logging [21:08] is the state overwritten ever without loading? [21:08] we are setting it? [21:08] ye [21:08] yes, it's set [21:08] and not resetting it? [21:08] also seen in jq [21:09] we are not unsetting it [21:09] unless I made a silly bug in the code [21:09] cachio: looks like no issues there [21:09] I get the state from the task [21:09] then get, update and set the snap state [21:09] seen in jq? [21:10] PR snapd#4984 closed: ifacestate: inject autoconnect (#4981) [21:10] seen in jq set?
[21:10] yes, I see both the revision (before running bug.sh) and unset in jq [21:10] yes [21:10] I snap refresh --edge lxd; then same to --stalbe [21:10] stable [21:10] and that sets it [21:10] (using my hacked edge core) [21:10] then refresh core, lxd to stable and back to edge [21:10] and it is gone [21:10] well, stable will write its stuff [21:10] and lose that no? [21:10] overwrite ! [21:10] oh [21:10] drat [21:10] so yeah [21:11] we need a new map [21:11] ? [21:11] but this is also scary [21:11] we cannot fix the bug in stable [21:11] yes yes, my point is that old snapd overwrites and discards this data [21:11] and it's not great [21:11] we need a map [21:11] anyway [21:11] it's one thing to have a field and not use it [21:11] but in general we cannot fix the bug for stable [21:11] and another to overwrite and remove a field [21:12] sure, I just didn't expect stable to erase this [21:12] but yeah [21:12] ok [21:12] I think I need to EOD now [21:12] stable will still not do the right thing [21:12] too tired and too much hectic stuff at home [21:12] I will use a new map tomorrow and test this [21:12] if this is fine I will write proper tests for everything [21:12] how shall we call it? [21:12] repostate? [21:13] the question is whether we will need more bits? [21:13] I mean should we start from day 0 [21:13] 8ball [21:13] with name -> struct [21:13] or only name -> revision [21:13] I'd go struct [21:13] but with the problem of destructive updates [21:13] it's probably irrelevant [21:14] but I'd go with struct anyway [21:14] easier to code and nicer [21:14] repostate[snapName].Revision [21:14] security-profiles-state ? [21:15] ifacestate?
to tie this into the interface manager (iface-state) [21:15] not sure [21:15] we can rename it tomorrow :) [21:16] mvo: I'm EODing [21:16] let's regroup tomorrow [21:17] thank you for the insight tonight pedronis, it was invaluable [21:17] zyga: yeah [21:17] zyga: I think we need the revert [21:17] yes [21:17] let's revert, my +1 goes there [21:18] cachio: can you please run some more tests, ideally a "snap refresh --revision=4206 core" and some spread testing with such a setup? I think we need to revert tomorrow [21:18] not sure if you want to do it tonight [21:18] if we need an ack from the store [21:18] zyga, pstolowski|afk unless I did something wrong in my tests the inject-task is not fully working: https://paste.ubuntu.com/p/BxmxD3VFCD/ [21:19] zyga, pstolowski|afk "2018-04-04T23:11:59+02:00 INFO cannot auto-connect plug lxd:lxd-support to core:lxd-support: no state entry for key" [21:20] zyga, pstolowski|afk let's talk more tomorrow [21:20] no state entry for key :/ [21:20] ErrNoState [21:20] sucks to be us today [21:20] ttyl [21:21] mvo: I think it's probably inserted in the wrong place :/ [21:23] mvo: pstolowski|afk: yes, for a snap that is not core, it needs to come after link-snap [21:23] not after setup-profiles [21:23] see doInstall code [21:26] oh my [21:30] mvo thanks for the test... it's a shame a spread test wasn't possible [21:53] Hey guys.. does anyone know if the nodejs part has been updated recently? [21:54] I've used the nodejs part for a while, and suddenly it's started giving me this error: [21:54] Failed to run 'npm ls --global --json' for 'node': Exited with code 1. Verify that the part is using the correct parameters and try again. [21:54] even for builds that previously succeeded on launchpad === Guest20669 is now known as devil_ [22:18] geekgonecrazy, do you specify a node version?
[22:20] kyrofa: yup, otherwise mass chaos in Rocket.Chat :D
[22:20] unfortunately very tied to node.js versions at times
[22:21] trying to locate the `npm ls --global --json` because I can't recall that ever being run when I do `npm install` locally
[22:22] So my assumption was it was in the nodejs part somewhere maybe?
[22:24] geekgonecrazy, when was your most recent successful build?
[22:25] last night. Looking at the wiki page for the parts, best I can tell the last time it was updated was the 1st
[22:25] zyga: to disable the abstract socket, add "-nolisten local" to the X server arguments. Most display managers already add "-nolisten tcp"
[22:25] thanks
[22:25] Huh, so it doesn't involve the snapcraft update from last week
[22:26] I'm not familiar with the nodejs part
[22:26] That could be it
[22:26] Maybe reach out to the maintainer?
[22:27] well this is intriguing. I appear to have managed to get snapcraft into an infinite loop somewhere at "preparing to build desktop-gtk2" - it's sat there using 100% of a CPU core
[22:27] kyrofa: any way via a snapcraft command to see the maintainer? I might just manually install node.js as a less ideal workaround.
[22:28] geekgonecrazy, wait... I don't see a nodejs part :P
[22:28] diddledan, go away, my day was good
[22:28] :-p
[22:28] * diddledan cuddles kyrofa
[22:28] kyrofa: I didn't see one via that page, but I'm somehow using one :D
[22:29] geekgonecrazy, let me take a look at your yaml
[22:30] kyrofa: don't judge too much :D https://github.com/RocketChat/Rocket.Chat/blob/develop/.snapcraft/snapcraft.yaml
[22:31] oh duh.. plugin not part
[22:33] oh ello.
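The "-nolisten local" tip at [22:25] goes on the display manager's X server command line. As a hedged example only (assuming LightDM and its documented lightdm.conf format; the file path, section name, and key are from LightDM's configuration scheme, and other display managers have their own equivalent setting):

```
# /etc/lightdm/lightdm.conf (sketch)
[Seat:*]
# Most display managers already pass -nolisten tcp;
# -nolisten local additionally disables the abstract socket.
xserver-command=X -nolisten tcp -nolisten local
```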
[22:33] it finished
[22:33] wasn't an infinite loop then, just a veeeery slow install
[22:34] maybe it's my ISP - they've been battling a DDoS the past few days
[22:34] ah yeah, desktop helpers can be a bit slow
[22:35] this is on the building of the snap, rather than initial launch, popey, so I'm assuming my net connection was taking forever for snapcraft to fetch the needed bits to work out how to build
[22:35] yes
[22:35] building can be slow too
[22:35] aah
[22:35] ok
[22:35] * zyga will start later tomorrow
[22:36] I cannot even sleep now with all the snapd state in my head
[22:39] you should share states with someone else ;-p
[22:39] shared state is so much more reliable, right? :-D
[22:39] kyrofa: btw it's not working with node-engine: 8.9.4 (which used to work). It might be something on our side.. Maybe I need to force a newer version of npm to be installed with that older version of node
[22:40] zyga: hey
=== mwhudson_ is now known as mwhudson
[22:47] kyrofa: looks like it's https://github.com/snapcore/snapcraft/blob/master/snapcraft/plugins/nodejs.py#L250 that for some reason with node.js 8.9.4 isn't working. Not sure why now. Is there any way to use the nodejs plugin and keep it from doing the npm install? :D
[22:48] omg!!!!!!! irccloud 0.6.0 lets you connect to slack channels!!! p.s. irccloud 0.6.0 works in a snap
[22:49] * diddledan pokenprod popey
[22:49] PR incoming!
[22:49] diddledan: iirc, slack is discontinuing the IRC gateway
[22:49] pretty soon, at that
[22:49] diddledan: srsly?
[22:49] no more chmod nonsense?
[22:50] nope, the new core fixes it by making it an EPERM rather than SIGKILL
[22:50] Oh yes, time for a daily hug for jdstrand
[22:50] \o/
[22:50] * jdstrand hugs popey
[22:51] nacc: it works with a connector installed in slack. i.e. it does not use the IRC gateway
[22:51] wow funky
[22:51] diddledan: oh interesting!
[22:52] you need an admin of each slack workspace to enable the plugin tho
[22:52] diddledan: ah
[22:52] looking forward to the pr :)
[22:54] geekgonecrazy, sorry, phone call
[22:54] geekgonecrazy, yeah, that hasn't changed for quite some time
[23:04] well fooey.. my irccloud desktop repo isn't linked
[23:06] popey: https://github.com/snapcrafters/irccloud-desktop/pull/6
[23:06] PR snapcrafters/irccloud-desktop#6: update to irccloud 0.6.0
[23:09] aah. the icon doesn't appear
[23:43] PR snapcraft#2049 opened: many: add override-stage scriptlet
[23:56] kyrofa: thanks for taking a look. Went ahead and just did the whole manual node.js install. For whatever reason that step where it does npm ls --global --json in combination with one of our npm dependencies... just blows up
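For reference, a rough Python sketch of what that failing step amounts to — an assumption based on the error text at [21:54] and the nodejs.py link above, not snapcraft's actual code: run `npm ls --global --json`, parse the JSON, and treat any non-zero exit as fatal. `npm ls` exits non-zero whenever it finds problems in the dependency tree (missing, invalid, or extraneous packages), which is how a project's own npm dependencies can blow up this step even when the install itself succeeded:

```python
import json
import subprocess

def list_global_packages(npm="npm"):
    """Sketch of the 'npm ls --global --json' step; fails like the plugin does."""
    proc = subprocess.run([npm, "ls", "--global", "--json"],
                          capture_output=True, text=True)
    if proc.returncode != 0:
        # npm ls exits non-zero for dependency-tree problems, even when
        # its JSON output would otherwise be usable
        raise RuntimeError("Failed to run 'npm ls --global --json': "
                           "Exited with code %d." % proc.returncode)
    return json.loads(proc.stdout)
```

A more forgiving variant could tolerate a non-zero exit when stdout still parses as JSON, but that is a design question for the snapcraft side rather than something this sketch decides.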