=== epod is now known as luk3yx
=== chihchun_afk is now known as chihchun
=== chihchun is now known as chihchun_afk
[04:15] plars: canceled the job
[04:15] ijohnson: thanks, feel free to resubmit, just keep that section of the yaml from the example
[05:17] morning
[05:46] Good morning
[06:09] zyga: hey
[06:10] mvo: hi
[06:11] mvo: zyga: i'll be out for a while in the morning, sent a message in the forum
[06:13] hey mborzecki
[06:13] mborzecki: no worries, thanks for letting us know
[06:13] zyga: good morning to you as well
[06:16] Hey mvo
[06:17] Mvo: some bad news from last evening
[06:17] Mvo: snapd broke in adt
[06:17] There is a bug with the details
[06:17] I'm with the dog outside so no links, sorry
[06:17] Just sort by snapd bug numbers
[06:21] zyga: oh, interesting
[06:23] zyga: reading now, we need to investigate
[06:24] having snap changes output would be good
[06:26] mvo: I can look back home
[06:29] zyga: it really looks a bit mysterious, no trace of core there
[06:30] zyga: that sounds more like a bug in our testsuite
[06:35] good morning!
[06:38] mvo: note that it happens outside of the test suite
[06:39] It affects docker, for example
[06:41] Hey dot-tobias :-)
[06:41] Hi zyga 😊
[06:42] zyga: oh, hmmmmm
=== chrisccoulson_ is now known as chrisccoulson
[06:47] I'm back home; I'll make coffee and see what I can find
[07:11] back from breakfast
=== pstolowski|afk is now known as pstolowski
[07:12] morning
[07:12] good morning pawel!
[07:15] hey pstolowski
[07:15] 6706 needs a review
[07:18] mvo: doing
[07:19] ta
[07:20] done
[07:23] ta
[07:44] zyga: the indent in 6706 was added to make it easier to read, is that bad in some way?
[07:44] it looks like there is an extra space there
[07:45] perhaps tabs vs spaces?
[07:45] zyga: it looks like it's just a strange formatting of the diff, let me double check with side-by-side view
[07:46] zyga: it looks ok here (unless I miss something), the first "ifneq" is unchanged, maybe this is why it looks strange
[07:47] https://usercontent.irccloud-cdn.com/file/sqaMmc5M/Screenshot%202019-04-11%20at%2009.47.03.png
[07:47] does the line with the + have a space up front?
[07:48] oh
[07:48] mvo: the logic is wrong
[07:48] no?
[07:48] you want ifeq, not ifneq
[07:48] because when you have two ifneq lines, one will always match
[07:48] ah, I see, they *are* nested
[07:48] mvo: let me suggest something
[07:49] zyga: hm, let me look. if that's so then the test is also broken
[07:52] zyga: I can change it to use "dpkg-architecture -qDEB_TARGET_ARCH_BITS"
[07:52] mvo: added one more comment
[07:52] mvo: +1 on that idea
[07:52] mvo: on both the test and the build rules
[07:53] zyga: ok, let me look at this
[07:56] mvo: we have one more bug report https://bugs.launchpad.net/snapd/+bug/1824242
[07:57] perhaps something that warrants an SRU?
[07:57] Bug #1824242: snapd can't be purged with latest AWS Xenial AMI
[08:00] zyga: yeah, if that is missing from 2.38 we definitely need to add the cleanup in 2.38.1
[08:00] zyga: 2.38.1 will also include 6706
[08:00] sorry for the busy morning
[08:00] I am looking at the adt issue now
[08:02] zyga: thanks, +1
[08:03] yeah, 2.38 has the right "rm -rf" call
[08:03] let's update the bug to reflect that 2.38 fixes it
[08:04] zyga: I just did that
[08:09] mvo: hi, I added this card: https://trello.com/c/tx49EL3F/221-add-black-box-test-testing-memory-mappings-mmap-sizes-and-max-resident-memory-against-decided-limits-budget-for-snapd-and-snap-i
[08:12] Hi gang.
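The intent behind the rules fix being discussed can be sketched in shell. This is illustrative only — `buildmode_for_bits` is a made-up helper, not snapd's actual debian/rules, which uses make conditionals; the `dpkg-architecture -qDEB_TARGET_ARCH_BITS` query is the one mvo proposes above:

```shell
#!/bin/sh
# Sketch of the intent of snapd PR 6706: build Go with -buildmode=pie
# everywhere except 32-bit targets (armhf), where PIE triggered the
# memory issue. buildmode_for_bits is an illustrative helper only.
buildmode_for_bits() {
    if [ "$1" = "32" ]; then
        echo ""              # 32-bit: PIE disabled
    else
        echo "-buildmode=pie"
    fi
}

# In debian/rules the bits value would come from:
#   dpkg-architecture -qDEB_TARGET_ARCH_BITS
buildmode_for_bits 32
buildmode_for_bits 64
```

Using a single numeric comparison avoids the nested-ifneq trap discussed above, where one of two ifneq branches always matches.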
We're suddenly seeing this bug: https://bugs.launchpad.net/ubuntu/+source/gnome-initial-setup/+bug/1824188
[08:12] Bug #1824188: Software tab is empty on clean 19.04 install
[08:12] GNOME Initial Setup is getting a "connection reset" from snapd when trying to get the list of software for promotion at the end of set up during first login
[08:12] Could you take a look?
[08:12] or lend a hand
[08:13] hey willcooke
[08:13] "Failed to get featured snaps: Failed to read from snapd: Error receiving data: Connection reset by peer"
[08:13] I think we will have to, some more 19.04 reports came in yesterday
[08:13] I'll prep a VM as soon as I'm done looking at ADT issues
[08:13] thanks zyga
[08:13] I had snap list failing also yesterday, but disk was full so maybe a side effect of that
[08:13] I'll add some more logs to the bug
[08:13] thx zyga
[08:13] thank you!
[08:38] zyga: I updated 6706 - anything new on the ADT issue?
[08:40] mvo: not yet, pulling many things at the same time makes my link slow; just constructed an adt image
[08:41] zyga: ok
[08:41] zyga: it was specifically cosmic, right?
[08:41] yes
[08:41] I just spawned the test
[08:41] we'll know shortly
[08:42] ta
[08:47] re
[08:49] hey mborzecki - welcome back
[08:49] mborzecki: want to look at 6706? should be easy
[08:51] eh
[08:51] I love hand-holding adt
[08:51] nothing works in that thing
[08:51] obviously it cannot talk to the network
[08:51] * zyga is debugging
[08:52] mvo: sure, looking
[08:52] ah that's the PIE thing
[08:54] mvo: do you know how to spawn adt in qemu with bridged network or something that is not just broken?
[08:55] I used
[08:55] autopkgtest -s -U snapd_2.38+18.10.dsc -- qemu ./autopkgtest-cosmic-amd64.img
[08:55] this doesn't work
[08:55] zyga: oh, that's strange, that should work
[08:56] let me grab some logs
[09:08] mvo: posted some comments there
[09:08] mvo: basically, i think it would make sense to include s-c in the test and make sure it's built with PIE, because it's C, so it's bad and all that :)
[09:48] zyga: is snap-confine poking $SNAP_DATA now? see a new denial in the selinux branch
[09:48] mborzecki: details please
[09:48] type=AVC msg=audit(1554975937.636:129): avc: denied { getattr } for pid=1099 comm="snap-confine" path="/var/snap/test-snapd-service/x1" dev="vda1" ino=393657 scontext=system_u:system_r:snappy_confine_t:s0 tcontext=system_u:object_r:snappy_var_t:s0 tclass=dir permissive=1
[09:48] zyga: ^
[09:50] ha, maybe it's the cwd changes in snap-confine
[09:51] mborzecki: so, snap-confine always did that
[09:51] mborzecki: sounds good, go for it
[09:51] getattr?
[09:51] mborzecki: or in a separate PR, no strong opinion
[09:51] mborzecki: perhaps it is a combination of two things:
[09:51] this being a service
[09:51] zyga: yes, fstat probably
[09:51] so HOME is set to $SNAP_DATA
[09:51] and the cwd changes
[09:52] since apparmor does not mediate fstat at all
[09:52] it was not showing up
[09:52] zyga: i see we set WorkingDirectory for generated services to $SNAP_DATA
[09:52] why did it not show up in the selinux test?
[09:52] so that's probably it
[09:52] mborzecki: I agree
[09:53] mborzecki: is the selinux test capable of seeing the denials now?
[09:53] mborzecki: was it one test or a restore-time check in all tests?
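For reference, an AVC line like the one quoted above can be mechanically turned into the allow rule the policy would need. This is a generic SELinux triage sketch in plain text processing, not snapd's actual policy workflow:

```shell
#!/bin/sh
# Extract the source type, target type, and object class from the AVC
# denial quoted above, and print the corresponding allow rule.
denial='type=AVC msg=audit(1554975937.636:129): avc: denied { getattr } for pid=1099 comm="snap-confine" path="/var/snap/test-snapd-service/x1" dev="vda1" ino=393657 scontext=system_u:system_r:snappy_confine_t:s0 tcontext=system_u:object_r:snappy_var_t:s0 tclass=dir permissive=1'
src=$(printf '%s\n' "$denial" | sed -n 's/.*scontext=[^:]*:[^:]*:\([^:]*\):.*/\1/p')
tgt=$(printf '%s\n' "$denial" | sed -n 's/.* tcontext=[^:]*:[^:]*:\([^:]*\):.*/\1/p')
cls=$(printf '%s\n' "$denial" | sed -n 's/.*tclass=\([a-z_]*\).*/\1/p')
echo "allow $src $tgt:$cls getattr;"
```

In practice this is what tools like audit2allow automate; doing it by hand just makes the source/target types of the denial explicit.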
[09:53] zyga: interesting how this test will be able to detect this :)
[09:54] zyga: it's the selinux-clean test, specifically targeted to catch any denials that may come up
[09:54] I see
[09:54] perhaps we should move the check to the post-restore stage
[09:54] like we do with apparmor, I believe
[09:55] zyga: yeah, seems like we'll be able to update the policy proactively now, rather than waiting for bug reports in rhbz
[09:55] +1
[09:59] popey: on the 19.04 opengl issue, as you know I asked for some help on twitter and got a very useful response
[10:00] Excellent
[10:00] Good to hear twitter isn't just for nazis :D
[10:00] popey: I will look at what the situation is inside a virtual 19.04 system in the context of a separate issue that was raised by willcooke today; after that I should be able to make some progress towards understanding the cause of the opengl regression
[10:00] Thanks for letting me know!
[10:01] popey: arguably that was even more useful than insta-ordering a gtx on amazon
[10:02] because I got feedback from a more diverse collection of systems
[10:02] Wellll....
[10:02] I mean, okay, but if we had some QA on these GTX systems *before* I found the issue, that argument doesn't hold.
[10:08] We don't do QA like that I'm afraid
[10:08] but we also know about the shortcomings of the solution we have now
[10:08] and we have the improvements in the roadmap
[10:08] for this cycle (though arguably we will be late)
[10:09] I'm saying that the wheels started spinning already
[10:10] mborzecki: some failures from autopkgtest in qemu:
[10:10] https://www.irccloud.com/pastebin/vUlozfth/
[10:10] perhaps insufficient mocking?
[10:11] running more tests
[10:12] hm maybe
[10:12] btw. there's no 18.04-32 in the spread suite?
[10:15] zyga: the nvidia card is in an older system i have here, i can look into 19.04 later on if i manage to install it on a usb stick
[10:15] mborzecki: I have useful data already, I need to process it first
[10:15] zyga: ah, that's fine then
[10:16] zyga: has anything changed with the libs again?
[10:16] not sure yet
[10:17] pedronis: hey, thoughts on snap debug timings --ensure=... output: https://paste.ubuntu.com/p/dXZqyDxVG6/ ?
[10:19] pstolowski: thx, need to think a bit. I will look at the PR producing the data today
[10:30] brb
[10:41] Chipaca: hey, are you looking at the 19.04 empty software tab bug?
[10:41] zyga: ye
[10:41] great, thanks!
[10:48] why is journalctl still broken on 19.04?
[10:49] stuff is going to /var/log/syslog, skipping journalctl entirely
[10:49] this is a breaking change :-(
[10:49] woah
[10:49] that's odd
[10:50] btw. is 19.04 beta ok to use for install or should i rather try a daily one?
[10:50] I downloaded daily but did not install it yet
[10:50] zyga: anything new on the ADT issue? I ran it in qemu 3 times and no failure :/
[10:50] same here, more iterations
[10:50] I updated the bug report a moment ago
[10:52] willcooke: why is journalctl not showing logs on 19.04, do you know?
[10:55] Chipaca, we had this problem before when we were trying to work out why it wasn't installing some snaps. I don't think we ever got to the bottom of it. It seems that it was just snapd that wasn't logging properly, even with the logging turned right up. Was it that it was just taking a while to get flushed to the log?
[10:56] willcooke: the logs are in /var/log/syslog, immediately, but journalctl always reports empty
[10:56] seb128, any ideas? ^
[11:04] PR snapd#6706 closed: ubuntu: disable -buildmode=pie on armhf to fix memory issue <⚠ Critical>
[11:05] I cherry picked 6706 now for 2.38
[11:05] mvo: will you also tackle the PIE of snap-confine?
[11:06] zyga: not right now, also not as part of 2.38.1 - no time today.
but feel free to grab it (or maybe mborzecki)
[11:11] Chipaca, willcooke, I guess snapd doesn't use the proper logging api/could integrate with the journal? a problem on the snapd side for sure
[11:11] seb128: how does that explain the difference between 19.04 and prior releases?
[11:11] seb128: ? how can it be "a problem on the snapd side for sure" when the exact whatever-it-is-snapd-does works on anything other than 19.04?
[11:12] Chipaca, like you are sure it works on fedora 30 or other distros with the same systemd stack?
[11:13] ddstreet: re failing autopkgtest - do you have a machine around that you can ssh into that has the failure? I would love to see the "snap changes" output of it
[11:13] zyga, I don't know what snapd is doing, maybe it's relying on old syslog and should integrate better with the systemd journal?
[11:13] well, I guess it could be an issue on the systemd side
[11:13] ddstreet: fwiw, it looks like things started to fail at 2019-04-03
[11:13] seb128: I see, perhaps the question is: how should snapd log?
[11:13] it should integrate with journalctl
[11:13] imho
[11:14] it could be that it's doing things in a "legacy" way and that regressed on the journal side, so a bug
[11:14] anyway, I think that needs looking at by someone who understands the snapd logging code and what is done exactly
[11:14] mvo, let me see if i can repro it in a system on the canonical vpn
[11:14] snapd, and all snaps that have daemons, rely on the behaviour that a service printing to stdout/stderr ends up in the journal
[11:15] seb128: ^
[11:15] Chipaca, snapd re-execs itself?
could that confuse the systemd units or something
[11:16] Chipaca, zyga, in any case it's an issue specific to snapd so unsure what it's doing differently, but imho the best person to debug that is someone who knows what snapd is doing exactly
[11:16] or maybe talk to foundations if you believe the issue is on the journal side
[11:17] but they are probably going to need a testcase/details on what snapd is doing and how it's different from other services that work fine
[11:17] gah
[11:17] ok, the problem is different
[11:17] seb128: 'journalctl -u snapd' says no entries
[11:18] seb128: 'sudo journalctl -u snapd' works
[11:18] so, it is a behaviour change in journalctl, but not as bad as i thought it was
[11:19] ah, so logging is working, good :)
[11:20] Chipaca: perhaps it's just a permission difference
[11:20] Chipaca: on many distributions journalctl is locked down
[11:20] zyga: yes, but you get an error when you try
[11:20] not a 'no entries'
[11:20] Chipaca: is it possible that you just reach the per-user instance of journalctl?
[11:21] ¯\_(ツ)_/¯ i have no idea
[11:21] i'm trying to debug wacky interactions with snapd elsewhere in this
[11:21] journalctl -u snapd works on disco here, without sudo
[11:21] not figure out why systemd changed the rules again
[11:21] but it's not a new install
[11:21] so maybe some permission is not correctly set or something
[11:22] Chipaca, does it work for other units? like gdm or upower?
[11:22] seb128: no
[11:23] Chipaca, and "systemctl --system -u snapd"?
[11:23] insufficient permissions
[11:23] k
[11:23] so that's the issue
[11:23] probably worth talking to xnox about and/or reporting on launchpad against systemd
[11:24] que?!
[11:24] xnox, is it normal that journalctl on disco new installs requires sudo to access the system logs?
[11:24] what do you expect "systemctl --system -u snapd" to do? there is no verb, maybe "journalctl -u snapd"?
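The symptom Chipaca describes (empty output without sudo, real output with it) is what journald's group-based access control looks like from the outside. A quick self-check, assuming the usual adm / systemd-journal groups that journald traditionally grants read access to:

```shell
#!/bin/sh
# Diagnostic sketch (not part of snapd): check whether the current user
# is in one of the groups that normally get system-journal read access.
if id -nG | tr ' ' '\n' | grep -qx -e adm -e systemd-journal; then
    journal_access="group-ok"
else
    journal_access="needs-sudo"
fi
echo "$journal_access"
```

If this prints `needs-sudo`, plain `journalctl -u snapd` will show only the (empty) per-user journal, matching the "no entries" behaviour in the log above.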
[11:24] (it doesn't on my upgraded system)
[11:25] xnox,
[11:25] seb128: 'journalctl -u snapd' says no entries
[11:25] seb128: 'sudo journalctl -u snapd' works
[11:25] seb128, it depends which groups the person is in. and what "new install" is - container, desktop, server?
[11:25] xnox, I think people see the problem on disco desktop/ubiquity install
[11:25] one does need to be like in sudo group, or in adm, or somewhere like that.
[11:25] hmmm
[11:25] on desktop, the first user should be able to read all the logs i think.
[11:26] what group is it that you need?
[11:26] checking
[11:26] i believe it was `adm` group, but that's from memory, looking at code
[11:26] Chipaca, what groups is your user in? does it include adm?
[11:28] and $ sudo getfacl /var/log/journal/
[11:30] this has possibly regressed.... because i don't see us configuring the adm group
[11:30] Chipaca, ^
[11:31] user is in adm cdrom sudo dip plugdev lpadmin sambashare
[11:33] seb128: https://pastebin.ubuntu.com/p/6rg6tGVMJY/
[11:34] zyga: fun observation - docker.io deb package has a shell defer in debian/tests/common
[11:34] mvo: oh, would you mind pasting it?
[11:34] I wonder how they implemented it
[11:34] zyga: common
[11:35] systemd-journal systemd-timesync systemd-network systemd-resolve systemd-coredump
[11:35] zyga: http://paste.ubuntu.com/p/dw3kKtzPJq/
[11:35] huh, xenial also has 'em
[11:35] mvo: interesting, thank you!
[11:35] Chipaca, xnox, on my upgraded system there is a "group:adm:r-x" extra line compared to that
[11:35] Chipaca, can you file it on https://bugs.launchpad.net/ubuntu/+source/systemd/+filebug ?
[11:36] I guess something for xnox to poke at
[11:36] Chipaca, and like doing $ sudo setfacl -nm g:adm:rx,d:g:adm:rx /var/log/journal/
[11:36] doesn't seem to help
[11:36] oh, maybe i need to restart journald
[11:36] Hi jdstrand, how is it going? I am officially appointed to strict confinement of microk8s! Going through the instructions/patches gathered, i am facing problems.
I am now focusing on kube-proxy. Can you spare some time to get me up to speed with your work?
[11:37] xnox: do you need the bug?
[11:37] Chipaca, yes, please
[11:37] what's the name for the user that gets created on install?
[11:37] jdstrand: I see you have pushed the interfaces since early November but I wonder if i am missing something
[11:37] 'default user'?
[11:38] that should be good enough
[11:38] I don't think we have a defined wording, default/first/...
[11:39] Chipaca, can you fix it with:
[11:39] jdstrand: this is how kube-proxy is failing: https://pastebin.ubuntu.com/p/cN3sK5gpX8/ although I request the iptables proxy setup it falls back to userspace
[11:39] $ sudo setfacl -R -nm g:adm:rx,d:g:adm:rx /var/log/journal
[11:39] ?
[11:41] mwhudson: did anything in the go snap change around 2019-04-03?
[11:41] willcooke, ^ in case you didn't follow, snapd logging works, journal just regressed and the default user doesn't have the permission to read it without using sudo (workaround, 'sudo journalctl -u snapd' if you need some log)
[11:42] ddstreet: I ran autopkgtest a couple of times on my machine for cosmic, no luck reproducing the issue so far. trying with -smp 1 now just to see if that makes a difference
[11:43] mvo: there was a point release around then i think?
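The setfacl workaround proposed above can be exercised safely on a scratch directory before touching the real journal: same flags, with `/var/log/journal` swapped for a temp dir, and a guard in case the acl tools or the filesystem don't support it (this is a sketch for verification, not the actual fix that landed in systemd):

```shell
#!/bin/sh
# Reproduce the ACL workaround from the discussion on a throwaway
# directory: grant group adm read/search, both directly and as a
# default (inherited) entry, then verify the entry took with getfacl.
dir=$(mktemp -d)
if command -v setfacl >/dev/null 2>&1 &&
   setfacl -nm g:adm:rx,d:g:adm:rx "$dir" 2>/dev/null &&
   getfacl --absolute-names "$dir" 2>/dev/null | grep -q '^group:adm:r-x'; then
    acl_status="present"
else
    acl_status="unsupported"
fi
rm -rf "$dir"
echo "$acl_status"
# On the affected install the real invocation (needs root) would be:
#   setfacl -R -nm g:adm:rx,d:g:adm:rx /var/log/journal
```

Note that journald applies such ACLs to new journal files; as the log observes, restarting journald may be needed for existing files to pick the change up.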
[11:43] ddstreet: also on our side nothing should change, snapd bundles all go deps so it must be some package or other change, I wish there was a way to query what packages changed
[11:44] mvo: ah no, that was more recent, there were updates on 2019-03-14 and 2019-04-08
[11:44] mwhudson: it's probably just coincidence, I'm just looking at some autopkgtest issue and it seems to have started around this time (and we use the go snap during adt to build spread - but probably unrelated, sorry, I'm stabbing a bit in the dark right now)
[11:45] mwhudson: cool, thank you
[11:45] * mvo rules that out then
[11:48] xnox: yes
=== ricab is now known as ricab|lunch
[11:51] xnox: seb128: https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1824342
[11:51] Bug #1824342: in 19.04, default user cannot access system journal
[11:51] hth
[11:51] Chipaca, thx!
[12:08] Chipaca, seb128 - this smells like a regression in the systemd-tmpfiles utility implementation... bisecting.
[12:15] popey: willcooke: figured out #1824188
[12:15] Bug #1824188: Software tab is empty on clean 19.04 install
[12:19] interesting how 19.04 lets you log in before seed.loaded is done
[12:20] mvo: is that expected?
[12:21] it's a bit unfortunate that snapd is not ready by the time the graphical session starts
[12:22] seb128: well, the service that waits for seeded does have a WantedBy=multi-user.target
[12:22] seb128: so it shouldn't
[12:24] ah but it doesn't have a Before/After thing?
[12:32] mvo: WDYT of having snapd.seeded.service have Before=multi-user.target? (it already has WantedBy=)
[12:33] seb128: that would slow down first boot considerably, though, which i'm sure would make xnox happy
[12:34] Chipaca, allowing login is unrelated to reaching multi-user.target.
[12:34] seb128, could we have g-i-s wait until snapd is done seeding?
[12:34] xnox: oh? oh.
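For context on the exchange above: `WantedBy=` only pulls a unit into a target, it does not order anything; ordering needs an explicit `Before=`/`After=`. The change Chipaca floats would look roughly like this — a sketch, not the shipped snapd.seeded.service, with the `Before=` line being the proposed addition:

```ini
# snapd.seeded.service -- sketch only, names and ExecStart assumed
[Unit]
Description=Wait until snapd is fully seeded
After=snapd.service
# Proposed addition: make multi-user.target wait for seeding.
# Trade-off per mvo: this would slow down first boot considerably.
Before=multi-user.target

[Service]
Type=oneshot
ExecStart=/usr/bin/snap wait system seed.loaded

[Install]
WantedBy=multi-user.target
```

As xnox then points out, login is gated separately from multi-user.target anyway, which is why this alone would not have fixed the g-i-s race.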
there was a separate job that removes the nologin flag
[12:34] willcooke, and have your desktop sitting there doing nothing for like 3 min on slow systems?
[12:35] it's quite early, in like either logind or getty starts
[12:35] willcooke, I would prefer not
[12:35] seb128, well, the desktop could be up and working, just that g-i-s doesn't start right away;
[12:35] Chipaca, did snapd change/is that a core18/seeding change side effect?
[12:35] s/;/?
[12:35] mvo: hey I need to take my son to the doctor, I'll try to be back for standup
[12:35] seb128: no
[12:35] seb128: sorry, no to the first one
[12:36] we only started having those "things are not ready until seeding is done, which takes a while" issues this cycle it looks like :/
[12:36] seb128: to the second one, maybe? as i say, it's racy, so maybe before you were always not hitting the restart
[12:36] seb128: snapd has always restarted, on ubuntu, after first install of core
[12:36] willcooke, I think it still sucks as a user experience: boot the system, have it clean, start some apps, start working, and then the welcome thing pops up
[12:36] yeah
[12:37] two things
[12:37] 1. retrying would be fine, nothing wrong with it
[12:37] 2.
i don't know what changes with the "these things are installed", but the things _aren't installed_ unless seeding is done, so that logic is bogus
[12:37] yes, it still depends on how long it takes
[12:38] I think the issue is mainly that it takes that long for snapd to be "ready"
[12:38] any way we work around that from the desktop will be suboptimal
[12:38] either we need to delay the welcome screen
[12:38] it's always going to take non-zero time to seed
[12:38] or have it display a spinner for $minutes on the software page
[12:38] so the race will always be there if you don't handle it
[12:38] yes
[12:38] but seconds is fine
[12:39] the desktop takes like 10 seconds+ to load
[12:39] you probably have 30 seconds before a user hits that wizard page
[12:39] well, that's part of the problem
[12:39] it does not hit the query when it hits that wizard page
[12:39] it hits it at the beginning
[12:40] that we can easily change
[12:40] seb128: if you wait on the first page of the wizard until seeding is done (watch it in a terminal), the last page is still blank (with the error in syslog)
[12:40] seb128: could you 'busy' the next button on the before-last page of the wizard?
[12:41] Chipaca, that we should fix yes
[12:41] and then do the query [12:41] it has been an issue for no user in bionic though [12:41] so it's an indications things used to be ready much earlier from the snapd side [12:41] i could reproduce what i've done here in bionic and tell you exactly why it's not been an issue [12:41] it was reliably ready before the desktop load and that stopped being true [12:41] (our code didn't change) [12:42] seb128: it could be earlier _or later_ [12:42] to be clear the 'restart' is not the last thing that happens for seeding [12:42] more like the first thing [12:43] seb128: tbh, it's probably 19.04 being so amazingly fast at starting up [12:43] not sure what changed but you get to log in very fast [12:43] and that's awesome [12:43] but it catches snapd with its pants down [12:43] I see [12:44] Chipaca, willcooke, let's fix g-i-s to do the query when it loads the tab (if that's doable without refactoring the code a lot, we need to check) and see if that's enough [12:44] then we can try to do the "retry if not ready and spin" [12:44] that should be good enough, unless it spins then for 3 minutes [12:44] seb128, sounds like a plan, thanks. Shall I ask ken to look? 
[12:45] willcooke, wfm, we can also ask Andy
[12:45] seb128, oki, let's move to -desktop and work from there
[12:45] thanks Chipaca
[12:45] Chipaca, thx
[12:46] seb128: in this kvm it took snapd 95 seconds from startup to seeded, and the query from g-i-s arrived at 30s
[12:46] so it is a full minute later
[12:46] :(
[12:46] that's a sucky user experience still :/
[12:46] willcooke, ^
[12:47] ouch
[12:47] perhaps if it's not ready we just have to skip that page
[12:47] while being less than ideal, no page is better than a blank page
[12:47] willcooke, well, then we are going to skip it for most users :/
[12:48] cachio: good luck
[12:48] we maybe just need to kill it
[12:48] and do the hint on the launcher icon for gnome-software mp_t recommended
[12:48] lets talk in -desktop
[12:48] but that's not for disco at this point
[12:48] k
[12:48] yeah
[12:50] willcooke: FWIW, http://paste.ubuntu.com/p/8rpHjxBmkq/
[12:50] although that's more for us than for you, it might shed some light on it all
[12:51] Chipaca: 23.382s - Make snap "core" (6673) available to the system, that is interesting and weird
[12:52] pedronis: even more
[12:52] Apr 11 12:03:23 ubuntu snapd[620]: taskrunner.go:426: DEBUG: Running task 7 on Do: Make snap "core" (6673) available to the system
[12:52] Apr 11 12:03:46 ubuntu snapd[620]: task.go:337: DEBUG: 2019-04-11T12:03:46+01:00 INFO Requested daemon restart.
[12:52] PR #11: Publish coverage reports to coveralls
[12:52] pedronis: ^ that's the majority of that time right there
[12:53] pedronis: (those lines are consecutive in the logs)
[12:54] * Chipaca hugs elopio
[12:54] thanks Chipaca
[12:54] willcooke: that's the output of 'sudo snap debug timings 1', which gives you the per-task times of the seed change
[12:55] very handy, thx
[12:55] newer snapd will show even more info
[12:56] Chipaca: anyway, sounds like we need to dig there at some point, because link-snap is not supposed to be a particularly slow one
[12:56] especially for core, which doesn't have services or apps
[13:01] Chipaca: standup?
[13:01] omw
=== ricab|lunch is now known as ricab
[13:44] mborzecki: mvo: btw I did a PR to simplify prepare-image (not high prio): https://github.com/snapcore/snapd/pull/6696
[13:44] PR #6696: image: simplify prefer local logic and fixes
[13:45] pedronis: thanks
[13:48] pedronis: thx, will review
[13:59] mvo: the change that creates .../aux also changed the postrm rm of cache to -r
[13:59] mvo: but if you have a newer snapd and use an older postrm, it'll likely break like that
[14:02] mvo: that is a895e537c55a350af30250e5bedc1b16e0c095ab, #6034, which is in 2.38
[14:02] PR #6034: many: save media info when installing, show it when listing
[14:21] kjackal: hey, did you connect the kubernetes-support interface? the snap isn't allowed to load modules itself, but kubernetes-support tells snapd to load those ip_* modules.
you also need to plug and connect firewall-control (for the nf_* modules, though that should autoload)
[14:22] kjackal: additionally, look in journal logs for security policy violations
[14:23] ah, I probably missed that
[14:24] jdstrand: hey
[14:24] jdstrand: having fun with getline
[14:24] it's a peculiar beast
[14:27] Chipaca: i can reproduce the "make snap core available" taking 23s issue on a disco install; interestingly it looks all fine if i start with a clean state on my 18.04 test box, does that match your observations?
[14:27] kjackal: yeah, when you do an unasserted install, you have to manually connect all the interfaces. once in the store we can issue a snap declaration that autoconnects them
[14:28] pstolowski: i have not tested starting without a seed, if that's what you mean
[14:28] zyga: interesting. note, this isn't a regression (the previous behavior was the same afaics), so a followup pr would be fine (unless you think otherwise)
[14:28] yes
[14:28] kernel plays ball so it's okay
[14:28] but I want to fix it anyway
[14:29] * jdstrand nods
[14:29] I may split this depending on the size
[14:29] I started by adding sc_error support for mountinfo to really know what's wrong
[14:29] parsing with scanf is only good for programming interviews and programming contests
[14:34] Chipaca: hi
[14:35] I think the issue with the resume for the .partial file happens only under some conditions
[14:35] I could reproduce it many times but on a clean environment it doesn't happen
[14:36] Chipaca: then, could you please take a look at this one #6694? thanks
[14:36] PR #6694: tests: improve how snaps are cached
[14:38] cachio: i need to see the conditions :-)
[14:44] mvo: is it ok if we start testing snapd on 19.04 on travis?
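On the mountinfo point above: the format has a variable number of optional fields before a lone `-` separator, which is why fixed-position scanf parsing is fragile and line-wise getline parsing is the safer approach in the C code. A shell illustration of the field layout (the snap-confine fix itself is C; the sample line here is a typical mountinfo entry, not taken from the log):

```shell
#!/bin/sh
# Parse one mountinfo line: field 5 is the mount point; the filesystem
# type is the first field after the "-" separator, whose position
# varies with the optional fields (here "shared:1").
line='26 20 8:1 / / rw,relatime shared:1 - ext4 /dev/vda1 rw'
mount_point=$(printf '%s\n' "$line" | awk '{print $5}')
fstype=$(printf '%s\n' "$line" |
    awk '{for (i = 7; i <= NF; i++) if ($i == "-") { print $(i + 1); exit }}')
echo "$mount_point $fstype"
```

A parser that assumes the fs type is always the ninth field breaks as soon as a mount has zero or several optional fields, which is exactly the class of bug better error reporting (sc_error) helps surface.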
[14:52] cachio: yes please
[14:52] Chipaca: yeah, this was exactly the problem, old snapd is purged but fails
[14:55] Chipaca: removing snapd on disco, removing state, unmounting everything, reinstalling snapd and letting it seed all the snaps from /var/lib/snapd/seed works fine (no 23s slowness). it's something during install only
[15:00] pstolowski: I did a pass over #6704, some questions there
[15:00] PR #6704: overlord/devicestate,snapstate: measurements around ensure and related tasks
[15:01] pedronis: ty
[15:17] PR snapd#6708 opened: packaging/ubuntu: enable PIE hardening flags <⛔ Blocked>
[15:21] zyga, Chipaca: I cherry picked 6668 into upstream/2.38 which should hopefully make travis on 2.38 happy again
[15:21] mvo: ack, thank you
[15:23] jdstrand: I decided to split the getline fix into a new branch because it still needs some work to be properly proposable
[15:23] I just pushed the changes you asked for (comments) and will merge when green
[15:24] mvo: any idea why it only sometimes failed?
[15:24] Chipaca, pedronis: the "Make snap core..." slowness comes from regeneration of fc-cache. fc-cache-v6+fc-cache-v7 take at least 17s when i start with a clean fontcache
[15:24] mvo: is it because the aux directory is only sometimes created?
[15:24] pstolowski: aaaah
[15:24] pstolowski: ah, so obvious but unpleasant
[15:25] the installer could do that before the reboot, with some care
[15:26] Chipaca: the installer might come with that cache
[15:26] pre-baked
[15:26] Chipaca: for me it happened before the reboot, i didn't reboot immediately though as i was investigating it
[15:26] pstolowski: wat
[15:26] Chipaca: i'd need to double check to be sure
[15:26] pstolowski: the snapd 'inside' isn't running before the reboot
[15:27] zyga: yes, I think it's a race but again, no time to debug in detail
[15:30] Chipaca: ah yes you're right, i forgot i'm running the snap from the live cd
[15:30] zyga: thanks, I saw.
sounds good to me
[15:37] pedronis: we could maybe run fc-cache-v6 and v7 at the same time instead of in sequence, that could win a little bit?
[15:40] pstolowski: ha, nice idea, especially in parallel
[15:40] unless fc-cache itself is multi-threaded
[15:41] well, 10s vs 20s is still a lot
[15:41] it's good we understand the problem
[15:41] but we should probably talk with the installer people
[15:41] about what's the best path forward
[15:45] pedronis: shall i talk to them (i don't remember who the installer people are though)
[15:45] ?
[15:46] pstolowski, you want foundations' team for the installer
[15:46] seb128: ah, thanks!
[15:49] i'm going to add timings around this area anyway, might be useful to have a complete picture of the first boot experience
[15:50] indeed
[15:58] * cachio lunch
=== pstolowski is now known as pstolowski|afk
[16:03] mborzecki: could you have another look at https://github.com/snapcore/snapd/pull/6643
[16:03] PR #6643: tests: deny ioctl - TIOCSTI with garbage in high bits
[16:03] not sure if it requires more changes
[16:21] pstolowski|afk: yes, do add timings around there
[16:25] hrm, opensuse 15 is failing in travis, at least in 2.38 - is that known?
[16:34] does anyone have the curl command handy to revert a snap using the api?
[16:43] PR snapd#6709 opened: release: 2.38.1
[16:48] mvo: sru for 2.38 is still needed?
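The parallelization pstolowski suggests is plain shell job control. A runnable sketch with simulated commands — `fc_cache_v6`/`fc_cache_v7` stand in for the real fc-cache-v6/fc-cache-v7 invocations, which live inside snapd's core-snap handling:

```shell
#!/bin/sh
# Run two independent cache-regeneration passes concurrently and wait
# for both, instead of running them in sequence. The two functions
# simulate the real fc-cache invocations so the pattern is runnable.
tmp=$(mktemp -d)
fc_cache_v6() { echo "v6 done"; }
fc_cache_v7() { echo "v7 done"; }

fc_cache_v6 > "$tmp/v6" &
pid6=$!
fc_cache_v7 > "$tmp/v7" &
pid7=$!
wait "$pid6" "$pid7"

out=$(cat "$tmp/v6" "$tmp/v7")
echo "$out"
rm -rf "$tmp"
```

Since the two passes write separate cache generations, running them concurrently roughly halves the wall-clock cost (the 10s vs 20s mentioned above), provided fc-cache itself is not already saturating the CPU.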
I see you created the sru for 2.38.1
[16:51] cachio: yes please
[16:51] cachio: well, it's a good question actually
[16:51] cachio: yes
[16:51] cachio: let's do 2.38 as it's already in the queue
[16:52] ok
[16:52] cachio: it will only be a problem for people with low-mem arm devices that use an old 4.4 kernel before 4.4.78 - all ubuntu kernels are fine
[16:53] mvo: nice, I'll start right now
[16:53] thanks
[16:53] thank you
[16:56] mvo: http://people.canonical.com/~ubuntu-archive/proposed-migration/xenial/update_excuses.html#snapd
[16:56] tests didn't run for xenial
[16:57] cachio: aha, I see, poking
[17:00] cachio: I raised it in #ubuntu-release
[17:00] cachio: there is no golang-1.10 for powerpc it seems
[17:00] mvo: ahh
[17:00] ok
[17:02] I'm running the other validations
[17:02] thanks
[17:24] PR snapd#6605 closed: cmd/libsnap,osutil: fix parsing of mountinfo
[17:26] PR snapd#6710 opened: tests: run spread tests on ubuntu 19.04
[17:59] cachio: 2.38.1 is in beta now, please validate
[18:03] mvo: sure
[18:21] PR snapcraft#2532 opened: catkin stage-snaps test: limit to amd64, arm64, and armhf
[18:57] * zyga EODs
[20:03] PR snapcraft#2532 closed: catkin stage-snaps test: limit to amd64, arm64, and armhf
[20:33] PR snapcraft#2533 opened: tests: classic confinement spread tests for and maven