=== epod is now known as luk3yx
=== chihchun_afk is now known as chihchun
=== chihchun is now known as chihchun_afk
ijohnsonplars: canceled the job04:15
plarsijohnson: thanks, feel free to resubmit, just keep that section of the yaml from the example04:15
zygaGood morning05:46
mborzeckizyga: hey06:09
mborzeckimvo: hi06:10
mborzeckimvo: zyga: i'll be out for a while in the morning, sent a message in the forum06:11
mvohey mborzecki06:13
mvomborzecki: no worries, thanks for letting us know06:13
mvozyga: good morning to you as well06:13
zygaHey mvo06:16
zygaMvo some bad news from last evening06:17
zygaMvo: snapd broke in adt06:17
zygaThere is a bug with the details06:17
zygaI’m with the dog outside so no links, sorry06:17
zygaJust sort by snapd bug numbers06:17
mvozyga: oh, interesting06:21
mvozyga: reading now, we need to investigate06:23
mvohaving snap changes output would be good06:24
zygamvo: I can look back home06:26
mvozyga: it really looks a bit mysterious, no trace of core there06:29
mvozyga: that sounds more like a bug in our testsuite06:30
dot-tobiasgood morning!06:35
zygamvo: note that it happens outside of the test suite06:38
zygaIt affects docker, for example06:39
zygaHey dot-tobias :-)06:41
dot-tobiasHi zyga 😊06:41
mvozyga: oh, hmmmmm06:42
=== chrisccoulson_ is now known as chrisccoulson
zygaI'm back home; I'll make coffee and see what I can find06:47
zygaback from breakfast07:11
=== pstolowski|afk is now known as pstolowski
zygagood morning pawel!07:12
mvohey pstolowski07:15
mvo6706 needs a review07:15
zygamvo: doing07:18
mvozyga: the indent in 6706 was added to make it easier to read, is that bad in some way?07:44
zygait looks like there is an extra space there07:44
zygaperhaps tabs vs spaces?07:45
mvozyga: it looks like it's just strange formatting of the diff, let me double-check with side-by-side view07:45
mvozyga: it looks ok here (unless I miss something), the first "ifneq" is unchanged, maybe this is why it looks strange07:46
zyga https://usercontent.irccloud-cdn.com/file/sqaMmc5M/Screenshot%202019-04-11%20at%2009.47.03.png07:47
zygadoes the line with the + have a space up front?07:47
zygamvo: the logic is wrong07:48
zygayou want ifeq, not ifneq07:48
zygabecause when you have two ifneq lines, one will always match07:48
zygaah, I see, they *are* nested07:48
zygamvo: let me suggest something07:48
mvozyga: hm, let me look. if that's so then the test is also broken07:49
mvozyga: I can change it to use "dpkg-architecture -qDEB_TARGET_ARCH_BITS"07:52
zygamvo: added one more comment07:52
zygamvo: +1 on that idea07:52
zygamvo: on both the test and the build rules07:52
mvozyga: ok, let me look at this07:53
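For context, the build-rule logic being discussed can be sketched in shell. This is a minimal sketch, not the actual debian/rules change: the real rules file would call dpkg-architecture directly, and the stubbed BITS value and 32-bit condition here are assumptions based on mvo's suggestion above.

```shell
# Stand-in for: BITS=$(dpkg-architecture -qDEB_TARGET_ARCH_BITS)
BITS=32

# Branch on equality (ifeq-style) rather than chaining ifneq tests,
# which is the source of the confusion discussed above.
if [ "$BITS" = "32" ]; then
    echo "disabling -buildmode=pie (32-bit target)"
else
    echo "keeping -buildmode=pie"
fi
```

Testing the word size once and branching on equality avoids the "one of two ifneq lines always matches" trap.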
zygamvo: we have one more bug report https://bugs.launchpad.net/snapd/+bug/182424207:56
zygaperhaps something that warrants an SRU?07:57
mupBug #1824242: snapd can't be purged with latest AWS Xenial AMI <cloud-images:New> <snapd:New> <https://launchpad.net/bugs/1824242>07:57
mvozyga: yeah, if that is missing from 2.38 we definitely need to add the cleanup in 2.38.108:00
mvozyga: 2.38.1 will also include 670608:00
zygasorry for the busy morning08:00
zygaI am looking at the adt issue now08:00
mvozyga: thanks, +108:02
mvoyeah, 2.38 has the right "rm -rf" call08:03
zygalet's update the bug to reflect that 2.38 fixes it08:03
mvozyga: I just did that08:04
pedronismvo: hi, I added this card: https://trello.com/c/tx49EL3F/221-add-black-box-test-testing-memory-mappings-mmap-sizes-and-max-resident-memory-against-decided-limits-budget-for-snapd-and-snap-i08:09
willcookeHi gang.  We're suddenly seeing this bug:  https://bugs.launchpad.net/ubuntu/+source/gnome-initial-setup/+bug/182418808:12
mupBug #1824188: Software tab is empty on clean 19.04 install <amd64> <apport-bug> <disco> <rls-dd-incoming> <gnome-initial-setup (Ubuntu):Confirmed> <https://launchpad.net/bugs/1824188>08:12
willcookeGNOME Initial Setup is getting a "connection reset" from snapd when trying to get the list of software for promotion at the end of set up during first login08:12
willcookeCould you take a look?08:12
willcookeor lend a hand08:12
zygahey willcooke08:13
seb128" Failed to get featured snaps: Failed to read from snapd: Error receiving data: Connection reset by peer"08:13
zygaI think we will have to, there were some more 19.04 reports coming in yesterday08:13
zygaI'll prep a VM as soon as I'm done looking at ADT issues08:13
willcookethanks zyga08:13
seb128I had snap list failing also yesterday, but disk was full so maybe a side effect of that08:13
willcookeI'll add some more logs to the bug08:13
seb128thx zyga08:13
zygathank you!08:13
mvozyga: I updated 6706 - anything new on the ADT issue?08:38
zygamvo: not yet, pulling many things at the same time makes my link slow; just constructed adt image08:40
mvozyga: ok08:41
mvozyga: it was specifically cosmic, right?08:41
zygaI just spawned the test08:41
zygawe'll know shortly08:41
mvohey mborzecki - welcome back08:49
mvomborzecki: want to look at 6706? should be easy08:49
zygaI love hand-holding adt08:51
zyganothing works in that thing08:51
zygaobviously it cannot talk to the network08:51
* zyga is debugging08:51
mborzeckimvo: sure, looking08:52
mborzeckiah that's the PIE thing08:52
zygamvo: do you know how to spawn adt in qemu with bridged network or something that is not just broken?08:54
zygaI used08:55
zygaautopkgtest -s -U snapd_2.38+18.10.dsc -- qemu ./autopkgtest-cosmic-amd64.img08:55
zygathis doesn't work08:55
mvozyga: oh, thats strange, that should work08:55
zygalet me grab some logs08:56
mborzeckimvo: posted some comments there09:08
mborzeckimvo: basically, i think it would make sense to include s-c in the test and make sure it's built with PIE, because it's C, so it's bad and all that :)09:08
mborzeckizyga: is snap confine poking $SNAP_DATA now? see a new denial in the selinux branch09:48
zygamborzecki: details please09:48
mborzeckitype=AVC msg=audit(1554975937.636:129): avc:  denied  { getattr } for  pid=1099 comm="snap-confine" path="/var/snap/test-snapd-service/x1" dev="vda1" ino=393657 scontext=system_u:system_r:snappy_confine_t:s0 tcontext=system_u:object_r:snappy_var_t:s0 tclass=dir permissive=109:48
mborzeckizyga: ^09:48
mborzeckiha, maybe it's the cwd changes in snap-confine09:50
zygamborzecki: so, snap-confine always did that09:51
mvomborzecki: sounds good, go for it09:51
mvomborzecki: or in a separate PR, no strong opinion09:51
zygamborzecki: perhaps it is a combination of two things:09:51
zygathis being a service09:51
mborzeckizyga: yes, fstat probably09:51
zygaso HOME is set to $SNAP_DATA09:51
zygaand the cwd changes09:51
zygasince apparmor does not mediate fstat at all09:52
zygait was not showing up09:52
mborzeckizyga: i see we set WorkingDirectory for generated services to $SNAP_DATA09:52
zygawhy did it not show up in the selinux test?09:52
mborzeckiso that's probably it09:52
zygamborzecki: I agree09:52
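The denial lines up with how snapd generates service units: WorkingDirectory points at the snap's data directory, so snap-confine's cwd ends up under /var/snap. A rough sketch of such a generated unit follows; the paths and ExecStart line are illustrative, not the exact unit snapd writes.

```shell
# Write a hypothetical generated unit to a scratch file and pull out the
# WorkingDirectory line that triggers the getattr on $SNAP_DATA.
unit=$(mktemp)
cat > "$unit" <<'EOF'
[Service]
ExecStart=/usr/bin/snap run test-snapd-service
WorkingDirectory=/var/snap/test-snapd-service/x1
EOF
wd=$(grep '^WorkingDirectory=' "$unit")
echo "$wd"
rm -f "$unit"
```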
zygamborzecki: is the selinux test capable of seeing the denials now?09:53
zygamborzecki: was it one test or a restore-time check in all tests?09:53
mborzeckizyga: interesting how this test will be able to detect this :)09:53
mborzeckizyga: it's the selinux-clean test, specifically targeted to catch any denials that may come up09:54
zygaI see09:54
zygaperhaps we should move the check to post-restore stage09:54
zygalike we do with apparmor, I believe09:54
mborzeckizyga: yeah, seems like we'll be able to update the policy proactively now, rather than waiting for bug reports in rhbz09:55
zygapopey: on the 19.04 opengl issue, as you know I asked for some help on twitter and got a very useful response09:59
popeyGood to hear twitter isn't just for nazis :D10:00
zygapopey: I will look at what the situation is inside a virtual 19.04 system in the context of a separate issue that was raised by willcooke today; after that I should be able to make some progress towards understanding the cause of the opengl regression10:00
popeyThanks for letting me know!10:00
zygapopey: arguably that was even more useful than insta-ordering a gtx on amazon10:01
zygabecause I got feedback from more diverse collection of systems10:02
popeyI mean, okay, but if we had some QA on these GTX systems *before* I found the issue, that argument doesn't hold.10:02
zygaWe don't do QA like that, I'm afraid10:08
zygabut we also know about the shortcomings of the solution we have now10:08
zygaand we have the improvements in the roadmap10:08
zygafor this cycle (though arguably we will be late)10:08
zygaI'm saying that the wheels started spinning already10:09
zygamborzecki: some failures from autopkgtest in qemu:10:10
zygaperhaps insufficient mocking?10:10
zygarunning more tests10:11
mborzeckihm maybe10:12
mborzeckibtw. there's no 18.04-32 in the spread suite?10:12
mborzeckizyga: the nvidia card is in an older system i have here, i can look into 19.04 later on if i manage to install it on a usb stick10:15
zygamborzecki: I have useful data already, I need to process it first10:15
mborzeckizyga: ah, that's fine then10:15
mborzeckizyga: has anything changed with the libs again?10:16
zyganot sure yet10:16
pstolowskipedronis: hey, thoughts on snap debug timings --ensure=... output: https://paste.ubuntu.com/p/dXZqyDxVG6/ ?10:17
pedronispstolowski: thx, need to think a bit. I will look at the PR producing the data today10:19
zygaChipaca: hey, are you looking at the 19.04 empty software tab bug?10:41
Chipacazyga: ye10:41
zygagreat, thanks!10:41
Chipacawhy is journalctl still broken on 19.04?10:48
Chipacastuff is going to /var/log/syslog skipping journalctl entirely10:49
Chipacathis is a breaking change :-(10:49
zygathat's odd10:49
mborzeckibtw. 19.04 beta ok to use for install or should i rather try a daily one?10:50
zygaI downloaded daily but did not install it yet10:50
mvozyga: anything new on the ADT issue? I ran it in qemu 3 times and no failure :/10:50
zygasame here, more iterations10:50
zygaI updated the bug report a moment ago10:50
Chipacawillcooke: why is journalctl not showing logs on 19.04, do you know?10:52
willcookeChipaca, we had this problem before when we were trying to work out why it wasn't installing some snaps. I don't think we ever got to the bottom of it. It seems that it was just snapd that wasn't logging properly, even with the logging turned right up. Was it that it was just taking a while to get flushed to the log?10:55
Chipacawillcooke: the logs are in /var/log/syslog, immediately, but journalctl always reports empty10:56
willcookeseb128, any ideas? ^10:56
mupPR snapd#6706 closed: ubuntu: disable -buildmode=pie on armhf to fix memory issue <⚠ Critical> <Created by mvo5> <Merged by mvo5> <https://github.com/snapcore/snapd/pull/6706>11:04
mvoI cherry picked 6706 now for 2.3811:05
zygamvo: will you also tackle the PIE of snap-confine?11:05
mvozyga: not right now, also not as part of 2.38.1 - no time today. but feel free to grab it (or maybe mborzecki)11:06
seb128Chipaca, willcooke, I guess snapd doesn't use the proper logging API / could integrate better with the journal? a problem on the snapd side for sure11:11
zygaseb128: how does that explain difference between 19.04 and prior releases?11:11
Chipacaseb128: ? how can it be "a problem on the snapd side for sure" when the exact whatever-it-is-snapd-does works on anything other than 19.04?11:11
seb128Chipaca, like you are sure it works on fedora 30 or other distros with the same systemd stack?11:12
mvoddstreet: re failing autopkgtest - do you have a machine around that you can ssh into that has the failure? I would love to see the "snap changes" output of it11:13
seb128zyga, I don't know what snapd is doing, maybe it's relying on old syslog and should integrate better with systemd journal?11:13
seb128well, I guess it could be an issue on the systemd side11:13
mvoddstreet: fwiw, it looks like things started to fail at 2019-04-0311:13
zygaseb128: I see, perhaps the question is: how should snapd log?11:13
seb128it should integrate to journalctl11:13
seb128it could be that it's doing things in a "legacy" way and that regressed in the journal side so a bug11:14
seb128anyway, I think that needs looking at by someone who understands the snapd logging code and what is done exactly11:14
ddstreetmvo, let me see if i can repro it in a system on the canonical vpn11:14
Chipacasnapd, and all snaps that have daemons, rely on the behaviour that a service printing to stdout/stderr ends up in the journal11:14
Chipacaseb128: ^11:15
seb128Chipaca, snapd re-exec itself? could that confuse the systemd units or something11:15
seb128Chipaca, zyga, in any case it's an issue specific to snapd so unsure what it's doing differently but imho the best person to debug that is someone who knows what snapd is doing exactly11:16
seb128or maybe talk to foundation if you believe the issue is on the journal side11:16
seb128but they are probably going to need a testcase/details on what snapd is doing and how it's different from other services that work fine11:17
Chipacaok, the problem is different11:17
Chipacaseb128: 'journalctl -u snapd' says no entries11:17
Chipacaseb128: 'sudo journalctl -u snapd' works11:18
Chipacaso, it is a behaviour change in journalctl, but not as bad as i thought it was11:18
seb128ah, so logging is working, good :)11:19
zygaChipaca: perhaps it's just permission difference11:20
zygaChipaca: on many distributions journalctl is locked down11:20
Chipacazyga: yes, but you get an error when you try11:20
Chipacanot a 'no entries'11:20
zygaChipaca: is it possible that you just reach the per-user instance of journalctl ?11:20
Chipaca¯\_(ツ)_/¯ i have no idea11:21
Chipacai'm trying to debug wacky interactions with snapd elsewhere in this11:21
seb128journalctl -u snapd works on disco here, without sudo11:21
Chipacanot figure out why systemd changed the rules again11:21
seb128but it's not a new install11:21
seb128so maybe some permission is not correctly set or something11:21
seb128Chipaca, does it work for other units? like gdm or upower?11:22
Chipacaseb128: no11:22
seb128Chipaca, and "systemctl --system -u snapd"?11:23
Chipacainsufficient permissions11:23
seb128so that's the issue11:23
seb128probably worth talking to xnox about and/or reporting on launchpad against systemd11:23
seb128xnox, is it normal that journalctl on disco new installs require sudo to access the system logs?11:24
xnoxwhat do you expect for "systemctl --system -u snapd" to do? there is no verb, maybe "journalctl -u snapd" ?11:24
seb128(it doesn't on my upgraded system)11:24
seb128<Chipaca> seb128: 'journalctl -u snapd' says no entries11:25
seb128 seb128: 'sudo journalctl -u snapd' works11:25
xnoxseb128, it depends which groups the person is in, and what "new install" is - container, desktop, server?11:25
seb128xnox, I think people see the problem on disco desktop/ubiquity install11:25
xnoxone does need to be like in sudo group, or in adm, or somewhere like that.11:25
xnoxon desktop, the first user should be able to read all the logs i think.11:25
seb128what group is that you need?11:26
xnoxi believe it was `adm` group, but that's from memory, looking at code11:26
seb128Chipaca, what groups is your user in? does it include adm?11:26
xnoxand $ sudo getfacl /var/log/journal/11:28
xnoxthis has possibly regressed.... because i don't see us configuring adm group11:30
seb128Chipaca, ^11:30
Chipacauser is in adm cdrom sudo dip plugdev lpadmin sambashare11:31
Chipacaseb128: https://pastebin.ubuntu.com/p/6rg6tGVMJY/11:33
mvozyga: fun observation - docker.io deb package has a shell defer in debian/tests/common11:34
zygamvo: oh, would you mind pasting it?11:34
zygaI wonder how they implemented it11:34
mvozyga: common11:34
Chipacasystemd-journal systemd-timesync systemd-network systemd-resolve systemd-coredump11:35
mvozyga: http://paste.ubuntu.com/p/dw3kKtzPJq/11:35
Chipacahuh, xenial also has 'em11:35
zygamvo: interesting, thank you!11:35
seb128Chipaca, xnox, on my upgraded system there is a "group:adm:r-x" extra line compared to that11:35
seb128Chipaca, can you file it on https://bugs.launchpad.net/ubuntu/+source/systemd/+filebug ?11:35
seb128I guess something for xnox to poke at11:36
xnoxChipaca, and like doing $ sudo setfacl -nm g:adm:rx,d:g:adm:rx /var/log/journal/11:36
xnoxdoesn't seem to help11:36
xnoxoh, maybe i need to restart journald11:36
kjackalHi jdstrand how is it going? I am officially appointed to strict confinement of microk8s! Going through the instructions/patches gathered, I am facing problems. I am now focusing on kube-proxy. Can you spare some time to get me up to speed with your work?11:36
Chipacaxnox: do you need the bug?11:37
xnoxChipaca, yes, please11:37
Chipacawhat's the name for the user that gets created on install?11:37
kjackaljdstrand: I see you have pushed the interfaces since early November but I wonder if i am missing something11:37
Chipaca'default user'?11:37
seb128that should be good enough11:38
seb128I don't think we have a defined wording, default/first/...11:38
xnoxChipaca, can you fix it with:11:39
kjackaljdstrand: this is how kube-proxy is failing: https://pastebin.ubuntu.com/p/cN3sK5gpX8/ although I request the iptables proxy setup, it falls back to userspace11:39
xnox$ sudo setfacl -R -nm g:adm:rx,d:g:adm:rx /var/log/journal11:39
mvomwhudson: did anything in the go snap change around 2019-04-03?11:41
seb128willcooke, ^ in case you didn't follow, snapd logging works, journal just regressed and the default user doesn't have the permission to read it without using sudo (workaround, 'sudo journalctl -u snapd' if you need some log)11:41
mvoddstreet: I ran autopkgtest a couple of times on my machine for cosmic no luck to reproduce the issue so far. trying with -smp 1 now just to see if that makes a difference11:42
mwhudsonmvo: there was a point release around then i think?11:43
mvoddstreet: also on our side nothing should have changed, snapd bundles all go deps so it must be some package or other change, I wish there was a way to query what packages changed11:43
mwhudsonmvo: ah no that was more recent, there were updates on 2019-03-14 and 2019-04-0811:44
mvomwhudson: it's probably just coincidence, I'm just looking at some autopkgtest issue and it seems to have started around this time (and we use the go snap during adt to build spread - but probably unrelated, sorry, I'm stabbing a bit in the dark right now)11:44
mvomwhudson: cool, thank you11:45
* mvo rules that out then11:45
Chipacaxnox: yes11:48
=== ricab is now known as ricab|lunch
Chipacaxnox: seb128: https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/182434211:51
mupBug #1824342: in 19.04, default user cannot access system journal <systemd (Ubuntu):New> <https://launchpad.net/bugs/1824342>11:51
seb128Chipaca, thx!11:51
xnoxChipaca, seb128 - this smells like a regression in systemd-tmpfiles utility implementation... bisecting.12:08
Chipacapopey: willcooke: figured out #182418812:15
mupBug #1824188: Software tab is empty on clean 19.04 install <amd64> <apport-bug> <disco> <rls-dd-incoming> <gnome-initial-setup (Ubuntu):Confirmed> <snapd (Ubuntu):Invalid> <https://launchpad.net/bugs/1824188>12:15
Chipacainteresting how 19.04 lets you log in before seed.loaded is done12:19
Chipacamvo: is that expected?12:20
seb128it's a bit unfortunate that snapd is not ready by the time the graphical session starts12:21
Chipacaseb128: well, the service that waits for seeded does have a WantedBy=multi-user.target12:22
Chipacaseb128: so it shouldn't12:22
Chipacaah but it doesn't have a Before/After thing?12:24
Chipacamvo: WDYT of having snapd.seeded.service have Before=multi-user.target ? (it already has WantedBy=)12:32
Chipacaseb128: that would slow down first boot considerably, though, which i'm sure would make xnox happy12:33
xnoxChipaca, allowing login is unrelated to reaching multi-user.target.12:34
willcookeseb128, could we have g-i-s wait until snapd is done seeding?12:34
Chipacaxnox: oh? oh.12:34
xnoxthere was a separate job that removes the nologin flag12:34
seb128willcooke, and have your desktop sitting there doing nothing for like 3 min on slow systems?12:34
xnoxit's quite early, in like either logind or getty starts12:35
seb128willcooke, I would prefer not12:35
willcookeseb128, well, the desktop could be up and working, just that g-i-s doesn't start right away;12:35
seb128Chipaca, did snapd change/is that a core18/seeding change side effect?12:35
cachiomvo: hey I need to take my son to the doctor, I'll try to be back for standup12:35
Chipacaseb128: no12:35
Chipacaseb128: sorry, no to the first one12:35
seb128we only started having those "things are not ready until seeding is done, which takes a while" issues this cycle it looks like :/12:36
Chipacaseb128: to the second one, maybe? as i say, it's racy, so maybe before you were always not hitting the restart12:36
Chipacaseb128: snapd has always restarted, on ubuntu, after first install of core12:36
seb128willcooke, I think it still sucks as a user experience: boot the system, have it clean, start some apps, start working, and then the welcome thing pops up12:36
Chipacatwo things12:37
Chipaca1. retrying would be fine, nothing wrong with it12:37
Chipaca2. i don't know what changes with the "these things are installed", but the things _aren't installed_ unless seeding is done, so that logic is bogus12:37
seb128yes, still depends on how long it takes12:37
seb128I think the issue is mainly that it takes that long for snapd to be "ready"12:38
seb128any way we work around that from the desktop will be suboptimal12:38
seb128either we need to delay the welcome screen12:38
Chipacait's always going to take non-zero time to seed12:38
seb128or have it display a spinner for $minutes on the software page12:38
Chipacaso the race will always be there if you don't handle it12:38
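The "handle the race" point can be sketched as a generic retry loop. This is illustrative only: check_seeded is a hypothetical stand-in for whatever query g-i-s actually makes to snapd, and it is stubbed here to succeed on the third attempt so the sketch is self-contained.

```shell
# Retry a command up to $1 times with a short delay between attempts.
retry() {
    local tries=$1; shift
    local i
    for i in $(seq 1 "$tries"); do
        "$@" && return 0
        sleep 0.1
    done
    return 1
}

# Stand-in check that succeeds on the third attempt, simulating seeding
# finishing while we poll.
n=0
check_seeded() { n=$((n+1)); [ "$n" -ge 3 ]; }

retry 5 check_seeded && echo "seeded after $n attempts"
```

Retrying with a bounded backoff turns the first-boot race into, at worst, a short wait instead of a blank page.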
seb128but if it's seconds, it's fine12:38
seb128desktop takes like 10 seconds+ to load12:39
seb128you probably have 30 seconds before a user hits that wizard page12:39
Chipacawell, that's part of the problem12:39
Chipacait does not hit the query when it hits that wizard page12:39
Chipacait hits it at the beginning12:39
seb128that we can easily change12:40
Chipacaseb128: if you wait on first page of the wizard until seeding is done (watch it in a terminal), the last page is still blank (with the error in syslog)12:40
Chipacaseb128: could you 'busy' the next button on the before-last page of the wizard?12:40
seb128Chipaca, that we should fix yes12:41
Chipacauntil snapd is seeded?12:41
Chipacaand then do the query12:41
seb128it hasn't been an issue for any user in bionic though12:41
seb128so it's an indication things used to be ready much earlier on the snapd side12:41
Chipacai could reproduce what i've done here in bionic and tell you exactly why it's not been an issue12:41
seb128it was reliably ready before the desktop load and that stopped being true12:41
seb128(our code didn't change)12:41
Chipacaseb128: it could be earlier _or later_12:42
Chipacato be clear the 'restart' is not the last thing that happens for seeding12:42
Chipacamore like the first thing12:42
Chipacaseb128: tbh, it's probably 19.04 being so amazingly fast at starting up12:43
Chipacanot sure what changed but you get to log in very fast12:43
Chipacaand that's awesome12:43
Chipacabut it catches snapd with its pants down12:43
seb128I see12:43
seb128Chipaca, willcooke, let's fix g-i-s to do the query when it loads the tab (if that's doable without refactoring the code a lot, we need to check) and see if that's enough12:44
seb128then we can try to do the "retry if not ready and spin"12:44
seb128that should be good enough, unless it spins then for 3 minutes12:44
willcookeseb128, sounds like a plan, thanks.  Shall I ask ken to look?12:44
seb128willcooke, wfm, we can also ask Andy12:45
willcookeseb128, oki, let's move to -desktop and work from there12:45
willcookethanks Chipaca12:45
seb128Chipaca, thx12:45
Chipacaseb128: in this kvm it took snapd 95 seconds from startup to seeded, and the query from g-i-s arrived at 30s12:46
Chipacaso it is a full minute later12:46
seb128that's a sucky user experience still :/12:46
seb128willcooke, ^12:46
willcookeperhaps if it's not ready we just have to skip that page12:47
willcookewhile being less than ideal, no page is better than a blank page12:47
seb128willcooke, well, then we are going to skip it for most users :/12:47
mvocachio: good luck12:48
seb128we maybe just need to kill it12:48
seb128and do the hint on the launcher icon for gnome-software mp_t recommended12:48
willcookelets talk in -desktop12:48
seb128but that's not for disco at this point12:48
Chipacawillcooke: FWIW, http://paste.ubuntu.com/p/8rpHjxBmkq/12:50
Chipacaalthough that's more for us than for you, it might shed some light on it all12:50
pedronisChipaca:  23.382s            -  Make snap "core" (6673) available to the system , that is interesting and weird12:51
Chipacapedronis: even more12:52
ChipacaApr 11 12:03:23 ubuntu snapd[620]: taskrunner.go:426: DEBUG: Running task 7 on Do: Make snap "core" (6673) available to the system12:52
ChipacaApr 11 12:03:46 ubuntu snapd[620]: task.go:337: DEBUG: 2019-04-11T12:03:46+01:00 INFO Requested daemon restart.12:52
mupPR #11: Publish coverage reports to coveralls <Created by elopio> <Merged by chipaca> <https://github.com/snapcore/snapd/pull/11>12:52
Chipacapedronis: ^ that's the majority of that time right there12:52
Chipacapedronis: (those lines are consecutive in the logs)12:53
* Chipaca hugs elopio 12:54
willcookethanks Chipaca12:54
Chipacawillcooke: that's the output of 'sudo snap debug timings 1', which gives you the per-task times of the seed change12:54
willcookevery handy, thx12:55
pedronisnewer snapd will show even more info12:55
pedronisChipaca: anyway sounds like we need to dig there at some point, because link-snap is not supposed to be a particularly slow one12:56
pedronisespecially for core that doesn't have services or apps12:56
pedronisChipaca: standup?13:01
=== ricab|lunch is now known as ricab
pedronismborzecki: mvo: btw I did a PR to simplify prepare-image (not high prio):  https://github.com/snapcore/snapd/pull/669613:44
mupPR #6696: image: simplify prefer local logic  and fixes <Created by pedronis> <https://github.com/snapcore/snapd/pull/6696>13:44
mvopedronis: thanks13:45
mborzeckipedronis: thx, will review13:48
Chipacamvo: the change that creates .../aux also changed the postrm rm of cache to -r13:59
Chipacamvo: but if you have a newer snapd and use an older postrm, it'll likely break like that13:59
Chipacamvo: that is a895e537c55a350af30250e5bedc1b16e0c095ab, #6034, which is in 2.3814:02
mupPR #6034: many: save media info when installing, show it when listing <Squash-merge> <Created by chipaca> <Merged by chipaca> <https://github.com/snapcore/snapd/pull/6034>14:02
jdstrandkjackal: hey, did you connect the kubernetes-support interface? the snap isn't allowed to load modules itself, but kubernetes-support tells snapd to load those ip_* modules. you also need to plug and connect firewall-control (for the nf_* modules, though that should autoload)14:21
jdstrandkjackal: additionally, look in the journal logs for security policy violations14:22
kjackalah I probably missed that14:23
zygajdstrand: hey14:24
zygajdstrand: having fun with getline14:24
zygait's a peculiar beast14:24
pstolowskiChipaca: i can reproduce the "make snap core available" taking 23s issue on a disco install; interestingly it all looks fine if i start with a clean state on my 18.04 test box, does that match your observations?14:27
jdstrandkjackal: yeah, when you do an unasserted install, you have to manually connect all the interfaces. once in the store we can issue a snap declaration that autoconnects them14:27
Chipacapstolowski: i have not tested starting without a seed, if that's what you mean14:28
jdstrandzyga: interesting. note, this isn't a regression (the previous behavior was the same afaics), so a followup pr would be fine (unless you think otherwise)14:28
zygakernel plays ball so it's okay14:28
zygabut I want to fix it anyway14:28
* jdstrand nods14:29
zygaI may split this depending on the size14:29
zygaI started by adding sc_error support for mountinfo to really know what's wrong14:29
zygaparsing with scanf is only good for programming interviews and programming contests14:29
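The pitfall zyga is alluding to: mountinfo lines carry a variable-length run of optional fields terminated by a lone "-", so fixed-position scanf patterns mis-parse lines that have (or lack) those fields. A sketch, scanning for the separator instead, using the sample line from the proc(5) man page:

```shell
# Sample mountinfo line from proc(5); fields 7+ may contain zero or more
# optional fields before the "-" separator that precedes the fs type.
line='36 35 98:0 /mnt1 /mnt2 rw,noatime master:1 - ext3 /dev/root rw,errors=continue'
out=$(echo "$line" | awk '{
    for (i = 7; i <= NF; i++)
        if ($i == "-") {              # end of the optional fields
            print "mountpoint=" $5, "fstype=" $(i+1)
            break
        }
}')
echo "$out"
```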
cachioChipaca: hi14:34
cachioI think the issue with the resume for the .partial file is under some conditions14:35
cachioI could reproduce it many times but on a clean environment it doesn't happen14:35
cachioChipaca: then, could you please take a look at this one #6694 ? thanks14:36
mupPR #6694: tests: improve how snaps are cached <Created by sergiocazzolato> <https://github.com/snapcore/snapd/pull/6694>14:36
Chipacacachio: i need to see the conditions :-)14:38
cachiomvo: is it ok if we start testing snapd on 19.04 on travis?14:44
mvocachio: yes please14:52
mvoChipaca: yeah, this was exactly the problem, old snapd is purged but fails14:52
pstolowskiChipaca: removing snapd on disco, removing state, unmounting everything, reinstalling snapd and letting it seed all the snaps from /var/lib/snapd/seed works fine (no 23s slowness). it's something during install only14:55
pedronispstolowski: I did a pass over #6704 , some questions there15:00
mupPR #6704: overlord/devicestate,snapstate: measurements around ensure and related tasks <Created by stolowski> <https://github.com/snapcore/snapd/pull/6704>15:00
pstolowskipedronis: ty15:01
mupPR snapd#6708 opened: packaging/ubuntu: enable PIE hardening flags <⛔ Blocked> <Created by bboozzoo> <https://github.com/snapcore/snapd/pull/6708>15:17
mvozyga, Chipaca I cherry picked 6668 into upstream/2.38 which should hopefully make travis on 2.38 happy again15:21
zygamvo: ack, thank you15:21
zygajdstrand: I decided to split the getline fix into a new branch because it still needs some work before it's properly proposable15:23
zygaI just pushed the changes you asked for (comments) and will merge when green15:23
zygamvo: any idea why it only sometimes failed?15:24
pstolowskiChipaca, pedronis the "Make snap core..." slowness comes from regeneration of fc-cache. fc-cache-v6+fc-cache-v7 take at least 17s when i start with a clean font cache15:24
zygamvo: is it because the aux directory is only sometimes created?15:24
zygapstolowski: aaaah15:24
pedronispstolowski: ah,  so obvious but unpleasant15:24
Chipacathe installer could do that before the reboot, with some care15:25
zygaChipaca: the installer might come with that cache15:26
pstolowskiChipaca: for me it happened before the reboot, i didn't reboot immediately though as i was investigating it15:26
Chipacapstolowski: wat15:26
pstolowskiChipaca: i'd need to double check to be sure15:26
Chipacapstolowski: the snapd 'inside' isn't running before the reboot15:26
mvozyga: yes, I think its a race but again, no time to debug in detail15:27
pstolowskiChipaca: ah yes you're right, i forgot i'm running the snap from the live cd15:30
jdstrandzyga: thanks, I saw. sounds good to me15:30
pstolowskipedronis: we could maybe run fc-cache-v6 and v7 at the same time instead of in sequence, that could save a little time?15:37
zygapstolowski: ha, nice idea, especially in parallel15:40
zygaunless fc-cache itself is multi-threaded15:40
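pstolowski's idea in shell form. A sketch only: fc_cache_v6/fc_cache_v7 are stubs (sleeps) standing in for the real fc-cache-v6 and fc-cache-v7 binaries named in the discussion.

```shell
fc_cache_v6() { sleep 0.2; echo "v6 done"; }   # stub for fc-cache-v6
fc_cache_v7() { sleep 0.2; echo "v7 done"; }   # stub for fc-cache-v7

# Run both cache generators concurrently and wait for both; wall time
# becomes max(v6, v7) instead of v6 + v7.
fc_cache_v6 & pid6=$!
fc_cache_v7 & pid7=$!
wait "$pid6" "$pid7"
echo "both caches regenerated"
```

Whether this wins anything depends on whether fc-cache is already I/O-bound or multi-threaded, as zyga notes above.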
pedroniswell 10s vs 20s is still a lot15:41
pedronisit's good we understand the problem15:41
pedronisbut we should probably talk with the installer people15:41
pedroniswhat's the best path forward15:41
pstolowskipedronis: shall i talk to them? (i don't remember who the installer people are though)15:45
seb128pstolowski, you want foundations' team for the installer15:46
pstolowskiseb128: ah, thanks!15:46
pstolowskii'm going to add timings around this area anyway, might be useful to have complete picture of first boot experience15:49
* cachio lunch15:58
=== pstolowski is now known as pstolowski|afk
zygamborzecki: could you have another look at https://github.com/snapcore/snapd/pull/664316:03
mupPR #6643: tests: deny ioctl - TIOCSTI with garbage in high bits <Created by zyga> <https://github.com/snapcore/snapd/pull/6643>16:03
zyganot sure if it requires more changes16:03
pedronispstolowski|afk: yes, do add timings around there16:21
mvohrm, opensuse 15 is failing in travis at least in 2.38 - is that known?16:25
mvodoes anyone has the curl command handy to revert a snap using the api?16:34
mupPR snapd#6709 opened: release: 2.38.1 <Created by mvo5> <https://github.com/snapcore/snapd/pull/6709>16:43
cachiomvo: sru for 2.38 is still needed?16:48
cachioI see you created sru for 2.38.116:48
mvocachio: yes please16:51
mvocachio: well, its a good question actually16:51
mvocachio: yes16:51
mvocachio: lets do 2.38 as its already in the queue16:51
mvocachio: it will only be a problem for people with low-mem arm devices that use an old 4.4 kernel before 4.4.78 - all ubuntu kernels are fine16:52
cachiomvo: nice, I'll start right now16:53
mvothank you16:53
cachiomvo: http://people.canonical.com/~ubuntu-archive/proposed-migration/xenial/update_excuses.html#snapd16:56
cachiotests didn't run for xenial16:56
mvocachio: aha, I see, poking16:57
mvocachio: I raised in in #ubuntu-release17:00
mvocachio: there is no golang-1.10 for powerpc it seems17:00
cachiomvo: ahh17:00
cachioI'm running the other validations17:02
mupPR snapd#6605 closed: cmd/libsnap,osutil: fix parsing of mountinfo <Created by zyga> <Merged by zyga> <https://github.com/snapcore/snapd/pull/6605>17:24
mupPR snapd#6710 opened: tests: run spread tests on ubuntu 19.04 <Created by sergiocazzolato> <https://github.com/snapcore/snapd/pull/6710>17:26
mvocachio: 2.38.1 is in beta now, please validate17:59
cachiomvo: sure18:03
mupPR snapcraft#2532 opened: catkin stage-snaps test: limit to amd64, arm64, and armhf <Created by kyrofa> <https://github.com/snapcore/snapcraft/pull/2532>18:21
* zyga EODs18:57
mupPR snapcraft#2532 closed: catkin stage-snaps test: limit to amd64, arm64, and armhf <Created by kyrofa> <Closed by sergiusens> <https://github.com/snapcore/snapcraft/pull/2532>20:03
mupPR snapcraft#2533 opened: tests: classic confinement spread tests for and maven  <Created by sergiusens> <https://github.com/snapcore/snapcraft/pull/2533>20:33

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!