[00:11] <mup> PR snapcraft#1942 closed: Release changelog for 2.39.2 <Created by sergiusens> <Merged by sergiusens> <https://github.com/snapcore/snapcraft/pull/1942>
[01:54] <niemeyer> Why is every test failing with uboot-go now? Something changed behavior recently
[04:17] <elopio> snappy-m-o autopkgtest 1943 bionic:armhf bionic:amd64 bionic:arm64
[04:17] <snappy-m-o> elopio: I've just triggered your test.
[06:03] <mborzecki> morning
[07:27] <mborzecki> arch package updated to 2.31.1
[07:30] <mvo> mborzecki: yay, thank you
[07:31] <mborzecki> mvo: morning :)
[07:31] <mvo> mborzecki: and good morning to you as well :)
[07:31] <brett__> Need some advice on remote parts. I've just built a snap on build.snapcraft.io; more specifically it's intended to be used as a part. I'm now building the snap that will use the above part. So the question is: how does snapcraft find the part when it's building my snap? My understanding is that my part will be on the 'edge' channel until approved. Does that mean I need to switch snapcraft to the edge channel in order to find my part? Al
[07:31] <brett__> to find a local copy of the part?
[07:53] <kalikiana> good morning, snappy
[07:58] <brett__> looks like it must be 9am somewhere in the world
[08:07] <kalikiana> hey brett__
[08:07] <kalikiana> Do you know about the parts wiki?
[08:07] <kalikiana> https://wiki.ubuntu.com/snapcraft/parts
[08:15] <pstolowski> morning!
[08:19] <brett__> I know about the parts wiki, but I assumed that it's controlled.
[08:20] <brett__> I also note that there is only a very small no. of parts visible on the wiki, which makes me think it's not actually the primary parts source.
[08:20] <brett__> my part is still beta so I'm not certain it's ready to be visible publicly
[08:21] <brett__> how can I make it available locally?
[08:22] <brett__> If I can make it available locally to the snap I'm building then I can confirm that it works correctly as a part.
[08:28] <kalikiana> brett__: just include the part in your snap's snapcraft.yaml directly for testing
[08:29] <kalikiana> or you could set SNAPCRAFT_PARTS_URI
[08:29] <kalikiana> (the default value is https://parts.snapcraft.io/v1/parts.yaml)
[08:31] <brett__> I don't understand what you mean by 'just include the part in your snap's snapcraft.yaml'
[08:31] <brett__> do you mean copy each of the sections of the part's yaml into the snap's yaml?
[08:32] <brett__> This is likely to be messy as it's a moderately complex part; also, what about all of the files/scripts?
[08:32] <brett__> If I change the SNAPCRAFT_PARTS_URI, can I point it at a local file system?
[08:34] <brett__> kalikiana - if I point it to a local file system, exactly what should it point to.
[08:35] <brett__> I've been googling the URI but can't find any useful doco.
[08:37] <kalikiana> brett__: see the URL above, that's what contents should look like
[08:41] <kalikiana> brett__: by include it in the yaml what I mean is, you can add it to parts: like a local part. it works the same way. you can do that with any remote part, the syntax is the same.
[08:47] <brett__> kalikiana, sorry but I don't understand what you mean
[08:47] <brett__> a local part defines a plugin to build it
[08:47] <brett__> what plugin do I use when referencing a local part
[08:48] <brett__> also its not clear what you mean by: see the URL above, that's what contents should look like
[08:48] <brett__> clearly the uri points to a url with a 'parts.yaml' file.
[08:49] <kalikiana> brett__: say you have something like this `parts:
[08:49] <kalikiana>   foo:
[08:49] <kalikiana>     plugin: python3
[08:49] <kalikiana>     source: some-github-url`
[08:49] <brett__> I have no idea what a parts.yaml looks like and if it requires a url then that implies I need to set up a private webserver
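For reference, the parts.yaml that SNAPCRAFT_PARTS_URI points at is just a YAML index of part definitions, using the same keys a part would use in snapcraft.yaml. A rough sketch of one entry (the description/maintainer fields, their values, and the plugin choice here are illustrative assumptions, not taken from the real index):

```yaml
# Hypothetical entry in a parts.yaml served at SNAPCRAFT_PARTS_URI.
# The nested keys are ordinary snapcraft part keys.
tomcat-with-ssl:
  description: Tomcat preconfigured with SSL (assumed wording)
  maintainer: Brett Sutton (assumed)
  plugin: dump            # illustrative; use the part's real build plugin
  source: https://github.com/bsutton/tomcat-with-ssl-snap
  source-type: git
```

Such a file could in principle be served locally (e.g. via `python3 -m http.server`) with SNAPCRAFT_PARTS_URI pointed at it for testing, though as discussed below, inlining the part definition directly into the snap's snapcraft.yaml is simpler.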
[08:49] <brett__> why 'python3'
[08:49] <kalikiana> brett__: "foo" is the name of the part. you can use it directly in your snapcraft.yaml
[08:49] <kalikiana> or, you get it from a remote part, where it's instead in the snapcraft.yaml of the remote part
[08:50] <brett__> my part is built with java, or is python3 the 'its a part' plugin
[08:50] <kalikiana> in both cases the definition looks the same
[08:50] <brett__> sorry which definition are you referring to.
[08:50] <kalikiana> brett__: I don't know what your part looks like, this is an example to explain it better :-)
[08:50] <brett__> I current have:
[08:51] <brett__> parts:
[08:51] <brett__>   # build the web app
[08:51] <brett__>   orion-monitor-webapp:
[08:51] <brett__>     plugin: maven
[08:51] <brett__>     source: git@bitbucket.org:sbsutton/orionmonitor.git
[08:51] <brett__>     maven-options:
[08:51] <brett__>       [-DskipTests=true]
[08:51] <brett__>     organize:
[08:51] <brett__>       war/orionmonitor-1.0-SNAPSHOT.war: webapps/orionmonitor.war
[08:51] <brett__>     after: [tomcat-with-ssl]
[08:51] <brett__> this is the part
[08:51] <brett__> https://github.com/bsutton/tomcat-with-ssl-snap
[08:51] <brett__> currently in the snap I just have after: [tomcat-with-ssl]
[08:52] <brett__> so I've just tried your suggestion of including the part directly
[08:53] <brett__> Failed to pull source: unable to determine source type of 'https://github.com/bsutton/tomcat-with-ssl-snap'.
[08:53] <brett__> tomcat-with-ssl:
[08:53] <brett__>     plugin: python3
[08:53] <brett__>     source: https://github.com/bsutton/tomcat-with-ssl-snap
[08:53] <brett__> how do I tell it what the 'source type' is?
[08:53] <kalikiana> brett__: source-type: git
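Putting the suggestions in this thread together, the inlined part in the snap's snapcraft.yaml would look roughly like this (a sketch; the python3 plugin mirrors the example above and may well not be the right plugin for this Java-based part):

```yaml
parts:
  tomcat-with-ssl:
    plugin: python3       # as suggested above; substitute the part's actual plugin
    source: https://github.com/bsutton/tomcat-with-ssl-snap
    source-type: git      # needed: snapcraft can't infer the source type from this URL
```

The key point is that a remote part and a local part share the same syntax, so any remote part definition can be pasted under `parts:` directly for local testing.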
[08:54] <brett__> ok, it appears to be building :))
[08:55] <brett__> I think we need some more doco on this. I've re-read the user guide multiple times and it really doesn't explain how this works.
[08:55] <brett__> the python3 plugin was particularly non-obvious
[08:57] <brett__> I will put some notes on a forum post for others that might be interested.
[08:57] <kalikiana> brett__: You can try `snapcraft help plugins` or more specifically `snapcraft help python3` to get docs
[08:57] <brett__> I have seen one warning that didn't make much sense
[08:57] <brett__> You must give at least one requirement to install (maybe you meant "pip install /home/bsutton/git/orionmonitor/snap-projects/installer/parts/tomcat-with-ssl/python-packages"?)
[08:59] <brett__> just looked at the doco for python3 plugin and it certainly wouldn't have led me to use it as you suggested.
[08:59] <brett__> the doco basically says to use it for python3 projects and my part is java so I would have just sailed past it.
[09:16] <mvo> hm, I'm getting "snap install hello\nerror: unable to contact snap store" - is that just me/my connection?
[09:19] <mborzecki> mvo: same here
[09:21] <pstolowski> mvo, and here
[09:21] <mvo> just asked in #snapstore and they know and work on it
[09:21] <mvo> fallout from the DDoS we introduced with the strict schedule time handling
[09:22] <mvo> and back
[09:23] <mborzecki> sorry :(
[09:24]  * mvo hugs niemeyer for his review
[09:24]  * niemeyer hugs mvo back
[09:27] <pstolowski> mvo, ack, thx
[09:34] <pedronis> mvo: you haven't yet changed to not wait for in-progress default providers in prereq, right? I saw niemeyer's comment; sadly we have a bit of a complication: what we need for bases vs default providers is a bit different, bases need to be there much earlier, I suppose to run any hook
[09:36] <mvo> pedronis: I changed it so that new snaps getting installed (base,default-providers) will be waited upon by the original snap only
[09:37] <pedronis> what is the original snap?
[09:37] <mvo> pedronis: snap install "foo" might pull in base, prereq1, prereq2; then foo will wait for these three but there's no waiting between the three new ones
[09:38] <mvo> pedronis: but you are right, I think we only need the waits for bases, everything else now does not need any waits at all
[09:38] <pedronis> yes, we still need to avoid adding multiple tentative installs for other stuff
[09:38] <pedronis> but we need to wait only for bases
[09:39] <pedronis> otoh it destroys the simplicity of the comment's pov, life is complicated
[09:39] <mvo> pedronis: yeah, let me fix that, it will only wait for bases, the rest will be done in parallel. if you are looking at this, please let me know what further tests you would like to see
[09:41] <pedronis> mvo: I think the new code can support things that are default provider of each other, otoh it's a complicated contrived scenario
[09:41] <niemeyer> mvo, pedronis: In the future, we may be able to drop the requirement to wait on install altogether, and bake that logic in the "ready" feature
[09:41] <pedronis> s/complicated/completely/
[09:41] <niemeyer> I hope this is one of the things we'll be able to tackle in that next round of polishings
[09:42] <mvo> pedronis: yeah, I think so as well, I can build a test case for that if you want, I think the circular case is handled now
[10:10] <Chipaca> I'm seeing snapd not release the unix socket when built with 1.10
[10:10] <Chipaca> on success; on error it releases it fine
[10:13] <mwhudson> Chipaca, mvo: i'm thinking of making go 1.10 the default for bionic btw
[10:13] <Chipaca> mwhudson: not 1.11? tsk
[10:13] <Chipaca> :-D
[10:14] <mwhudson> Chipaca: well i could shove git master in whatever state it's in into the archive the day before release, that's a sound plan, right?
[10:14] <Chipaca> mwhudson: well, it certainly rings a bell
[10:14] <mwhudson> i'm just happy we don't have a new architecture having its first release in an LTS this time...
[10:15] <Chipaca> mwhudson: seriously, 1.10 is fine for 18.04, but by 20.04 it'll be just as old as 1.6 is today
[10:15] <mwhudson> yeah i know
[10:15] <mwhudson> something else i also know: it's time for bed
[10:16]  * mwhudson zzz
[10:25] <mborzecki> FYI master is currently broken on Arch, can't start snaps, one user brought it up in snapd-git comments, i'm looking into it, otoh 2.31.1 works just fine
[10:26] <ogra_> cjwatson, is there something like "set -x" i can use in grub.cfg (i got u-boot->grub working but it falls over when loading the initrd (i think at least))
[10:32] <pstolowski> mborzecki, commented on one of your timer PRs
[10:32] <mborzecki> pstolowski: thanks
[10:35] <pstolowski> niemeyer, hey, thanks for the review of limitedbuffer! will you have a moment to take a look at #4551?
[10:35] <mup> PR #4551: ifacestate: do not auto-connect manually disconnected interfaces <Created by stolowski> <https://github.com/snapcore/snapd/pull/4551>
[10:39] <niemeyer> pstolowski: Is this a requirement for auto-connect to land?
[10:39] <niemeyer> pstolowski: Sorry, I meant interface hooks more generally
[10:40] <cjwatson> ogra_: set debug=<various things>.  set debug=all gives you everything though will be pretty noisy, but if you have a serial console so that you can capture the full dump it's the easiest way to start.  https://www.gnu.org/software/grub/manual/grub/grub.html#debug
[10:40] <ogra_> cjwatson, perfect, thanks !
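A minimal grub.cfg sketch of the debug toggles cjwatson describes (the facility names available vary between GRUB versions; `all` is the safe but very noisy starting point):

```
# In grub.cfg, before the commands you want to trace:
set debug=all            # everything; best captured over a serial console
# or restrict to specific facilities, e.g. kernel/initrd loading:
set debug=linux,loader
```

Unsetting the variable (`set debug=`) turns the tracing back off.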
[10:51] <pstolowski> niemeyer, no, it's independent
[10:52] <niemeyer> pstolowski: So I'd prefer to hold that back if you don't mind
[10:52] <niemeyer> pstolowski: Marking as blocked until the other features have landed, are working fine, and were released in stable
[10:52] <mup> PR snapd#4713 opened: cmd/snap: add self-strace to `snap run` <Created by bboozzoo> <https://github.com/snapcore/snapd/pull/4713>
[10:53] <niemeyer> pstolowski: Otherwise this is one more change piled up on that already complex space
[10:53] <pstolowski> niemeyer, ok
[11:04] <mborzecki> any idea what's happening here? https://paste.ubuntu.com/p/JCMTRnG3PQ/ snapd from master
[11:04] <mborzecki> the last thing is: execve("/usr/lib/snapd/snap-device-helper", ["/usr/lib/snapd/snap-device-helpe"..., "add", "snap_ohmygiraffe_ohmygiraffe", "/sys/class/mem/null", "1:3"], 0x7fff4eff7858 /* 0 vars */) = -1 ENOENT (No such file or directory)
[11:05] <mborzecki> that's tough luck because /usr/lib/snapd/snap-device-helper is there in the filesystem
[11:08] <pstolowski> mborzecki, hmm this snap works here (core from beta)
[11:12] <mborzecki> pstolowski: i'm running 2.31.r470.g8fd74f718 (latest master), without reexec, that would make it ahead of --edge and --beta
[11:23] <cachio_> mvo, hey
[11:24] <cachio_> mvo, did you see the email that I sent you?
[11:24] <pstolowski> mborzecki, are you using core from edge? i think this helper needs to exist in core. it doesn't exist in beta core image, only in edge, so if you're running master you may have an incompatibility
[11:26] <mborzecki> pstolowski: yeah, i think it runs in the mount namespace of the snap, probably edge will work
[11:27] <mborzecki> pstolowski: .. and it does
[11:27] <mvo> cachio_: that i386 is unhappy? I saw it, could you point me to a log with the failures please?
[11:27] <pstolowski> good
[11:29] <cachio_> mvo, https://paste.ubuntu.com/p/mdJgCbyqnR/
[11:30] <cachio_> mvo, could be related to the problem that you mentioned yesterday?
[11:38] <mvo> cachio_: hm, that looks unrelated, strange, can you see with `snap changes` what is actually going on?
[11:41] <cachio_> mvo, it shows there is an auto-refresh
[11:42] <cachio_> mvo, I canceled it
[11:42] <cachio_> mvo, then the tests continues
[11:42] <cachio_> but I tried 3 times and always saw the same problem
[11:43] <cachio_> mvo, I'll try again
[11:47] <mvo> cachio_: strange, I will try after lunch
[11:49] <cachio_> mvo,
[11:49] <cachio_> np
[11:52] <Facu> hi! where are snapd logs? I want to see what it's currently doing in my system (thanks!)
[11:53] <mborzecki> Facu: try `journalctl -u snapd`
[11:55] <Facu> mborzecki, last thing I see is "Started Snappy daemon.", 1h20m before, and it's been hammering a couple of CPUs since... maybe I should enable debug logs or something?
[11:57] <mborzecki> Facu: you can add SNAPD_DEBUG=1 to its environment file, `systemctl cat snapd` to find the right path
[11:58] <Facu> mborzecki, done, and restarted, let's give it some minutes, thanks!
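An alternative to editing the environment file directly is a systemd drop-in, which follows the standard systemd override convention and survives package upgrades (the unit name comes from `systemctl cat snapd` above; the drop-in path is the usual systemd location, not something snapd-specific):

```ini
# /etc/systemd/system/snapd.service.d/debug.conf
# After creating this, run:
#   systemctl daemon-reload && systemctl restart snapd
# then follow the now-verbose log with: journalctl -u snapd -f
[Service]
Environment=SNAPD_DEBUG=1
```

Deleting the drop-in and repeating the daemon-reload/restart reverts to normal logging.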
[12:05] <mup> PR snapcraft#1944 closed: tests: split the plugins tests in the same directory <Created by elopio> <Merged by sergiusens> <https://github.com/snapcore/snapcraft/pull/1944>
[12:20] <sergiusens> o/
[12:21] <sergiusens> mvo: fyi https://pastebin.ubuntu.com/p/Svkx2KZ8jG/
[12:23] <mvo> sergiusens: is this inside lxd? is it a new thing?
[12:23] <sergiusens> mvo:  inside lxd, yes
[12:24] <sergiusens> mvo: it looks like the same thing (except I used to trigger it on garbage collection)
[12:25] <mvo> sergiusens: hrm, drat - zyga will want to know about this once he is back
[12:42] <mup> PR snapd#4707 closed: tests: spread test for broadcom-asic-control interface <Created by sergiocazzolato> <Merged by sergiocazzolato> <https://github.com/snapcore/snapd/pull/4707>
[12:57] <sergiusens> snappy-m-o: autopkgtest 1943 bionic:amd64 bionic:arm64 bionic:armhf
[12:57] <snappy-m-o> sergiusens: I've just triggered your test.
[12:57] <sergiusens> mvo: we sort of need that working to make this https://forum.snapcraft.io/t/per-project-containers/388/16 a thing
[13:00] <Chipaca> sergiusens: is this running snapd in a snapped snapcraft inside a snapped lxd ?
[13:00] <diddledan> snapception
[13:01] <Chipaca> one day we're going to find we've completely melted the bits of the kernel that handle mount namespaces
[13:01] <Chipaca> it'll just be a 20MB array of half-bits
[13:12] <mvo> seb128: please invite pedronis for the install state.json call too
[13:13] <seb128> mvo, done
[13:16] <Son_Goku> I guess I'm spending lunch today updating snapd and snapd-glib
[13:16] <popey> \o/
[13:17] <Son_Goku> kyrofa, ping
[13:23] <kalikiana> Son_Goku: you'll have better luck next week, since he's taking care of a kid that just came out of the oven
[13:23] <Son_Goku> oh jeez, okay
[13:24] <kalikiana> Son_Goku: anything I could potentially help with?
[13:24] <sergiusens> Chipaca: all correct except the snapped snapcraft; this is snapcraft in a venv but installing what it snapped in the env (the forum post, however, is the full deal: the snapcraft snap, any build-snaps entry installed, running inside a container created by lxd as a snap or whatever mechanism)
[13:24] <Son_Goku> kalikiana, I'm looking to try to make some time to bang out an initial prototype of rpm-based snaps in snapcraft
[13:25] <Son_Goku> I wanted to know if there were any major changes to how snapcraft's backends work since we last looked at it in October
[13:26] <pedronis> Chipaca: was your question about errors in the new api?
[13:28] <Guest28851> hello, is there a way to specify the python interpreter to use on the package ?
[13:29] <Guest28851> instead of "python3", I want /usr/bin/python3.6
[13:29] <kalikiana> Son_Goku: Not really, no.
[13:29] <kalikiana> Son_Goku: Also, awesome to hear that :-D
[13:29] <Guest28851> So it will always use the default python3 of the system?
[13:32] <Son_Goku> Guest28851: yes
[13:33] <Guest28851> Ok, thanks
[13:36] <Guest28851> Also, I think it doesn't support py3.6
[13:46] <mvo> seb128: ta!
[13:47] <jdstrand> mvo: hi! thinking about the OOM issue
[13:48] <jdstrand> mvo: with spread tests, how many squashfs are mounted?
[13:51] <jdstrand> mvo: I ask cause while we can trigger the issue and jjohansen is looking into it, it seems (to me) to trigger too quickly with the amount of ram the system is supposed to have (1.5G)
[13:52] <jdstrand> mvo: I'm kinda wondering if there are lingering squashfs mounts (but they seem to be cleaned up on snap remove, so not sure why that would be the case)
[13:55] <mvo> jdstrand: I need to investigate how many but I also see it when this test runs in isolation
[13:56] <mvo> jdstrand: it's also not 1.5G - it's 400M on i386 because it eats LowMem
[13:56] <mvo> jdstrand: so it's ~400/15 steps before it dies
[13:56] <mvo> (that is my rough estimate)
[14:06] <jdstrand> jjohansen: fyi ^
[14:11] <jdstrand> mvo: there are LowTotal and LowFree in /proc/meminfo. in my i386 VM with the artful release kernel (no spectre/meltdown) and -updates kernels, I see that LowTotal is approaching the total ram (768M in the VM I'm using).
[14:12] <jdstrand> mvo: I realize LowFree is going down because there is a leak, but, for example, I only have 200M in LowFree here, yet it takes a few loops to hit the condition
[14:13] <jdstrand> mvo: as opposed to load, connect, oom
[14:14] <jdstrand> mvo: which you observe with twice the ram and twice the LowFree
[14:24]  * kalikiana lunch
[14:28] <jdstrand> jjohansen: fyi, note my followup ^
[14:29]  * pstolowski lunch
[14:39] <mup> PR snapcraft#1945 opened: elf: clear execstack by default <bug> <Created by sergiusens> <https://github.com/snapcore/snapcraft/pull/1945>
[14:39] <sergiusens> jdstrand: when you have time, mind reviewing https://github.com/snapcore/snapcraft/pull/1945 ?
[14:39] <mup> PR snapcraft#1945: elf: clear execstack by default <bug> <Created by sergiusens> <https://github.com/snapcore/snapcraft/pull/1945>
[14:45] <jdstrand> sergiusens: sure
[14:46] <sergiusens> thanks
[14:53] <mup> Bug #1750840 opened: [Enhancement] Please show (optionally) size consumed by snap in snap list <enhancement> <Snappy:New> <https://launchpad.net/bugs/1750840>
[14:57] <mup> PR snapcraft#1877 closed: tests: move test files out of the snapcraft dir <Created by elopio> <Closed by elopio> <https://github.com/snapcore/snapcraft/pull/1877>
[15:42] <mup> PR snapcraft#1892 closed: meta: parse float version as string <Created by kalikiana> <Closed by sergiusens> <https://github.com/snapcore/snapcraft/pull/1892>
[15:50] <xnox> mvo, 2.31.1+18.04	snapd/2.31.1+18.04 systemd/237-3ubuntu3 still red on s390x, but passes on other arches =(
[15:50] <xnox> this is latest snapd, triggered together with latest systemd.
[15:50] <xnox> + [[ ubuntu-18.04-s390x == ubuntu-14.04-* ]]
[15:50] <xnox> + systemctl stop dbus-provider
[15:50] <xnox> Failed to stop dbus-provider.service: Unit dbus-provider.service not loaded.
[15:50] <xnox> hm... something interesting is expected on s390x?
[15:50] <xnox> what is "location-control"?
[15:50] <xnox> 2018-02-21 14:31:31 Failed task prepare: 1
[15:50] <xnox>     - autopkgtest:ubuntu-18.04-s390x:tests/main/interfaces-location-control
[15:50] <xnox> 2018-02-21 14:31:31 Failed task restore: 1
[15:50] <xnox>     - autopkgtest:ubuntu-18.04-s390x:tests/main/interfaces-location-control
[15:50] <xnox> error: unsuccessful run
[15:58] <pedronis> xnox: an interface to control locationd  I think
[16:02] <xnox> pedronis, is that available on s390x?
[16:02] <xnox> $ apt search locationd -> gives me nothing
[16:02] <xnox> pedronis, what is locationd?
[16:04] <mvo> xnox: yeah, s390x used to be virtual and now is (more) real so the tests run for the first time
[16:05] <mvo> xnox: all sorts of test snaps missing, we are making progress there
[16:05] <mvo> xnox: thanks for including my systemd "fix" (workaround) for the apparmor label issue!
[16:06] <pedronis> xnox: sorry locationd is the name of a snap, I think the deb is ubuntu-location-service-*
[16:07] <pedronis> it's for GPS access basically
[16:08] <xnox> ok, source package is location-service and that does not exist for s390x
[16:08] <xnox> and has been removed everywhere post-xenial
[16:09] <xnox> pedronis, mvo, none of the location service tests should be running on bionic+, no? and definitely not on s390x
[16:09] <pedronis> xnox: well the snap still exists
[16:09] <pedronis> for core
[16:09] <pedronis> so they need to run somewhere
[16:09] <pedronis> but for sure not s390x
[16:09] <xnox> true
[16:09] <xnox> location-service is in dep-wait state since xenial
[16:10] <xnox> Missing build dependencies: libubuntu-platform-hardware-api-headers, libubuntu-platform-hardware-api-dev
[16:10] <xnox> i don't think you will be able to build location-service on s390x ever.
[16:15] <xnox> mvo, pedronis - so i think we should open a bug; badtest snapd on s390x; and release current systemd & snapd.
[16:16] <mvo> xnox: yeah, lets put snapd on s390x in the "ignore" list for promotion for now, we work towards fixing it
[16:17] <mvo> xnox: and we will skip this particular test on s390x as a start (I'm sure there are more)
[16:17] <mvo> (more that will need fixing)
[16:19] <xnox> https://bugs.launchpad.net/ubuntu/+source/snapd/+bug/1750856
[16:19] <mup> Bug #1750856: snapd on s390x tried to run locationd tests, which does not exist on s390x <snapd (Ubuntu):New> <snapd (Ubuntu Bionic):New> <https://launchpad.net/bugs/1750856>
[16:19] <sergiusens> niemeyer: https://forum.snapcraft.io/t/fixing-the-snapcraft-cache-lp-1582469/4127 (for when you have time)
[16:20] <mup> Bug #1750856 opened: snapd on s390x tried to run locationd tests, which does not exist on s390x <britney:New> <Snappy:New> <snapd (Ubuntu):New> <snapd (Ubuntu Bionic):New> <https://launchpad.net/bugs/1750856>
[16:27] <xnox> slangasek, https://code.launchpad.net/~xnox/britney/badtest-snapd-s390x-locationd/+merge/338446 please =) based on above conversation and bugs
[16:48] <slangasek> xnox: done
[16:51] <mvo> cachio__: what are the chances for 2.31.1 for candidate? is i386 still giving you trouble?
[16:51] <mvo> xnox: thank you!
[16:55] <cachio__> mvo, I am researching the last failure
[16:56] <cachio__> mvo, after that we can go to candidate
[16:57] <mvo> cachio__: thank you!
[17:01] <niemeyer> sergiusens: Noted and replied (a while ago, just for the record)
[17:01] <niemeyer> jdstrand: ping
[17:02] <jdstrand> niemeyer: hey
[17:02] <niemeyer> jdstrand: Heya, do you have a moment for a call?
[17:02] <jdstrand> niemeyer: yes
[17:03] <niemeyer> jdstrand: https://hangouts.google.com/hangouts/_/canonical.com/snap-interfaces
[17:03] <cachio__> mvo, 2.31.1 is in candidate now
[17:08] <mvo> cachio__: \o/
[17:08] <mvo> cachio__: thank you
[17:08] <cachio__> mvo, np
[17:11] <sergiusens> niemeyer: thanks
[17:21] <Chipaca> unnnh
[17:21] <Chipaca> why are weekdays in timeutils called weeks?
[17:21] <Chipaca> that is very confusing
[17:25]  * kalikiana wrapping up for the day
[17:26] <kalikiana> Chipaca: I vote for calling them years, much clearer that way :-P
[17:27] <kalikiana> Chipaca: Actually sounds like a Japanese thing. Like saying "convenience" when you mean a shop, or saying "work" when you mean a part-time job.
[17:28] <kalikiana> You add the missing word in your head
[17:30] <mup> PR snapcraft#1946 opened: errors: add ability to send stack traces to sentry <Created by sergiusens> <https://github.com/snapcore/snapcraft/pull/1946>
[17:31] <Chipaca> kalikiana: my problem isn't with synecdoche per se
[17:31] <Chipaca> or would this be metonymy
[17:31] <Chipaca> darn it
[17:32] <Chipaca> anyway, my problem isn't with the figure of speech :-)
[17:33] <mup> PR snapcraft#1946 closed: errors: add ability to send stack traces to sentry <Created by sergiusens> <Closed by sergiusens> <https://github.com/snapcore/snapcraft/pull/1946>
[17:33] <mup> PR snapcraft#1947 opened: errors: add ability to send stack traces to sentry <Created by sergiusens> <https://github.com/snapcore/snapcraft/pull/1947>
[17:36] <cachio__> jdstrand, hey
[17:37] <cachio__> I am making an update on the interface tests, and removing part of the connection checks
[17:37] <cachio__> as we are already checking that in the interfaces-many test
[17:38] <cachio__> jdstrand, do you have any idea about how to check that the autoconnection is done or not as part of this test?
[17:38]  * Chipaca -> walk
[18:13] <mvo> pedronis: we still want  4599 for 2.32, right?
[18:16] <pedronis> mvo: yes
[18:17] <pedronis> mvo: I was having dinner
[18:18] <mvo> pedronis: np, I see what I can do tonight about the review, otherwise tomorrow early
[18:18] <mvo> pedronis: looks like 2.32 will be branched tomorrow morning(ish)
[18:19] <pedronis> mvo: I'm trying to see if I can re-review your branch still tonight
[18:23] <popey> Is there any way to switch a snap to devmode without removing it (and thus losing all my data)?
[18:23] <popey> snap refresh doesn't seem to take any notice of the --devmode flag
[18:30] <pedronis> mvo: left some comments still
[18:30] <pedronis> mvo: did we lose waiting for the base case?
[19:04] <mup> PR snapcraft#1948 opened: tests: move test files out of the snapcraft dir <Created by elopio> <https://github.com/snapcore/snapcraft/pull/1948>
[19:04] <mvo> pedronis: hm, iirc I added code that would make everything in the change wait for the base snap, let me double check
[19:06] <mvo> pedronis: thanks for your comments, I'm looking now
[19:08] <mvo> pedronis: aha, I see what you mean now, checking
[19:18] <pedronis> mvo: it's called setup-profiles  not setup-profile I think
[19:20] <mvo> pedronis: indeed, thanks. I re-added the wait, the test is still missing, I look into this next
[19:29]  * pedronis eods
[19:38] <jdstrand> xnox: hehe "And mainframes don't usually change their GPS location ;-)"
[20:43] <pat-s> Hi guys, does anyone know why the LD paths of my binary are altered during the prime stage? Everything's fine during build/install/stage until it gets primed..
[20:49] <mup> PR snapd#4563 closed: tests: new spread test for gpio-memory-control interface <Created by sergiocazzolato> <Merged by sergiocazzolato> <https://github.com/snapcore/snapd/pull/4563>
[23:17] <mup> PR snapd#4714 opened: interfaces/apparmor,system-key: add upperdir snippets for strict snaps on livecd <Created by jdstrand> <https://github.com/snapcore/snapd/pull/4714>