[05:20] <mborzecki> morning
[06:38] <mborzecki> mvo: morning
[06:39] <mvo> hey mborzecki
[07:03] <pstolowski> morning
[07:07] <mborzecki> pstolowski: hey
[07:13] <mvo> good morning pstolowski
[07:14] <mvo> pstolowski: 7016 has two +1, can I squash it and cherry pick it to 2.40?
[07:14] <mborzecki> https://github.com/snapcore/snapd/pull/7093 can land easily
[07:15] <mvo> pstolowski: also see my last comment, AIUI the sublevel patch is applied no more often now than before? or am I misreading something?
[07:15] <pstolowski> mvo: yes! thank you
[07:15] <mvo> pstolowski: thank *you*
[07:36] <mborzecki> passing a suite selector with ... on spread command line, those are not ORed apparently
[07:38] <mborzecki> duh, manual: true :/
[07:41] <zyga> Hey, trying to connect from my laptop but fighting network issue
[07:41] <zyga> Somehow cannot tether
[08:05] <pedronis> mborzecki: reviewed again 6890, it needs a 2nd review; John gave it but it has changed a lot since.
[08:06] <mborzecki> pedronis: great, i'll ping Chipaca when he's around
[08:41] <zyga> re
[08:41] <zyga> yay
[09:38] <ackk> hi, is there a way for a (confined) snap to know the channel it's tracking?
[09:39] <zyga> ackk: no, but we discussed adding that via snapctl
[09:39] <zyga> ackk: perhaps pstolowski knows this?
[09:41] <pedronis> I don't remember discussing this in particular
[09:42] <pstolowski> yep, as pedronis says
[09:42] <pedronis> we mainly discussed: checking whether an interface is connected, check whether there is new revision, and very recently checking whether there's a missed/pending auto-refresh because an app was running
[09:43] <pedronis> I think it's not totally out of the question to return this info in the 2nd of that list
[09:43] <pedronis> but I would like to understand the use case
[09:44] <ackk> pedronis, in maas' case, we have an option to deploy a machine as a maas rackcontroller, which installs the snap. ideally we want the same version of the currently running maas. of course there's no guarantee that the channel doesn't have a newer revision, but pointing the new machine to the same channel is the best option
[09:45] <pedronis> ackk: you want to use cohorts for that
[09:45] <pedronis> that's what they are designed for
[09:45] <ackk> pedronis, how do they work?
[09:45] <pedronis> make sure a cluster always uses the same revision
[09:46] <pedronis> ackk: https://forum.snapcraft.io/t/managing-cohorts/8995  they will be in 2.40
[09:46]  * ackk reads up
[09:47] <ackk> pedronis, so the idea is you create a cohort on machine A ("pinning" the version of maas), deploy machine B and install maas using the cohort, delete the cohort?
[09:48] <pedronis> ackk: yes, you don't delete cohorts though
[09:48] <pedronis> they don't take space in the store
[09:48] <ackk> pedronis, but they will prevent both machines from getting updated if a new snap is released right?
[09:48] <zyga> pedronis: we did discuss checking which channel is being tracked in lyon
[09:49] <pedronis> zyga: I don't remember, maybe in a conversation I was not in, or maybe it was before I arrived
[09:49] <zyga> pedronis: perhaps in a hallway but I'm sure we had that conversation
[09:49] <zyga> I don't recall that detail
[09:49] <pedronis> ackk: they will prevent the machine from getting updated for 90 days, then they update, but again to the same revisions
[09:50] <ackk> pedronis, ok. so if we don't want to hold the update, just to get them on the same channel, deleting is fine. correct?
[09:50] <pedronis> ackk: deleting what?
[09:50] <ackk> pedronis, the cohort after installing machine B
[09:50] <pedronis> ackk: there is no deleting of cohorts
[09:51] <ackk> sorry, i mean, leaving it
[09:51] <pedronis> ackk: no, if you leave it
[09:51] <pedronis> they will potentially go out of sync
[09:52] <pedronis> if you want to refresh you do the same
[09:52] <pedronis> get a new cohort
[09:52] <pedronis> on a machine
[09:52] <pedronis> and refresh again on the others using that cohort
[09:52] <zyga> mborzecki: found one more leaky test https://github.com/snapcore/snapd/pull/7091/commits/bace705b928d065a61870cf16953b652ad5c576a
[09:53] <ackk> pedronis, but you have to leave the cohort to refresh, right?
[09:53] <pedronis> ackk: no
[09:53] <ondra> zyga ping
[09:54] <zyga> hey ondra
[09:54] <pedronis> ackk: you do snap refresh --cohort=<new-cohort>
[09:54] <ondra> zyga I have machine which just went into funny state
[09:54] <ondra> zyga opengrok-2 opengrok.tomcat[3188]: cannot locate base snap core18: No such file or directory
[09:54] <ondra> zyga ut
[09:54] <ondra> zyga it's classic ubuntu 18.04
[09:54] <pedronis> ackk: you can switch from cohort to cohort without leaving using those ops
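[editor's note] The cohort workflow pedronis describes can be sketched as a shell sequence. The `snap create-cohort`, `snap install --cohort` and `snap refresh --cohort` commands are real (snapd 2.40+), but the snap name (`maas`), the sample YAML and the key value below are illustrative placeholders, not captured output:

```shell
#!/bin/sh
# Cohort workflow sketch (snapd 2.40+). The real commands are shown as
# comments because they need a store-connected machine:
#   machine A:   snap create-cohort maas              # prints a cohort key
#   machine B:   snap install --cohort="$KEY" maas    # pin to same revision
#   to refresh:  mint a fresh key on one machine, then on the others run
#                snap refresh --cohort="$NEW_KEY" maas
#
# "snap create-cohort" prints YAML; here is a sample (with a made-up key)
# and the extraction of the key from it:
create_cohort_output='cohorts:
    maas:
        cohort-key: MSBtYWRlLXVwLXNhbXBsZS1rZXk'
KEY=$(printf '%s\n' "$create_cohort_output" | awk '/cohort-key:/ {print $2}')
echo "KEY=$KEY"
```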
[09:54] <zyga> ondra: is core18 mounted?
[09:54] <ackk> pedronis, oh, I see, I thought create-cohort would be related to what you have, I see it's not
[09:55] <zyga> ondra: mborzecki reported weird system state after systemd update today
[09:55] <ondra> zyga it has not rebooted for weeks
[09:55] <ondra> zyga it's mounted, but there is no symlink to 'current'
[09:55] <zyga> ondra: but systemd updates in place
[09:56] <zyga> ondra: oh, that's curious
[09:56] <mborzecki> ondra: looked like systemd lost track of the loopback devices and where they were mounted after it was daemon-reexec'ed during the update
[09:56] <zyga> ondra: can you check snap changes?
[09:56] <zyga> mvo: ^^
[09:56] <ackk> pedronis, I see, thanks
[09:56] <zyga> mvo: current went away on ondra's machine
[09:56] <ondra> zyga yep 86   Error   today at 07:04 UTC      today at 07:04 UTC      Auto-refresh snap "core18"
[09:57] <ondra> zyga https://paste.ubuntu.com/p/G4rT485kFr/
[09:57] <ondra> zyga hmm Error   today at 07:04 UTC  today at 07:04 UTC  Make current revision for snap "core18" unavailable
[09:58] <ondra> zyga I did not update systemd for ages there
[09:58] <zyga> ondra: can you show snap tasks  for that change please?
[09:58] <ondra> zyga this machine is running web service, so I did not touch it for several weeks/months
[09:59] <zyga> ondra: some updates auto-apply via the security pocket
[09:59] <ondra> zyga do you mean that paste bin I sent?
[09:59] <zyga> oh, looking
[09:59] <zyga> ondra: your fs is broken
[09:59] <zyga> ondra: is / writable?
[09:59] <ondra> zyga it's classic ubuntu
[10:00] <zyga> ondra: is it writable
[10:00] <zyga> looks like it switched to read only mode
[10:00] <ondra> zyga looks like
[10:00] <ondra> I was able to touch file in home
[10:00] <ondra> zyga and root
[10:01] <zyga> actually, the error is curious
[10:01] <zyga> https://www.irccloud.com/pastebin/CEml5ETz/
[10:01] <zyga> er
[10:02] <zyga> that's the $SNAP_DATA data-set
[10:02] <zyga> is /var/snap mounted on another partition?
[10:02] <ondra> zyga it is :)
[10:03] <zyga> is that partition read only?
[10:03] <ondra> zyga nope writable
[10:04] <zyga> ondra: then I don't know
[10:04] <zyga> ondra: check system logs for around 2019-07-12T07:04:46Z
[10:04] <zyga> maybe something happened then?
[10:06] <ondra> zyga no, I could only see that read-only error there
[10:07] <ondra> zyga so manual 'snap refresh core18' went fine and fixed it
[10:07] <zyga> ondra: which cloud provider is this?
[10:11] <mvo> pstolowski: thanks for 7096! really nice to see how straightforward this is now
[10:11] <pstolowski> mvo: it became so much simpler with nulls internally
[10:12] <mvo> pstolowski: cool stuff!
[10:12] <pstolowski> i could throw away a large chunk of the new code
[10:13] <ondra> zyga you do not want to know, canonistack :D
[10:13] <ondra> zyga I blame it on cosmic rays
[10:13] <zyga> pstolowski: with https://github.com/snapcore/snapd/pull/7096/files we could look at tests that set stuff to unset it
[10:14] <zyga> I have a few for sure
[10:14] <zyga> thanks for that!
[10:17] <zyga> pstolowski: https://github.com/snapcore/snapd/pull/7096#pullrequestreview-261176295
[10:18] <pstolowski> zyga: thanks!
[10:20] <Chipaca> pedronis: fwiw, fyi, https://bugs.launchpad.net/ubuntu/+source/snapd/+bug/1828500
[10:21] <zyga> pstolowski: one more https://github.com/snapcore/snapd/pull/7096/files#r302922240
[10:21] <pedronis> Chipaca: we need to loop in whoever is making those images
[10:22] <Chipaca> pedronis: loop in, yes. With steel wire around <body parts>.
[10:22] <Chipaca> pedronis: assuming my suspicion is correct
[10:22] <Chipaca> which I suspect it is :-)
[10:22] <pstolowski> zyga: right, thanks!
[10:22] <Chipaca> anyhow, time for a break
[10:22]  * Chipaca breaks things
[10:27] <mborzecki> this is interesting https://forum.snapcraft.io/t/too-early-for-operations/12243/8
[10:27] <mborzecki> shouldn't core be listed there?
[10:28] <zyga> mborzecki: yes
[10:28] <zyga> whoever makes that seed needs to fix it
[10:29] <mborzecki> hm that's 19.10 :) at least it hasn't been released yet
[10:30] <mborzecki> mvo: do you know how seed.yaml is generated for those images?
[10:31] <zyga> mborzecki: perhaps manually
[10:31] <zyga> mborzecki: many people don't know about snap prepare-image and use a page full of shell to do the "equivalent"
[10:31] <zyga> given that there are no snap IDs there I bet it is the shell version
[10:32] <mvo> mborzecki: what image exactly?
[10:33] <mborzecki> mvo: see the topic, someone installed 19.10 from a daily image, seed.yaml looks broken
[10:33] <mvo> mborzecki: :( I think those seeds are generated "manually" by some livecd-rootfs shell. we can ask foundations (cc sil2100) how 19.10 daily image seed.yaml is generated and if it can be fixed
[10:34] <pedronis> we are trying to move them to prepare-image --classic
[10:34] <pedronis> but given the complexity of livecd-rootfs, it's not trivial
[10:40] <diddledan> Chipaca: I just added the seed.yaml from a hyper-v image
[10:40] <diddledan> ref: https://bugs.launchpad.net/ubuntu/+source/snapd/+bug/1828500
[10:46] <Chipaca> diddledan: thanks! could you do the followup, ie sudo snap info --verbose /var/lib/snapd/seed/snaps/*.snap | grep base:  ?
[10:48] <diddledan> done
[10:49] <Chipaca> diddledan: who creates these images, do you know?
[10:49] <diddledan> I don't
[10:50] <diddledan> looks like willcooke might know as he posted the blog entry about them https://ubuntu.com/blog/optimised-ubuntu-desktop-images-available-in-microsoft-hyper-v-gallery
[10:51] <Chipaca> willcooke: can we talk about QAing stuff :/
[10:52] <zyga> I can try the hyper v image later if anyone needs me to
[10:53] <diddledan> does snapd wait for the user to login before doing any pre-seed? I'm wondering if the way those images log you in is confusing matters - it uses XRDP over a Hyper-V internal socket
[10:54] <Chipaca> diddledan: no it does not
[10:54] <diddledan> ok, ignore that then :-p
[10:54]  * diddledan throws more random at the wall until stuff sticks :-D
[10:54] <Chipaca> diddledan: it's simple, the seed.yaml needs to be self-contained
[10:54] <willcooke> Chipaca, it has been working previously, because things like calc are snaps.  I have a feeling we have talked about this problem previously, but kenvandine will need to comment further.
[10:54] <Chipaca> diddledan: if a snap in seed needs core18, core18 needs to be in the seed
[10:55] <diddledan> aha. so the seed.yaml is missing core18!
[10:55] <Chipaca> willcooke: calc and things have needed core18 since shortly after 18.04
[10:55] <Chipaca> willcooke: those images have gone out, and we blogged about them, without them working: the _seeded_ snaps need core18
[10:55] <diddledan> that makes sense, because core did get installed by the pre-seed
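[editor's note] Chipaca's rule — every base a seeded snap needs must itself be in the seed — can be checked mechanically. The seed.yaml content below is a hypothetical sample of a broken seed; on a real image the file lives at /var/lib/snapd/seed/seed.yaml and each snap's base comes from `snap info --verbose <file>.snap`:

```shell
#!/bin/sh
# Check that every base needed by seeded snaps is itself seeded.
# Hypothetical sample seed, missing its bases (as on the broken images):
seed='snaps:
  - name: gnome-calculator
  - name: gtk-common-themes'
needed_bases='core core18'   # bases reported by "snap info --verbose"
for base in $needed_bases; do
    if printf '%s\n' "$seed" | grep -q "name: $base\$"; then
        echo "$base: seeded"
    else
        echo "$base: MISSING from seed"
    fi
done
```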
[10:55] <Chipaca> this isn't a weird things-got-updated-after-image-baking, the image is baked and has not been tested
[10:56] <Chipaca> willcooke: thus me asking about QA
[10:57] <Chipaca> willcooke: you have the same issue in dailies fwiw, if that's also 'under' you
[10:57] <Chipaca> willcooke: https://forum.snapcraft.io/t/too-early-for-operations/12243?u=chipaca
[10:58] <Chipaca> (i'm less concerned about these because those are explicitly not qa'd)
[11:00] <willcooke> Chipaca, kenvandine knows about this, he'll be online in an hour or so I expect
[11:01] <Chipaca> ok
[11:01] <Chipaca> willcooke: both of these?
[11:02] <willcooke> Chipaca, yes, I think it's the same problem, and I have a feeling that he's spoken to someone in your team about this recently. I thought he'd worked around it, but it would seem not
[11:03] <Chipaca> willcooke: I'll wait and talk with kenvandine when they're around, then
[11:03] <Chipaca> my view is very snapd-centric but I don't see what there is to work around :-)
[11:07] <ackk> I've been seeing a weird behavior recently. At times some snaps (maybe it's actually just core18) show up in nautilus as removable devices. If I click on it it gets mounted under /media and I see the content, but the file is not actually there: https://paste.ubuntu.com/p/9DMCxHR3RN/
[11:09]  * zyga sees that as more rationale to mount snaps in dedicated namespaces only
[11:10] <ackk> zyga, I don't understand how nautilus finds it, the file is not there but I can mount/umount it multiple times and it actually works
[11:11] <zyga> ackk: nautilus probably tracks all mount ops
[11:14] <ackk> zyga, yeah but the snap is gone, and it's not mounted anymore
[11:14] <Chipaca> ackk: wait until you notice using tab completion on snap makes devices flicker in nautilus
[11:14] <Chipaca> ackk: *that* will do your head in
[11:14] <zyga> Chipaca: whaaat?
[11:14] <ackk> wat?
[11:15] <Chipaca> not all the time, sadly
[11:15] <mborzecki> zyga: rawhide cloud image https://paste.ubuntu.com/p/FTTpjJv2Hj/ hm doesn't look like only v2
[11:15] <zyga> mborzecki: perhaps not switched by default yet, the plan is to by release though
[11:16] <mborzecki> zyga: mhm
[11:19] <mborzecki> zyga: ok, on cgroup2 now only
[11:19] <zyga> mborzecki: woot
[11:19] <zyga> mborzecki: I'm fixing leaky tests
[11:19] <zyga> mborzecki: found a good way now
[11:19] <mborzecki> zyga: i mean the image is switched to unified hierarchy https://paste.ubuntu.com/p/YzcFg4rD7z/ ;)
[11:20] <zyga> mborzecki: ?
[11:20] <zyga> mborzecki: did you switch yourself
[11:20] <zyga> mborzecki: or did you update?
[11:20] <zyga> mborzecki: I like one thing about v2
[11:20] <zyga> mborzecki: run mount
[11:20] <zyga> mborzecki: see how short that is? :D
[11:20] <mborzecki> zyga: switched it myself, this is yesterday's compose
[11:21] <mborzecki> zyga: shorter but not super short :)
[11:22] <zyga> mborzecki: how long is it?
[11:25] <mborzecki> zyga: https://paste.ubuntu.com/p/sbfygMW6Hr/
[11:25]  * zyga sees more bpf
[11:26] <mborzecki> heh
[11:27] <mborzecki> people seem to use some automation to track upstream releases for AUR packages; mvo uploaded a release 2h ago, and 40 minutes ago I got an email from AUR that the package has been flagged already
[11:31] <zyga> god, why is apt removing lxd all the time
[11:31] <zyga> mvo: how can I check that a package is installed vs auto-installed
[11:41] <mvo> zyga: apt show lxd
[11:41] <mvo> zyga: watch out for "APT-Manual-Installed: yes
[11:41] <mvo> "
[11:41] <zyga> mvo: thank you
[11:41] <zyga> mvo: tests are wreaking havoc on the lxc preinstalled in the cloud image
[11:41] <zyga> mvo: because unrelated apt-get autoremove other-package # removes lxd
[11:43] <mvo> fun
[11:43] <mvo> apt install xld
[11:43] <mvo> apt install lxd
[11:43] <mvo> will set lxd to manual installed
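[editor's note] mvo's check can be scripted; `apt show` and `apt-mark showmanual` are real apt commands on Debian/Ubuntu, but the lxd output below is a trimmed made-up sample, not captured from a live system:

```shell
#!/bin/sh
# Was a package manually installed, or pulled in automatically?
# Real checks (Debian/Ubuntu), shown as comments:
#   apt show lxd 2>/dev/null | grep APT-Manual-Installed
#   apt-mark showmanual | grep -x lxd   # lists manually installed packages
#
# Parsing the field from sample "apt show" output:
apt_show_output='Package: lxd
APT-Manual-Installed: yes'
manual=$(printf '%s\n' "$apt_show_output" | awk -F': ' '/^APT-Manual-Installed/ {print $2}')
echo "manual=$manual"
```

A package that `apt autoremove` keeps deleting is one apt considers auto-installed; `apt install lxd` (or `apt-mark manual lxd`) flips it to manual.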
[11:43] <zyga> yeah, I'm just double checking the initial state
[11:43] <zyga> the qemu images don't have lxd
[11:43] <zyga> but cloud images do
[11:44]  * pstolowski lunch
[11:45] <mvo> 7k closed PRs
[11:46]  * mvo is impressed
[11:55] <zyga> mvo: so, I don't see the manual flag
[11:55] <zyga> mvo: but apt autoremove doesn't remove it
[11:56] <zyga> https://www.irccloud.com/pastebin/qI2YEpuO/
[11:58] <mborzecki> Chipaca: also https://forum.snapcraft.io/t/too-early-for-operations/12243/ is a daily 19.10 image
[11:58] <zyga> hmmmm
[11:59] <zyga> gosh
[11:59] <zyga> what is making this broken :(
[12:01] <Chipaca> zyga: which thing?
[12:08] <zyga> Chipaca: tests installing and removing packages
[12:08] <zyga> I don't understand it yet
[12:08] <zyga> but some remove operations remove lxd
[12:08] <zyga> why, don't know
[12:08] <zyga> fighting spread to give me shell before changing the image much
[12:09] <zyga> usability of spread is sometimes lacking
[12:10] <mvo> zyga: that means it's used by something
[12:11] <zyga> mvo: is that apt show line going to show me this information reliably, that it is auto-installed?
[12:11] <zyga> mvo: I want to understand what is breaking the state (changing it from manual to auto)
[12:11]  * zyga is extremely fed up with the heat
[12:12] <mvo> zyga: maybe we can have a quick chat ? I'm not sure I have enough context but there are a bunch of debug option available to figure out why something gets removed
[12:12] <mvo> zyga: do you have a link to a log or something?
[12:12] <zyga> mvo: I'm tired, need a break
[12:12] <zyga> mvo: I will look at this later
[12:14] <jdstrand> ackk: not sure you saw this from yesterday: 17:24 < jdstrand> ackk: fyi, https://people.canonical.com/~jamie/snapd-daemon-user/
[12:14] <mvo> zyga: yeah, lets just talk after the standup
[12:20] <mborzecki> zyga: broken snaps looks like a larger problem with systemd/5.2 kernel
[12:20] <mborzecki> zyga: just got the same state after a reboot
[12:22] <mborzecki> zyga: https://paste.ubuntu.com/p/QMH66ZXcNW/
[12:24] <mborzecki> arch package updated
[12:25] <Chipaca> drat, late for lunch
[12:25]  * Chipaca runs
[12:49]  * Chipaca om noms
[12:50]  * Chipaca is on fire
[12:50]  * Chipaca makes a note to use fewer chillies next time
[13:01] <Eighth_Doctor> ugh...
[13:01]  * Chipaca omw
[13:01] <Eighth_Doctor> https://twitter.com/hughsient/status/1149663770026733571
[13:01] <Eighth_Doctor> https://blogs.gnome.org/hughsie/2019/07/12/gnome-software-in-fedora-will-no-longer-support-snapd/
[13:03] <Chipaca> I was particularly perturbed by him saying we're at war
[13:08] <diddledan> "One ISO consortium member asks whether they should remove hydrogen engine support from the ISO car as they feel BMW is not playing fair." <-- that's the crux, it is being used as a punishment for daring to consider alternatives
[13:10] <diddledan> and let's be clear. the investigation is barely 20 days old
[13:15] <Eighth_Doctor> diddledan: imo, having something that isn't g-s for the non-gnome flavors would be useful anyway
[13:15] <Eighth_Doctor> right now, there's nothing available for DEs that don't have their own software management frontend
[13:22] <Eighth_Doctor> like MATE, for example
[13:24] <Eighth_Doctor> diddledan: I'm not happy that I'm being caught in the crossfire though :(
[13:27] <mborzecki> zyga: added a forum topic about the mount problems I'm seeing with 5.2: https://forum.snapcraft.io/t/snap-mounts-broken-with-kernel-5-2-and-systemd-242/12272
[13:28] <zyga> thank you, looking
[13:31] <Chipaca> Eighth_Doctor: is there anything i can do to help
[13:31] <diddledan> cuddles?
[13:31] <Chipaca> Eighth_Doctor: even if it's just something like, send you pizza or flowers or beer or something
[13:31] <Eighth_Doctor> Chipaca: I don't know
[13:31] <Eighth_Doctor> haha
[13:31] <zyga> mborzecki: thank you for reminding me about 2.40; I'll update openSUSE and look at Debian
[13:32] <Eighth_Doctor> zyga: is 2.40 out now?
[13:32] <mborzecki> Eighth_Doctor: yes
[13:32] <Eighth_Doctor> joy...
[13:32] <roadmr> https://snapcraft.io/core
[13:33] <Eighth_Doctor> alright, since hughsie is being a jerk, I guess I have to package gnome-software-snap separately
[13:34] <Eighth_Doctor> Chipaca: what makes this situation worse is that he's trapped me with the snap store thing
[13:34] <Eighth_Doctor> I don't know if I dare bring up this to FESCo or the Council, because he might try to use it to get snapd removed from the distro entirely
[13:34]  * diddledan wants 2.41 already :-p ('cos it'll have my change from https://github.com/snapcore/snapd/pull/6876)
[13:35] <Eighth_Doctor> Chipaca: he put that in the changelog entry for justification: https://src.fedoraproject.org/rpms/gnome-software/c/ef1746da8c7613c6bd29d9b9bbfbe95c4291bc85
[13:37] <Chipaca> Eighth_Doctor: *hugs*
[13:37] <diddledan> surely all this throwing of toys is just gonna force Ubuntu to not use gnome software ever again?
[13:37] <Eighth_Doctor> to me, it looks like he's just unhappy he can't have flathub
[13:38] <Eighth_Doctor> that's what it seems like to me, but he says he was already told Ubuntu wasn't going to
[13:39] <Eighth_Doctor> I think I've handled this fairly well, all things considered: https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/thread/O4CMUKPHMMJ5W7OPZN2E7BYTVZWCRQHU/
[13:45] <zyga> Eighth_Doctor: oh man
[13:45] <zyga> Eighth_Doctor: changelog entry
[13:46] <Eighth_Doctor> zyga: yup
[13:46] <zyga> Eighth_Doctor: yeah, it feels like a lot of hurt feelings now
[13:46] <pstolowski> mvo: btw #6977 needs your re-review
[13:47] <zyga> Eighth_Doctor: thank you for participating in that discussion, I feel that it is unfortunate it ended up like that and that  some heat-of-the-moment-commit-messages will be regretted eventually
[13:47] <zyga> but what's done is done
[13:47] <mvo> pstolowski: aha, nice - thank you!
[13:48] <Eighth_Doctor> zyga: I guess I'm packaging gnome-software-snap separately now
[13:48] <Eighth_Doctor> god damn all of this
[13:49] <mvo>  /o\
[13:49] <mborzecki> Eighth_Doctor: hopefully it's smooth as long as g-s doesn't break the a[pb]i
[13:49] <mvo> Eighth_Doctor: sorry for the trouble
[13:49] <Eighth_Doctor> mborzecki: ABI breaks every release
[13:50] <Eighth_Doctor> that's why no one does out of tree plugins
[13:55] <zyga> https://twitter.com/hughsient/status/1149677435933224960
[14:12]  * cachio afk
[14:40] <kenvandine> mvo: https://git.launchpad.net/~ubuntu-core-dev/ubuntu-seeds/+git/ubuntu/commit/?h=eoan&id=6f3707c8dd8fdc7c4c0ca1eeff428fa0e4678db6
[14:41] <diddledan> \o/
[14:41] <diddledan> awesome, kenvandine :-)
[14:41] <Chipaca> kenvandine: note you also need to have 'core'
[14:42] <kenvandine> Chipaca: iirc livecd-rootfs pulls in core explicitly
[14:42] <Chipaca> dunno if the new logic thing does that
[14:42] <kenvandine> or was
[14:42] <kenvandine> right
[14:42] <diddledan> you need to have both now
[14:42] <Chipaca> kenvandine: yesterday's daily did not
[14:42] <kenvandine> right
[14:42] <Chipaca> kenvandine: that's what the forum issue was about
[14:42] <kenvandine> looks like this has been broken for a while
[14:42] <Chipaca> kenvandine: the launchpad one was a missing core18
[14:42] <mvo> kenvandine: thank you!
[14:43] <Chipaca> kenvandine: what project should I assign the launchpad one to?
[14:43] <kenvandine> livecd-rootfs i guess
[14:43] <kenvandine> since according to this commit, livecd-rootfs should be explicitly pulling in core18
[14:44] <Chipaca> https://bugs.launchpad.net/ubuntu/+source/livecd-rootfs/+bug/1828500
[14:44] <Chipaca> done :)
[14:44] <diddledan> the commit is `- * core18` - the commit explicitly removed core18 from the list
[14:44] <kenvandine> right
[14:45] <kenvandine> but the commit message said they did that because livecd-rootfs explicitly adds it
[14:45] <Chipaca> kenvandine: what QA is done on these images? how did we get to make a blog post about an image that was broken in this way?
[14:45] <Chipaca> kenvandine: talking about the hyperv ones in particular
[14:45] <kenvandine> we manually tested the hyperv images, so i don't know what happened there
[14:45] <kenvandine> Chipaca: remember i had run into a similar problem with this?
[14:46] <kenvandine> and we figured out what was needed to get the seed to work
[14:46] <Chipaca> kenvandine: yes which is why i assumed there was some sort of automated qa to catch it
[14:46] <kenvandine> when i was creating the 19.04 images
[14:46] <Chipaca> 18.05 already had these issues (from hwe peeps)
[14:46] <Chipaca> or was it 18.06
[14:46] <kenvandine> and it was working, so i'm really confused as to how what ended up on partner-images is broken
[14:46] <Chipaca> anyway, yeah
[14:46] <kenvandine> unless it wasn't the right image that was copied there
[14:47] <kenvandine> which could be
[14:47] <kenvandine> at the time we were manually moving files around and had a series of broken images
[14:48] <kenvandine> we're just now getting a more formalized process as foundations is taking over creating those images
[14:54] <kenvandine> Chipaca: so i guess we should actually revert that commit and add core18 to the bionic seed?
[14:54] <diddledan> I can't find a corresponding commit (removing core18) for bionic branch - it looks like core18 was never added in the first place
[14:54] <kenvandine> right
[14:54] <kenvandine> it wasn't
[14:55] <kenvandine> livecd-rootfs was supposed to be handling that
[14:57] <diddledan> well it seemingly does for the downloadable isos.. somehow the hyper-v ones were different (unavoidably they need a separate spin because they have the xrdp changes)
[14:59] <pedronis> mvo: this is the pastebin I mentioned in the standup (about core18): https://pastebin.canonical.com/p/hVfzbMmwm2/
[15:01] <diddledan> bah @ private pastes :-p
[15:02] <diddledan> how's a nosey busybody gonna watch what ya'll up to if you keep it private ;-p
[15:02] <mvo> diddledan: haha - sorry for that, its just about testing stuff
[15:02] <diddledan> for "nosey busybody" read: diddledan
[15:04] <kenvandine> https://code.launchpad.net/~ken-vandine/ubuntu-seeds/+git/ubuntu/+merge/370061
[15:04] <kenvandine> https://code.launchpad.net/~ken-vandine/ubuntu-seeds/+git/ubuntu/+merge/370062
[15:04] <kenvandine> https://code.launchpad.net/~ken-vandine/ubuntu-seeds/+git/ubuntu/+merge/370063
[15:05] <diddledan> heh. launchpad is having a brainfart
[15:05] <diddledan> .. and it's back
[15:06] <diddledan> will need to make sure to check the images this produces that explicitly adding core18 doesn't somehow conflict with anything livecd-rootfs does
[15:07] <diddledan> I'm thinking about the non-hyper-v cases here where we had the right thing™ already
[15:08] <ackk> did something change in recent snapds wrt /proc/PID/mounts?
[15:10] <kenvandine> diddledan: we didn't though, the latest 19.10 isos have the same problem
[15:10] <diddledan> oh ok..
[15:10] <kenvandine> it's not just hyperv
[15:10] <kenvandine> the released 19.04 and 18.04.2 isos were fine
[15:11] <diddledan> I figured they were fine 'cos bionic and cosmic seemingly did the right thing
[15:11] <zyga> ackk: what are you seeing?
[15:11] <diddledan> you need mount-observe IIRC?
[15:11] <ackk> zyga, denials on those files. not sure if they're causing issues, but I don't think they were there before
[15:12] <zyga> ackk: the mount table is a "mild information leak" so it was always denied without mount-observe AFAIR
[15:12] <ackk> zyga, I see
[15:12] <zyga> ackk: I think nothing has changed there recently
[15:14] <ackk> zyga, thanks
[15:17] <Chipaca> kenvandine: the latest 19.10 isos have a different problem, maybe, not the same
[15:17] <Chipaca> kenvandine: as they have core18, and they don't have core
[15:17] <kenvandine> ah... actually i think they wanted to drop core and add the snapd snap?
[15:18] <kenvandine> they tried that late in disco and found problems
[15:18] <kenvandine> Chipaca: so maybe attempting the same for eoan
[15:18] <kenvandine> mvo: ^^ do you recall that?
[15:18] <mvo> kenvandine: in a meeting right now
[15:18] <kenvandine> ok
[15:21] <Chipaca> kenvandine: that will only work if you have a model and the model has a base
[15:21] <kenvandine> ok, i know that's something they wanted to do.
[15:22] <Chipaca> kenvandine: given that we're getting «cannot proceed without seeding “core”»
[15:22] <Chipaca> kenvandine: then you're missing one of those two things :-)
[15:22] <Chipaca> s/you/we/
[15:22] <Chipaca> s/you/we/g
[15:25] <Chipaca> kenvandine: to be clear, you'd only get that message in three scenarios:
[15:25] <Chipaca> dammit!
[15:25] <Chipaca> kenvandine: to be clear, we'd only get that message in three scenarios:
[15:25] <Chipaca> 1. we don't have a model (wut)
[15:25] <Chipaca> 2. the model doesn't have a base
[15:25] <Chipaca> 3. the model has a base that is explicitly set to "core"
[15:26] <Chipaca> that's because that message comes from installSeedEssential
[15:26] <Chipaca> which happens before the rest of the seeding proceeds
[15:26] <kenvandine> https://git.launchpad.net/ubuntu/+source/livecd-rootfs/tree/live-build/functions?h=ubuntu/eoan#n434
[15:28] <Chipaca> kenvandine: yeah, the "is the model ok with this" isn't in the picture there is it
[15:28] <kenvandine> right
[15:29] <Chipaca> snap_prepare_assertions does look into the model, so maybe it can extract this info to feed into the seeding logic
[15:29] <Chipaca> but, dunno
[15:30] <Chipaca> anyway, imma go break things for a bit
[15:30]  * Chipaca takes a break
[15:32] <kenvandine> looks to me like the disco branch has the same logic, so expects to be able to seed snapd instead of core
[15:33] <kenvandine> but the bionic branch doesn't, it does explicitly seed core
[15:33] <Chipaca> kenvandine: I don't know what model we use, but all it would take would be for it to have a core18 base
[15:33] <Chipaca> pedronis: maybe you know ^
[15:33] <pedronis> Chipaca: no, a base doesn't make sense for classic images
[15:34] <pedronis> it's only for core images
[15:34] <Chipaca> pedronis: ?
[15:34] <pedronis> what I said
[15:34] <Chipaca> pedronis: 'snap known model' on classic ubuntu 16 says we don't have a base
[15:34] <pedronis> cannot put base: core18 and classic: true in a model
[15:34] <pedronis> Chipaca: true
[15:35] <pedronis> doesn't mean core is the base
[15:35] <pedronis> there's no root base for a classic system
[15:35] <Chipaca> d'oh, i missed trivialSeeding
[15:35] <Chipaca> 	configTs := snapstate.ConfigureSnap(st, "core", 0)
[15:35] <Chipaca> grr
[15:36] <Chipaca> kenvandine: classic always needs core
[15:36] <Chipaca> pedronis: thank you
[15:36] <Chipaca> pedronis: also maybe classic always needing core is a bug
[15:36] <pedronis> it doesn't quite need core
[15:36] <pedronis> that is config
[15:36] <Chipaca> sigh
[15:36] <Chipaca> pedronis: hol' up
[15:36] <Chipaca> pedronis: there's only one place that prints that error message
[15:36] <pedronis> which error message?
[15:36] <Chipaca> pedronis: so we're obviously hitting that code path
[15:37] <Chipaca> pedronis: cannot proceed without seeding “core”
[15:37] <pedronis> Chipaca: to be clear, I'm not saying there are no bugs
[15:37] <Chipaca> :-)
[15:37] <pedronis> just making clear that setting a base
[15:37] <pedronis> is not the answer
[15:37] <Chipaca> yeah, i got you so far
[15:38] <Chipaca> ah, we only do trivial seeding if _no_ seed is there
[15:38] <Chipaca> otherwise we do seeding as normal
[15:38] <Chipaca> and, given you can't set a base for a classic model
[15:38] <Chipaca> you'll always require core
[15:38] <pedronis> atm yes
[15:38] <Chipaca> which means adding snapd makes no sense
[15:39] <Chipaca> so that code in livecd-rootfs-wotsit is wrong, or at least ahead of its time
[15:39] <Chipaca> kenvandine: ^
[15:39] <kenvandine> ok
[15:39] <pedronis> Chipaca: we don't have tests about this
[15:39] <pedronis> classic with just snapd
[15:39] <pedronis> that's why it's not worky
[15:39] <Chipaca> pedronis: well, we do now, in the shape of cd images in the wild
[15:40] <Chipaca> :)
[15:40] <Chipaca> rather expensive tests
[15:40] <pedronis> I wouldn't call it
[15:40] <pedronis> us having a test
[15:40] <pedronis> more, there is a test
[15:40] <Chipaca> make your 'we' bigger and you'll get there
[15:40] <Chipaca> :-p
[15:40] <pedronis> also why livecd-rootfs shouldn't be in the game
[15:40] <pedronis> for 2nd guessing snapd
[15:42] <pedronis> Chipaca: anyway just a matter of having a variant tests/main/classic-firstboot/task.yaml
[15:42] <pedronis> and fixing first boot to do sane things
[15:45] <pedronis> Chipaca: relatedly, this is still wip: https://github.com/snapcore/snapd/pull/6404
[15:50] <Chipaca> I'm going afk for a while, will be back later to EOW properly
[15:55]  * pstolowski pstolowski|afk
[17:58] <Chipaca> you know it's time to EOW when you're having to up the font size of everything
[17:58] <Chipaca> have an excellent weekend, all!
[17:58] <zyga> Chipaca: heh :)
[17:59] <zyga> Chipaca: see you after next week
[17:59] <zyga> Chipaca: enjoy the quiet days
[17:59] <Chipaca> Eighth_Doctor: still waiting on where to send pizza etc
[17:59] <zyga> (when everything is on fire and nobody picks up the phone ;-)
[17:59] <Chipaca> Eighth_Doctor: (but reach me on twitter :) )
[17:59] <Chipaca> zyga: we'll give everybody your tg contact info, no fear
[17:59] <zyga> Chipaca: excellent, I'll be somewhere between here and poland