[05:20] morning === morphis6 is now known as morphis [06:38] mvo: morning [06:39] hey mborzecki === pstolowski|afk is now known as pstolowski [07:03] morning [07:07] pstolowski: hey [07:13] good morning pstolowski [07:14] pstolowski: 7016 has two +1, can I squash it and cherry pick it to 2.40? [07:14] https://github.com/snapcore/snapd/pull/7093 can land easily [07:15] pstolowski: also see my last comment, AIUI the sublevel patch is applied not more often now than before? or am I misreading something? [07:15] mvo: yes! thank you [07:15] pstolowski: thank *you* [07:36] passing a suite selector with ... on the spread command line, those are not ORed apparently [07:38] duh, manual: true :/ [07:41] Hey, trying to connect from my laptop but fighting a network issue [07:41] Somehow cannot tether [08:05] mborzecki: reviewed 6890 again, it needs a 2nd review; John gave it but it has changed a lot since. [08:06] pedronis: great, i'll ping Chipaca when he's around [08:41] re [08:41] yay [09:38] hi, is there a way for a (confined) snap to know the channel it's tracking? [09:39] ackk: no, but we discussed adding that via snapctl [09:39] ackk: perhaps pstolowski knows this? [09:41] I don't remember discussing this in particular [09:42] yep, as pedronis says [09:42] we mainly discussed: checking whether an interface is connected, checking whether there is a new revision, and very recently checking whether there's a missed/pending auto-refresh because an app was running [09:43] I think it's not totally out of the question to return this info in the 2nd of that list [09:43] but I would like to understand the use case [09:44] pedronis, in maas' case, we have an option to deploy a machine as a maas rack controller, which installs the snap. ideally we want the same version as the currently running maas. of course there's no guarantee that the channel doesn't have a newer revision, but pointing the new machine to the same channel is the best option [09:45] ackk: you want to use cohorts for that [09:45] that's what they are designed for [09:45] pedronis, how do they work? [09:45] they make sure a cluster always uses the same revision [09:46] ackk: https://forum.snapcraft.io/t/managing-cohorts/8995 they will be in 2.40 [09:46] * ackk reads up [09:47] pedronis, so the idea is you create a cohort on machine A ("pinning" the version of maas), deploy machine B and install maas using the cohort, delete the cohort? [09:48] ackk: yes, you don't delete cohorts though [09:48] they don't take space in the store [09:48] pedronis, but they will prevent both machines from getting updated if a new snap is released, right? [09:48] pedronis: we did discuss checking which channel is being tracked in Lyon [09:49] zyga: I don't remember, maybe in a conversation I was not in, or maybe it was before I arrived [09:49] pedronis: perhaps in a hallway but I'm sure we had that conversation [09:49] I don't recall that detail [09:49] ackk: they will prevent the machine from getting updated for 90 days, then they update but they will again update to the same revisions [09:50] pedronis, ok. so if we don't want to hold the update, just to get them on the same channel, deleting is fine. correct? [09:50] ackk: deleting what?
[09:50] pedronis, the cohort after installing machine B [09:50] ackk: there is no deleting of cohorts [09:51] sorry, i mean, leaving it [09:51] ackk: no, if you leave it [09:51] they will potentially go out of sync [09:52] if you want to refresh you do the same [09:52] get a new cohort [09:52] on a machine [09:52] and refresh again on the others using that cohort [09:52] mborzecki: found one more leaky test https://github.com/snapcore/snapd/pull/7091/commits/bace705b928d065a61870cf16953b652ad5c576a [09:53] pedronis, but you have to leave the cohort to refresh, right? [09:53] ackk: no [09:53] zyga ping [09:54] hey ondra [09:54] ackk: you do snap refresh --cohort= [09:54] zyga I have a machine which just went into a funny state [09:54] zyga opengrok-2 opengrok.tomcat[3188]: cannot locate base snap core18: No such file or directory [09:54] zyga ut [09:54] zyga it's classic ubuntu 18.04 [09:54] ackk: you can switch from cohort to cohort without leaving using those ops [09:54] ondra: is core18 mounted? [09:55] pedronis, oh, I see, I thought create-cohort would be related to what you have, I see it's not [09:55] ondra: mborzecki reported a weird system state after a systemd update today [09:55] zyga it has not rebooted for weeks [09:55] zyga it's mounted, but there is no symlink to 'current' [09:55] ondra: but systemd updates in place [09:56] ondra: oh, that's curious [09:56] ondra: looked like systemd lost track of loopback devices and where they were mounted after it was daemon-reexec'ed during the update [09:56] ondra: can you check snap changes? [09:56] mvo: ^^ [09:56] pedronis, I see, thanks [09:56] mvo: current went away on ondra's machine [09:56] zyga yep 86 Error today at 07:04 UTC today at 07:04 UTC Auto-refresh snap "core18" [09:57] zyga https://paste.ubuntu.com/p/G4rT485kFr/ [09:57] zyga hmm Error today at 07:04 UTC today at 07:04 UTC Make current revision for snap "core18" unavailable [09:58] zyga I did not update systemd for ages there [09:58] ondra: can you show snap tasks for that change please? [09:58] zyga this machine is running a web service, so I did not touch it for several weeks/months [09:59] ondra: some updates auto-apply via the security pocket [09:59] zyga do you mean that pastebin I sent? [09:59] oh, looking [09:59] ondra: your fs is broken [09:59] ondra: is / writable? [09:59] zyga it's classic ubuntu [10:00] ondra: is it writable? [10:00] looks like it switched to read-only mode [10:00] zyga looks like [10:00] I was able to touch a file in home [10:00] zyga and root [10:01] actually, the error is curious [10:01] https://www.irccloud.com/pastebin/CEml5ETz/ [10:01] er [10:02] that's the $SNAP_DATA data-set [10:02] is /var/snap mounted on another partition? [10:02] zyga it is :) [10:03] is that partition read-only? [10:03] zyga nope writable [10:04] ondra: then I don't know [10:04] ondra: check system logs for around 2019-07-12T07:04:46Z [10:04] maybe something happened then? [10:06] zyga no, I could only see that read-only error there [10:07] zyga so a manual 'snap refresh core18' went fine and fixed it [10:07] ondra: which cloud provider is this? [10:11] pstolowski: thanks for 7096! really nice to see how straightforward this is now [10:11] mvo: it became so much simpler with nulls internally [10:12] pstolowski: cool stuff!
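A rough sketch of the cohort workflow pedronis describes above (09:45–09:54), assuming the snapd 2.40 cohort commands from the linked forum post; the snap name is taken from ackk's MAAS example and the key placeholders are illustrative:

    # on machine A: ask the store for a cohort key that pins the current revision
    snap create-cohort maas
    # on machine B: install the snap locked to that cohort
    sudo snap install maas --cohort="<key printed on machine A>"
    # later, to move the whole cluster forward in lockstep: mint a new key on one
    # machine and refresh every member against it (no need to leave the old cohort)
    snap create-cohort maas
    sudo snap refresh maas --cohort="<new key>"

There is no delete step; per pedronis above, machines left in a cohort still refresh after 90 days, but again to a revision shared across the cohort.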
[10:12] i could throw away a large chunk of the new code [10:13] zyga you do not want to know, canonistack :D [10:13] zyga I blame it on cosmic rays [10:13] pstolowski: with https://github.com/snapcore/snapd/pull/7096/files we could look at tests that set stuff to unset it [10:14] I have a few for sure [10:14] thanks for that! [10:17] pstolowski: https://github.com/snapcore/snapd/pull/7096#pullrequestreview-261176295 [10:18] zyga: thanks! [10:20] pedronis: fwiw, fyi, https://bugs.launchpad.net/ubuntu/+source/snapd/+bug/1828500 [10:21] pstolowski: one more https://github.com/snapcore/snapd/pull/7096/files#r302922240 [10:21] Chipaca: we need to loop in whoever is making those images [10:22] pedronis: loop in, yes. With steel wire around. [10:22] pedronis: assuming my suspicion is correct [10:22] which I suspect it is :-) [10:22] zyga: right, thanks! [10:22] anyhow, time for a break [10:22] * Chipaca breaks things [10:27] this is interesting https://forum.snapcraft.io/t/too-early-for-operations/12243/8 [10:27] shouldn't core be listed there? [10:28] mborzecki: yes [10:28] whoever makes that seed needs to fix it [10:29] hm that's 19.10 :) at least it hasn't been released yet [10:30] mvo: do you know how seed.yaml is generated for those images? [10:31] mborzecki: perhaps manually [10:31] mborzecki: many people don't know about snap prepare-image and use a page full of shell to do the "equivalent" [10:31] given that there are no snap IDs there I bet it is the shell version [10:32] mborzecki: what image exactly? [10:33] mvo: see the topic, someone installed 19.10 from a daily image, seed.yaml looks broken [10:33] mborzecki: :( I think those seeds are generated "manually" by some livecd-rootfs shell. we can ask foundations (cc sil2100) how the 19.10 daily image seed.yaml is generated and if it can be fixed [10:34] we are trying to move them to prepare-image --classic [10:34] but given the complexity of livecd-rootfs, it is not trivial [10:40] Chipaca: I just added the seed.yaml from a hyper-v image [10:40] ref: https://bugs.launchpad.net/ubuntu/+source/snapd/+bug/1828500 [10:46] diddledan: thanks! could you do the followup, ie sudo snap info --verbose /var/lib/snapd/seed/snaps/*.snap | grep base: ? [10:48] done [10:49] diddledan: who creates these images, do you know? [10:49] I don't [10:50] looks like willcooke might know as he posted the blog entry about them https://ubuntu.com/blog/optimised-ubuntu-desktop-images-available-in-microsoft-hyper-v-gallery [10:51] willcooke: can we talk about QAing stuff :/ [10:52] I can try the hyper-v image later if anyone needs me to [10:53] does snapd wait for the user to log in before doing any pre-seed? I'm wondering if the way those images log you in is confusing matters - it uses XRDP over a Hyper-V internal socket [10:54] diddledan: no it does not [10:54] ok, ignore that then :-p [10:54] * diddledan throws more random at the wall until stuff sticks :-D [10:54] diddledan: it's simple, the seed.yaml needs to be self-contained [10:54] Chipaca, it has been working previously, because things like calc are snaps. I have a feeling we have talked about this problem previously, but kenvandine will need to comment further. [10:54] diddledan: if a snap in the seed needs core18, core18 needs to be in the seed [10:55] aha. so the seed.yaml is missing core18!
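For context, a minimal sketch of the shape a self-contained classic seed.yaml takes, per Chipaca's point above that any seeded snap needing core18 must have core18 seeded alongside it; the channels, revisions and file names here are made up:

    # /var/lib/snapd/seed/seed.yaml (illustrative)
    snaps:
      - name: core        # classic images also still seed core, see the 15:2x discussion below
        channel: stable
        file: core_7270.snap
      - name: core18
        channel: stable
        file: core18_1066.snap
      - name: gnome-calculator
        channel: stable
        file: gnome-calculator_406.snap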
[10:55] willcooke: calc and things have needed core18 since shortly after 18.04 [10:55] willcooke: those images have gone out, and we blogged about them, without them working: the _seeded_ snaps have core18 [10:55] that makes sense, because core did get installed by the pre-seed [10:55] this isn't a weird things-got-updated-after-image-baking, the image is baked and has not been tested [10:56] willcooke: thus me asking about QA [10:57] willcooke: you have the same issue in dailies fwiw, if that's also 'under' you [10:57] willcooke: https://forum.snapcraft.io/t/too-early-for-operations/12243?u=chipaca [10:58] (i'm less concerned about these because those are explicitly not qa'd) === ricab is now known as ricab|brb [11:00] Chipaca, kenvandine knows about this, he'll be online in an hour or so I expect [11:01] ok [11:01] willcooke: both of these? [11:02] Chipaca, yes, I think it's the same problem, and I have a feeling that he's spoken to someone in your team about this recently. I thought he'd worked around it, but it would seem not [11:03] willcooke: I'll wait and talk with kenvandine when they're around, then [11:03] my view is very snapd-centric but I don't see what there's to workaround :-) [11:07] I've been seeing a weird behavior recently. At times some snaps (maybe it's actually just core18) show up in nautilus as removable devices. If I click on it it gets mounted under /media and I see the content, but the file is not actually there: https://paste.ubuntu.com/p/9DMCxHR3RN/ [11:09] * zyga sees that as more rationale to mount snaps in dedicated namespaces only [11:10] zyga, I don't understand how nautilus finds it, the file is not there but I can mount/umount it multiple times and it actually works [11:11] ackk: nautilus probably tracks all mount ops [11:14] zyga, yeah but the snap is gone, and it's not mounted anymore [11:14] ackk: wait until you notice using tab completion on snap makes devices flicker in nautilus [11:14] ackk: *that* will do your head in [11:14] Chipaca: whaaat? [11:14] wat? [11:15] not all the time, sadly [11:15] zyga: rawhide cloud image https://paste.ubuntu.com/p/FTTpjJv2Hj/ hm doesn't look like only v2 === ricab|brb is now known as ricab [11:15] mborzecki: perhaps not switched by default yet, the plan is to by release though [11:16] zyga: mhm [11:19] zyga: ok, on cgroup2 now only [11:19] mborzecki: woot [11:19] mborzecki: I'm fixing leaky tests [11:19] mborzecki: found a good way now [11:19] zyga: i mean the image is switched to unified hierarchy https://paste.ubuntu.com/p/YzcFg4rD7z/ ;) [11:20] mborzecki: ? [11:20] mborzecki: did you switch yourself [11:20] mborzecki: or did you update? [11:20] mborzecki: I like one thing about v2 [11:20] mborzecki: run mount [11:20] mborzecki: see how short that is? :D [11:20] zyga: switched it myself, this the yesterday's compose [11:21] zyga: shorter but not super short :) [11:22] mborzecki: how long is it? 
[11:25] zyga: https://paste.ubuntu.com/p/sbfygMW6Hr/ [11:25] * zyga sees more bpf [11:26] heh [11:27] people seem to use some automation to track upstream releases for aur packages, mvo uploaded a release 2h ago, 40 minutes ago got an email from aur that the package has been flagged already [11:31] god, why is apt removing lxd all the time [11:31] mvo: how can I check that a package is installed manually vs auto-installed [11:41] zyga: apt show lxd [11:41] zyga: watch out for "APT-Manual-Installed: yes" [11:41] mvo: thank you [11:41] mvo: tests are wreaking havoc on the lxc preinstalled in the cloud image [11:41] mvo: because unrelated apt-get autoremove other-package # removes lxd [11:43] fun [11:43] apt install xld [11:43] apt install lxd [11:43] will set lxd to manually installed [11:43] yeah, I'm just double checking the initial state [11:43] the qemu images don't have lxd [11:43] but cloud images do [11:44] * pstolowski lunch [11:45] 7k closed PRs [11:46] * mvo is impressed [11:55] mvo: so, I don't see the manual flag [11:55] mvo: but apt autoremove doesn't remove it [11:56] https://www.irccloud.com/pastebin/qI2YEpuO/ [11:58] Chipaca: also https://forum.snapcraft.io/t/too-early-for-operations/12243/ is a daily 19.10 image [11:58] hmmmm [11:59] gosh [11:59] what is making this broken :. [12:01] zyga: which thing? [12:08] Chipaca: tests installing and removing packages [12:08] I don't understand it yet [12:08] but some remove operations remove lxd [12:08] why, don't know [12:08] fighting spread to give me shell before changing the image much [12:09] usability of spread is sometimes lacking [12:10] zyga: that means it's used by something [12:11] mvo: is that apt show line going to show me this information reliably, that it is auto-installed? [12:11] mvo: I want to understand what is breaking the state (changing it from manual to auto) [12:11] * zyga is extremely fed up with the heat [12:12] zyga: maybe we can have a quick chat? I'm not sure I have enough context but there are a bunch of debug options available to figure out why something gets removed [12:12] zyga: do you have a link to a log or something? [12:12] mvo: I'm tired, need a break [12:12] mvo: I will look at this later [12:14] ackk: not sure if you saw this from yesterday: 17:24 < jdstrand> ackk: fyi, https://people.canonical.com/~jamie/snapd-daemon-user/ [12:14] zyga: yeah, let's just talk after the standup [12:20] zyga: broken snaps look like a larger problem with systemd/5.2 kernel [12:20] zyga: just got the same state after a reboot [12:22] zyga: https://paste.ubuntu.com/p/QMH66ZXcNW/ [12:24] arch package updated [12:25] drat, late for lunch [12:25] * Chipaca runs [12:49] * Chipaca om noms [12:50] * Chipaca is on fire [12:50] * Chipaca makes a note to use fewer chillies next time [13:01] ugh... [13:01] * Chipaca omw [13:01] https://twitter.com/hughsient/status/1149663770026733571 [13:01] https://blogs.gnome.org/hughsie/2019/07/12/gnome-software-in-fedora-will-no-longer-support-snapd/ === ricab_ is now known as ricab|lunch [13:03] I was particularly perturbed by him saying we're at war [13:08] "One ISO consortium member asks whether they should remove hydrogen engine support from the ISO car as they feel BMW is not playing fair." <-- that's the crux, it is being used as a punishment for daring to consider alternatives [13:10] and let's be clear.
the investigation is barely 20 days old [13:15] diddledan: imo, having something that isn't g-s for the non-gnome flavors would be useful anyway [13:15] right now, there's nothing available for DEs that don't have their own software management frontend [13:22] like MATE, for example [13:24] diddledan: I'm not happy that I'm being caught in the crossfire though :( [13:27] zyga: added a topic about mount problems with 5.2 I see here: https://forum.snapcraft.io/t/snap-mounts-broken-with-kernel-5-2-and-systemd-242/12272 [13:28] thank you, looking [13:31] Eighth_Doctor: is there anything i can do to help [13:31] cuddles? [13:31] Eighth_Doctor: even if it's just something like, send you pizza or flowers or beer or something [13:31] Chipaca: I don't know [13:31] haha [13:31] mborzecki: thank you for reminding me about 2.40; I'll update openSUSE and look at Debian [13:32] zyga: is 2.40 out now? [13:32] Eighth_Doctor: yes [13:32] joy... [13:32] https://snapcraft.io/core [13:33] alright, since hughsie is being a jerk, I guess I have to package gnome-software-snap separately [13:34] Chipaca: what makes this situation worse is that he's trapped me with the snap store thing [13:34] I don't know if I dare bring up this to FESCo or the Council, because he might try to use it to get snapd removed from the distro entirely [13:34] * diddledan wants 2.41 already :-p ('cos it'll have my change from https://github.com/snapcore/snapd/pull/6876) [13:35] Chipaca: he put that in the changelog entry for justification: https://src.fedoraproject.org/rpms/gnome-software/c/ef1746da8c7613c6bd29d9b9bbfbe95c4291bc85 [13:37] Eighth_Doctor: *hugs* [13:37] surely all this throwing of toys is just gonna force Ubuntu to not use gnome software ever again? [13:37] to me, it looks like he's just unhappy he can't have flathub [13:38] that's what it seems like to me, but he says he was already told Ubuntu wasn't going to [13:39] I think I've handled this fairly well, all things considered: https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/thread/O4CMUKPHMMJ5W7OPZN2E7BYTVZWCRQHU/ === msalvatore_ is now known as msalvatore [13:45] Eighth_Doctor: oh man [13:45] Eighth_Doctor: changelog entry [13:46] zyga: yup [13:46] Eighth_Doctor: yeah, it feels like a lot of hurt feelings now [13:46] mvo: btw #6977 needs your re-review [13:47] Eighth_Doctor: thank you for participating in that discussion, I feel that it is unfortunate it ended up like that and that some heat-of-the-moment-commit-messages will be regretted eventually [13:47] but what's done is done [13:47] pstolowski: aha, nice - thank you! 
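Circling back to the lxd/autoremove puzzle from earlier (11:31–12:11), a small sketch of how one could check and pin apt's auto/manual flag before running the tests; lxd is just the package from that discussion:

    # is lxd considered manually installed?
    apt-mark showmanual | grep -x lxd
    # the field mvo points at above
    apt show lxd 2>/dev/null | grep APT-Manual-Installed
    # pin it as manual so an unrelated 'apt-get autoremove' won't take it out
    sudo apt-mark manual lxd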
[13:48] zyga: I guess I'm packaging gnome-software-snap separately now [13:48] god damn all of this [13:49] /o\ [13:49] Eighth_Doctor: hopefully it's smooth as long as g-s doesn't break the a[pb]i [13:49] Eighth_Doctor: sorry for the trouble [13:49] mborzecki: ABI breaks every release [13:50] that's why no one does out-of-tree plugins [13:55] https://twitter.com/hughsient/status/1149677435933224960 === ricab|lunch is now known as ricab [14:12] * cachio afk [14:40] mvo: https://git.launchpad.net/~ubuntu-core-dev/ubuntu-seeds/+git/ubuntu/commit/?h=eoan&id=6f3707c8dd8fdc7c4c0ca1eeff428fa0e4678db6 [14:41] \o/ [14:41] awesome, kenvandine :-) [14:41] kenvandine: note you also need to have 'core' [14:42] Chipaca: iirc livecd-rootfs pulls in core explicitly [14:42] dunno if the new logic thing does that [14:42] or was [14:42] right [14:42] you need to have both now [14:42] kenvandine: yesterday's daily did not [14:42] right [14:42] kenvandine: that's what the forum issue was about [14:42] looks like this has been broken for a while [14:42] kenvandine: the launchpad one was a missing core18 [14:42] kenvandine: thank you! [14:43] kenvandine: what project should I assign the launchpad one to? [14:43] livecd-rootfs i guess [14:43] since according to this commit it should be explicitly pulling in core18 [14:44] https://bugs.launchpad.net/ubuntu/+source/livecd-rootfs/+bug/1828500 [14:44] done :) [14:44] the commit is `- * core18` - the commit explicitly removed core18 from the list [14:44] right [14:45] but the commit message said they did that because livecd-rootfs explicitly adds it [14:45] kenvandine: what QA is done on these images? how did we get to make a blog post about an image that was broken in this way? [14:45] kenvandine: talking about the hyperv ones in particular [14:45] we manually tested the hyperv images, so i don't know what happened there [14:45] Chipaca: remember i had run into a similar problem with this? [14:46] and we figured out what was needed to get the seed to work [14:46] kenvandine: yes which is why i assumed there was some sort of automated qa to catch it [14:46] when i was creating the 19.04 images [14:46] 18.05 already had these issues (from hwe peeps) [14:46] or was it 18.06 [14:46] and it was working, so i'm really confused as to how what ended up on partner-images is broken [14:46] anyway, yeah [14:46] unless it wasn't the right image that was copied there [14:47] which could b [14:47] +e [14:47] at the time we were manually moving files around and had a series of broken images [14:48] we're just now getting a more formalized process as foundations is taking over creating those images [14:54] Chipaca: so i guess we should actually revert that commit and add core18 to the bionic seed? [14:54] I can't find a corresponding commit (removing core18) for the bionic branch - it looks like core18 was never added in the first place [14:54] right [14:54] it wasn't [14:55] livecd-rootfs was supposed to be handling that [14:57] well it seemingly does for the downloadable isos..
somehow the hyper-v ones were different (unavoidably they need a separate spin because they have the xrdp changes) [14:59] mvo: this is the pastebin I mentioned in the standup (about core18): https://pastebin.canonical.com/p/hVfzbMmwm2/ [15:01] bah @ private pastes :-p [15:02] how's a nosey busybody gonna watch what ya'll up to if you keep it private ;-p [15:02] diddledan: haha - sorry for that, its just about testing stuff [15:02] for "nosey busybody" read: diddledan [15:04] https://code.launchpad.net/~ken-vandine/ubuntu-seeds/+git/ubuntu/+merge/370061 [15:04] https://code.launchpad.net/~ken-vandine/ubuntu-seeds/+git/ubuntu/+merge/370062 [15:04] https://code.launchpad.net/~ken-vandine/ubuntu-seeds/+git/ubuntu/+merge/370063 [15:05] heh. launchpad is having a brainfart [15:05] .. and it's back [15:06] will need to make sure to check the images this produces that explicitly adding core18 doesn't somehow conflict with anything livecd-rootfs does [15:07] I'm thinking about the non-hyper-v cases here where we had the right thing™ already [15:08] did something changed in recent snapds wrt /proc/PID/mounts? [15:10] diddledan: we didn't though, the latest 19.10 isos have the same problem [15:10] oh ok.. [15:10] it's not just hyperv [15:10] the released 19.04 and 18.04.2 isos were fine [15:11] I figured they were fine 'cos bionic and cosmic seemingly did the right thing [15:11] ackk: what are you seeing? [15:11] you need mount-observe IIRC? [15:11] zyga, denials on those files. not sure if they're causing issues, but I don't think they were there before [15:12] ackk: the mount table is a "mild information leak" so it was always denied without mount-observe AFAIR [15:12] zyga, I see [15:12] ackk: I think nothing has changed there recently [15:14] zyga, thanks [15:17] kenvandine: the latest 19.10 isos have a different problem, maybe, not the same [15:17] kenvandine: as they have core18, and they don't have core [15:17] ah... actually i think they wanted to drop core and add the snapd snap? [15:18] they tried that late in disco and found problems [15:18] Chipaca: so maybe attempting the same for eoan [15:18] mvo: ^^ do you recall that? [15:18] kenvandine: in a meeting right now [15:18] ok [15:21] kenvandine: that will only work if you have a model and the model has a base [15:21] ok, i know that's something they wanted to do. [15:22] kenvandine: given that we're getting «cannot proceed without seeding “core”» [15:22] kenvandine: then you're missing one of those two things :-) [15:22] s/you/we/ [15:22] s/you/we/g [15:25] kenvandine: to be clear, you'd only get that message in three scenarios: [15:25] dammit! [15:25] kenvandine: to be clear, we'd only get that message in three scenarios: [15:25] 1. we don't have a model (wut) [15:25] 2. the model doesn't have a base [15:25] 3. 
the model has a base that is explicitly set to "core" [15:26] that's because that message comes from installSeedEssential [15:26] which happens before the rest of the seeding proceeds [15:26] https://git.launchpad.net/ubuntu/+source/livecd-rootfs/tree/live-build/functions?h=ubuntu/eoan#n434 [15:28] kenvandine: yeah, the "is the model ok with this" isn't in the picture there, is it [15:28] right [15:29] snap_prepare_assertions does look into the model, so maybe it can extract this info to feed into the seeding logic [15:29] but, dunno [15:30] anyway, imma go break things for a bit [15:30] * Chipaca takes a break [15:32] looks to me like the disco branch has the same logic, so expects to be able to seed snapd instead of core [15:33] but the bionic branch doesn't, it does explicitly seed core [15:33] kenvandine: I don't know what model we use, but all it would take would be for it to have a core18 base [15:33] pedronis: maybe you know ^ [15:33] Chipaca: no base doesn't make sense for classic images [15:34] it's only for core images [15:34] pedronis: ? [15:34] what I said [15:34] pedronis: 'snap known model' on classic ubuntu 16 says we don't have a base [15:34] cannot put base: core18 and classic: true in a model [15:34] Chipaca: true [15:35] doesn't mean core is the base [15:35] there's no root base for a classic system [15:35] d'oh, i missed trivialSeeding [15:35] configTs := snapstate.ConfigureSnap(st, "core", 0) [15:35] grr [15:36] kenvandine: classic always needs core [15:36] pedronis: thank you [15:36] pedronis: also maybe classic always needing core is a bug [15:36] it doesn't quite need core [15:36] that is config [15:36] sigh [15:36] pedronis: hol' up [15:36] pedronis: there's only one place that prints that error message [15:36] which error message? [15:36] pedronis: so we're obviously hitting that code path [15:37] pedronis: cannot proceed without seeding “core” [15:37] Chipaca: to be clear, I'm not saying there are no bugs [15:37] :-) [15:37] just making clear that setting a base [15:37] is not the answer [15:37] yeah, i got you so far [15:38] ah, we only do trivial seeding if _no_ seed is there [15:38] otherwise we do seeding as normal [15:38] and, given you can't set a base for a classic model [15:38] you'll always require core [15:38] atm yes [15:38] which means adding snapd makes no sense [15:39] so that code in livecd-rootfs-wotsit is wrong, or at least ahead of its time [15:39] kenvandine: ^ [15:39] ok [15:39] Chipaca: we don't have tests about this [15:39] classic with just snapd [15:39] that's why it's not worky [15:39] pedronis: well, we do now, in the shape of cd images in the wild [15:40] :) [15:40] rather expensive tests [15:40] I wouldn't call it [15:40] us having a test [15:40] more, there is a test [15:40] make your 'we' bigger and you'll get there [15:40] :-p [15:40] also why livecd-rootfs shouldn't be in the game [15:40] for 2nd-guessing snapd [15:42] Chipaca: anyway just a matter of having a variant of tests/main/classic-firstboot/task.yaml [15:42] and fixing first boot to do sane things [15:45] Chipaca: relatedly, this is still wip: https://github.com/snapcore/snapd/pull/6404 [15:50] I'm going afk for a while, will be back later to EOW properly [15:55] * pstolowski pstolowski|afk [17:58] you know it's time to EOW when you're having to up the font size of everything [17:58] have an excellent weekend, all!
[17:58] Chipaca: heh :) [17:59] Chipaca: see you after next week [17:59] Chipaca: enjoy the quiet days [17:59] Eighth_Doctor: still waiting on where to send pizza etc [17:59] (when everything is on fire and nobody picks up the phone ;-) [17:59] Eighth_Doctor: (but reach me on twitter :) ) [17:59] zyga: we'll give everybody your tg contact info, no fear [17:59] Chipaca: excellent, I'll be somewhere between here and poland
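As a footnote to the seeding thread above (15:21–15:42), a small sketch combining the two inspection commands that came up, for checking why a classic image hits the "cannot proceed without seeding core" path; the grep pattern is just an illustration:

    # show the model assertion; a classic model has classic: true and no base,
    # which is why first boot still expects core in the seed
    snap known model
    # list what the seed actually ships and which bases the seeded snaps need
    sudo snap info --verbose /var/lib/snapd/seed/snaps/*.snap | grep -E '^(name|base):'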