=== giraffe is now known as Guest33711
[01:47] bluesabre: I'd suggest talking to infinity, as I believe he had the context of previous iterations here
=== DropItLikeItsHot is now known as PityDaFool
=== zyga_ is now known as zyga
[09:28] I am trying to bring some packages from Ubuntu to Debian.
[09:30] downloaded the source. renamed it with _ in place of - and .orig before the format suffix. checked that all necessary files and folders are present in debian/. then ran mk-build-deps -i debian/control. finally dpkg-buildpackage
[09:31] Am I missing any steps? A .deb file is created and I am able to install it too, but I don't see anything of the application, so I think something is wrong.
[09:31] * Faux points at the topic, about how this is off-topic.
[09:35] Faux: Okay. Can you point me to the right IRC channel to ask this question?
[09:36] There are some packaging channels for Debian on OFTC, but I don't know, no.
[09:36] Thank you.
[09:40] #debian-mentors I'd think, if the goal is to get it into Debian proper.
[10:02] hmm, I just now realized that the autopkgtest of chrony does reach out to github
[10:02] I realized it because that test has been failing since the end of May (but was formerly working)
[10:02] is that a temporary thing, or did we shut down more outbound connections intentionally?
[10:02] I know it is not recommended to reach out, but as I said I wasn't even aware of it until now
[10:03] so I need to understand priorities, and that requires seeing what changed in the test infra
[10:10] cpaelzer: There was a bug in cloud-init or something where the proxy environment wasn't being set up properly; try retrying one arch and see if it works now
[10:10] * cpaelzer just got some hope of not having to SRU this everywhere just for the test :-)
[10:10] thanks Laney, will try
[10:14] ubuntu@juju-prod-ues-proposed-migration-machine-11:~$ ssh -o StrictHostKeyChecking=no ubuntu@10.44.40.9 cat /etc/environment
[10:14] PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games"
[10:14] http_proxy=http://squid.internal:3128
[10:14] https_proxy=http://squid.internal:3128
[10:14] no_proxy=127.0.0.1,127.0.1.1,localhost,localdomain,novalocal,internal,archive.ubuntu.com,security.ubuntu.com,changelogs.ubuntu.com,ddebs.ubuntu.com,ppa.launchpad.net
[10:14] bash: warning: setlocale: LC_ALL: cannot change locale (en_GB.UTF-8)
[10:14] that looks OK at least
[10:14] seems to work now; at least the log snippet I see in autopkgtest/running seems to be from after the failure I had
[10:15] :>
[10:15] will check them and, if working, retrigger all of them to avoid several package maintainers needing to work out what's going on in their proposed-migration
[10:16] "Cloning into 'clknetsim'..." LGTM now
[10:17] great
[10:17] thanks for the hint Laney
[10:18] * cpaelzer is shy to just hit retry buttons all over the place without any idea why it should be better
[10:18] That's good; usually if retrying works it means the test is flaky in some way and should be improved anyway
[10:18] (in this case it was a bug in the base image though)
[10:18] yep
=== mhcerri_ is now known as mhcerri
[11:44] can I do in d/control something like
[11:44] Architecture: linux-any, !ppc64el, !s390x
[11:45] cpaelzer, nope
[11:45] cpaelzer, it's a space-separated field I believe, additive only.
[11:46] https://www.debian.org/doc/debian-policy/#s-arch-spec
[11:46] bah no
[11:46] this
[11:46] https://www.debian.org/doc/debian-policy/#architecture
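A minimal illustration of the answer to cpaelzer's question at [11:44]: the binary-package Architecture field in debian/control is a space-separated whitelist with no negation syntax, so excluding ppc64el and s390x means listing the architectures you do want. The package name and the concrete list here are only placeholders; how to pick the list is discussed just below.

    Package: foo
    # not valid: "Architecture: linux-any, !ppc64el, !s390x"
    # instead, enumerate the wanted architectures explicitly:
    Architecture: amd64 arm64 armel armhf i386 mips mips64el mipsel alpha hppa ia64 m68k riscv64 sh4 sparc64 x32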
[11:46] smoser, cyphermox: looking at initramfs-tools in the SRU queue. I can guess at why you're SRUing it, but I see no mention of what you're actually fixing in the linked bug? The bug looks like everything is already resolved.
[11:47] I think you either need to extend the bug to also cover the case you're fixing here, or use a different bug.
[11:47] xnox: thanks, so to achieve the above I have to extrapolate all the potential values that Debian cares about
[11:47] and then drop ppc and s390x
[11:47] cpaelzer, yeah this describes what you can do https://www.debian.org/doc/debian-policy/#architecture
[11:48] Oh, wait.
[11:48] You're additionally doing this for Xenial and Artful?
[11:48] cpaelzer, sadly yes. we had to do that for like open mpi
[11:48] * rbasak is quite confused
[11:52] xnox: I guess I don't need to list more than those listed in https://www.debian.org/releases/stable/i386/ch02s01.html.en
[11:52] because dpkg-architecture -L is an uncfortably long list
[11:52] om
[11:52] :-)
[11:53] it is so uncomfortable that it misses chars
[11:55] cpaelzer, debian arches + ubuntu arches + debian ports (optional) - broken
[11:55] cpaelzer, so clicking buildd logs on tracker.debian.org is a good list https://buildd.debian.org/status/package.php?p=systemd
[11:56] cpaelzer, so skip hurd & kfreebsd; but keep all others, unless you know it to be broken. so something like 16 arches to list, no?
[11:57] amd64 arm64 armel armhf i386 mips mips64el mipsel alpha hppa ia64 m68k riscv64 sh4 sparc64 x32
[11:57] sil2100 hi, for lp #1774120 I asked slashd to review and sponsor my cosmic debdiff, as I think it addresses your concerns; please let me/him know soon if it doesn't. I'll upload to SRUs once it's in cosmic.
[11:57] Launchpad bug 1774120 in ebtables (Ubuntu Bionic) "ebtables cannot be upgraded from 2.0.10.4-3.5ubuntu2 to 2.0.10.4-3.5ubuntu2.18.04.1 on WSL" [Low,In progress] https://launchpad.net/bugs/1774120
[11:58] xnox: yeah going through the debian buildds is good for my case
[11:58] thanks for that hint
[11:58] I can sort arch by arch which one has actually built the .so I look for
[12:05] ddstreet, sil2100 - i'd rather not upload that ebtables SRU, and possibly even revert the fix in ebtables. The bug is not with ebtables, or its init.d script, but that the init.d script was called at all in an environment without PID 1.
[12:05] commented on the bug as such.
[12:05] rbalint, ^^^^
[12:07] rbasak, slangasek - should we change /lib/lsb/init-functions.d/40-systemd to have _use_systemctl=1, always?!
[12:07] tsimonq2, yes, I offered up to blindly merge it if ddstreet prepares something I can just git merge cleanly.
[12:08] because it became clear I am not going to review it, nor anybody else will…
[12:08] rbasak, slangasek - i guess the downside there is that currently some daemons can be started via init.d scripts on WSL; and with that change in place, it will be hard to do so.
[12:12] xnox i think that's a different bug actually, you're talking about the debhelper bug right?
[12:12] ddstreet, no.
[12:13] ddstreet, executing "service foo stop" calls /etc/init.d/foo stop, which sources /lib/lsb/init-functions, which sources /lib/lsb/init-functions.d/40-systemd, and by default redirects the call to systemctl stop foo. Such that init.d scripts are executed and managed as systemd units.
[12:13] ddstreet, the redirection does not happen on WSL; but it does on more normal Ubuntus.
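A simplified sketch of the redirection xnox describes at [12:13]; this is not the actual 40-systemd source, just the shape of the logic, assuming the usual /run/systemd/system check for a running systemd ($script and $action are placeholders).

    # sourced via /lib/lsb/init-functions when /etc/init.d/$script runs
    if [ -d /run/systemd/system ]; then
        # systemd is running: hand "service $script $action" over to systemctl,
        # so the init.d body below never executes
        exec systemctl "$action" "$script.service"
    fi
    # no systemd (WSL, chroots): fall through to the plain init.d code,
    # which is the case being debated above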
[12:14] (lxd, lxc, kvm, baremetal, etc)
[12:14] ok, but that isn't what's causing the stop to fail
[12:14] but sure, if you have a better way to fix it, please do
[12:14] ddstreet, well, at that point when 40-systemd is sourced, it would realise that there is no systemd, and the script would bail / exit 0.
[12:15] ddstreet, meaning that the stop) portions of the init.d script are never executed, the init.d script is inert, and thus does not fail.
[12:15] (ditto for start)
[12:16] ddstreet, the bits that fail should not have been attempted to be executed in the first place - on WSL. Just like they are not executed in chroots.
[12:16] are you adding that as part of the debhelper/systemd bug? should i make the ebtables bug a dup of that?
[12:16] ddstreet: hey! Sorry I missed your ping yesterday, I have a very fragmented week this week
[12:16] ddstreet, i've only added this to the ebtables bug.
[12:16] ddstreet, i have no idea what debhelper/systemd bug you are referring to.
[12:16] ddstreet, url?
[12:16] * xnox is only talking about ebtables
[12:17] ddstreet: I was supposed to look at your patch and actually answer you but then it somehow just eh, slipped
[12:17] https://bugs.launchpad.net/bugs/1748147 i assumed you were referring to <
[12:17] Launchpad bug 1748147 in debhelper (Ubuntu Bionic) "[SRU] debhelper support override from /etc/tmpfiles.d for systemd" [Medium,In progress]
[12:18] xnox if WSL needs a more general 'never run init scripts' fix in 40-systemd, i'll let rbalint take that over, since he has WSL access and I don't
[12:18] ddstreet, nope, that is also all wrong, differently.
[12:18] ddstreet, that would be a fix in 40-systemd on ubuntu.
[12:19] in the systemd package
[12:19] i think
[12:19] and not related at all.
[12:20] xnox as you seem to have the plan to fix this ebtables (or not-ebtables) bug on WSL, can you (or rbalint) take it over?
[12:21] or, we can push the ebtables change in, and fix it how you're suggesting later
[12:22] i have an opinion on what should be done in general. I don't know if it is sensible or not. hence the ping to rbalint etc.
[12:23] what if... for each package... Ubuntu released a jigdo file
[12:24] and you take the base package—the original release for the distro version—and run a bsdiff against the base package and each revision
[12:24] Bluefoxicy, that would be worse than the existing ddebs implementation, which was proven to be too slow to be useful.
[12:25] xnox: ah
[12:25] Bluefoxicy, and there is a better ddebs implementation in the works by apt upstream; maybe talk with juliank
[12:25] xnox: binary patching is slow?
[12:25] Bluefoxicy: binary patching is fast
[12:25] binary diffing is slow
[12:25] oh yeah, no kidding.
[12:25] well, the existing binary patching is slow
[12:26] juliank: I was considering that you can roll back to $ORIG, then patch forward to $NEW
[12:26] Bluefoxicy: here's the spec https://wiki.debian.org/Teams/Dpkg/Spec/DeltaDebs
[12:26] Bluefoxicy, the current ddebs implementation tries to reconstruct the new .deb and then install it, which is slow end-to-end. the new one i believe replaces changed files, or something rather, which is actually usable.
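For reference, the "binary diffing is slow, binary patching is fast" point at [12:25], in terms of the classic bsdiff/bspatch tools; the file names are invented, and this is not the apt upstream implementation being discussed.

    # server side, done once: expensive delta generation between two
    # uncompressed versions of a file
    bsdiff old/usr/lib/libfoo.so.1.2.3 new/usr/lib/libfoo.so.1.2.4 libfoo.delta
    # client side, per upgrade: cheap reconstruction of the new file
    bspatch old/usr/lib/libfoo.so.1.2.3 rebuilt/libfoo.so.1.2.4 libfoo.delta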
[12:26] names are not final
[12:26] juliank: and also that you can assemble the uncompressed archive from its shell and installed files
[12:27] so you can skip downloading much of the original package file by validating the SHA256 of installed files, patch them down to original version, grab the remaining files individually from online
[12:28] I'm not sure what you're saying
[12:28] Let's be clear
[12:28] juliank: are you delta patching in every direction or only from one version?
[12:29] ah you are
[12:29] delta patches are supposed to be release->updates, updates->updates, security->security and so on
[12:29] juliank: nod. So you have to generate a patch from every update to every other update, or chain them and patch several times
[12:30] Bluefoxicy: So essentially, you are saying I should generate two deltas instead: update->base, and then base->new update
[12:31] this would mean that every delta exists in two directions
[12:31] well, the way you have it is viable—delta patches are small, after all
[12:31] hmm. Delta patches don't apply in reverse?
[12:31] no, of course not
[12:31] heh. diffs reverse, binary deltas don't.
[12:31] ok
[12:33] juliank: Mostly, I'm suggesting you can actually generate the full, signed deb for any version by reassembling patched installed files, downloaded individual pristine archive contents, and the hollowed out shell of a signed archive
[12:33] you can't
[12:33] Why not?
[12:33] well, not reliably
[12:33] also you don't want to
[12:33] Reliably because compression algorithms change
[12:33] ah
[12:34] I thought the archives were signed before compression, then compressed
[12:34] And because they are compressed, even if the algo is the same, you have to recompress them, which is slow, and is what kills debdelta
[12:34] yeah you use xz over there and it's slower than hell
[12:35] Bluefoxicy: They are not signed at all. We hash the entire .deb and essentially (transitively) sign a hash of that
[12:35] ah
[12:35] I thought the deb was an archive with two files and a hash on top, all compressed arbitrarily
[12:36] BTW, signing (only) uncompressed content is a bad security model, as it allows exploiting decompressor bugs
[12:36] nod
[12:36] Bluefoxicy: Anyhow, with my delta approach, the delta deb simply contains a tarball of deltas for all installed files
[12:37] rather than a delta of the tarballs
[12:37] or something
[12:37] you can regenerate the original tarball from it
[12:37] you can likely regenerate the original deb (unless compressor changed in the meantime)
[12:37] "If files match dpkg db hashes; pick the delta, otherwise full deb"
[12:37] yes
[12:37] heh
[12:38] So essentially, dpkg records hashes for all files
[12:38] trying to store all the individual files unpacked up on the mirror would be murderous.
[12:38] we calculate the id = hash((path, hash) of all files of a package)
[12:39] (excluding conffiles)
[12:39] so essentially sha256sum(/var/lib/dpkg/info/apt.md5sums)
[12:39] then recalculate the hashes in there, sha256sum the new list
[12:39] and check that they match
[12:39] I guess you could store the individual files for the base version and then chain patch an individual miss instead of getting the whole package
[12:39] but that might also be excessive
[12:39] A couple of people (including me) have looked at offering a mirror based on something like casync, so you get package dedup and strong hashing.
[12:40] Faux: it does not work
[12:40] Not in the mood for discussing it right now.
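A rough shell rendering of the per-package check juliank sketches at [12:37]–[12:39]: only use a delta if the installed files still match what dpkg recorded (the real scheme excludes conffiles). Paths and the hashing scheme here follow the conversation, not any shipped implementation.

    pkg=apt
    # verify installed files against dpkg's recorded hashes
    # (.md5sums paths are relative to /, hence the cd)
    if (cd / && md5sum --status -c "/var/lib/dpkg/info/${pkg}.md5sums"); then
        # identifier over the (path, hash) list, roughly as described above
        id=$(sha256sum "/var/lib/dpkg/info/${pkg}.md5sums" | cut -d' ' -f1)
        echo "unmodified, id=$id -> a delta deb would apply"
    else
        echo "locally modified files -> fall back to the full .deb"
    fi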
[12:40] Faux: And I forgot what my argument was, as I investigated casync for something months ago
[12:40] why does everyone always say stuff Lennart Poettering designed does not work?
[12:41] Bluefoxicy: Oh, I meant the whole block-wise thingy does not work for binaries
[12:41] I know I know :P
[12:41] in any case, this looks good
[12:42] Bluefoxicy: One thing I want to do is find windows in binary files and run the bsdiff algorithm on those windows
[12:42] juliank: do you have a method to transplant kernel binaries and headers with this?
[12:42] that would be nicer.
[12:42] casync has windowing, not blocks.
[12:43] Faux: I vaguely recall adding multiple versions of uncompressed packages or something and it failing to achieve useful results.
[12:43] Note this was last year
[12:43] Bluefoxicy: What do you mean?
[12:44] There are issues, like openjdk being a source package full of gzip files, but generally I found it worked very well without too much prodding.
[12:44] Bluefoxicy: Patch /boot/vmlinuz-4.15.0-20-generic using /boot/vmlinuz-4.15.0-19-generic and friends?
[12:44] juliank: Kernel images don't patch in place; they install a whole new kernel
[12:44] yeah
[12:44] Right
[12:44] I'm not entirely sure yet
[12:44] We could have simple regular expressions to sanitize paths
[12:44] plus kernel modules and headers are huge
[12:45] or we could try to find the closest match
[12:45] but I think path sanitization works best
[12:45] I want to generate deltas without having to unpack the debs I'm deltaing
[12:45] which relies on the fact that the list of packages is ordered
[12:46] eh, I'd think you could say "X file in $PACKAGE is the basis for Y file in this package"
[12:46] s/packages/files/
[12:46] Bluefoxicy: well, yes, you'd say, given new filename, s/vmlinuz-.*/vmlinuz/
[12:46] hm
[12:47] so it would basically treat /boot/vmlinuz-4.15.0-20-generic as /boot/vmlinuz and /boot/vmlinuz-4.15.0-19-generic too
[12:47] you don't want to have to encode this for each new pair of packages
[12:47] nod
[12:47] probably more useful for /lib/modules than /boot
[12:47] since vmlinuz is compressed
[12:47] thus is subject to chaotic entropy
[12:48] Also likely, s/(lib.*.so).*/\1/
[12:48] so you can patch libfoo2 using libfoo1
[12:48] Bluefoxicy: Yes, vmlinuz delta is bigger than vmlinuz
[12:49] what about headers? They're not binary, and diff might work better than bsdiff
[12:49] I think bsdiff works optimally for everything
[12:49] diffs are far bigger
[12:49] fair enough
[12:49] bsdiff is built for diffing binary files, and that's strictly more complex than diffing text files
[12:49] so it performs well on those too
[12:52] we also need to do the same for package names, so linux-$foo-$ver1-generic diffs against linux-$foo-$ver2-generic
[12:52] and so on
[12:52] But I guess we can just do the delta by source package
[12:52] hmm
[12:52] and then for each binary package, find the binary in the old package with the shortest distance in name
[12:53] all I know is I've been waiting forever to not have to download 600MB of crap for a one-line LibreOffice exploit fix
[12:53] essentially find the old package subject to min Levenshtein distance(old name, new name)
[12:54] should be fine I guess
[12:54] otherwise, encode again
[12:54] using regex
[12:54] Bluefoxicy, a .deb may have an arbitrary number of archives, thus we have to hash/sign the whole .deb, not its archives, as then one may be able to append extra archives to it, bypassing security.
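The path-sanitization idea from [12:44]–[12:48], written out as sed expressions; the patterns are lifted from the messages above and are illustrative only, not taken from a real delta tool.

    sanitize() {
        # collapse version-specific parts so old and new files pair up
        sed -E -e 's|vmlinuz-.*|vmlinuz|' -e 's|(lib.*\.so).*|\1|'
    }
    # both become /boot/vmlinuz, so -19 is the basis for the -20 delta
    echo /boot/vmlinuz-4.15.0-20-generic | sanitize
    echo /boot/vmlinuz-4.15.0-19-generic | sanitize
    # /usr/lib/libfoo.so.2.0.0 collapses to /usr/lib/libfoo.so,
    # so libfoo2 can be patched using libfoo1
    echo /usr/lib/libfoo.so.2.0.0 | sanitize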
[12:55] I think when generating deltas we could also just extract file lists from control.tar, and then build pairs using shortest distance
[12:55] this would handle the whole versioning case of kernels, libraries and stuff
[12:56] juliank: is there a way to know with certainty the impact of installing packages?
[12:57] or is it something you can't evaluate without completing the install?
[12:57] what do you mean?
[12:59] let's say I tried to background-upgrade LibreOffice by mounting a FUSE filesystem over / such that if I tried to perform an operation on a file altered by the 30 or 40 packages queued in the update process, it would delay that access and move the package affecting that file to the front
[12:59] would I be able to solve for whether or not a particular file is affected by the update process, or is that strictly unknown until all installations are complete?
[12:59] that sounds scary
[13:00] quick question about the dpkg locking email: does this mean i can run apt on the command line while eg synaptic is running (but not doing anything)?
[13:00] and what does it mean for eg bootstrapping a chroot where i run "dpkg --configure -a" non-interactively from a script in the chroot?
[13:00] yeah, old stuff I thought up in 2006
[13:00] the assumption is that none of the files currently installed might change while the upgrade is running
[13:00] as usual, excluding conffiles
[13:01] ali1234: no you can't, that's the point
[13:01] ali1234: and there is no change with dpkg --configure -a
[13:01] i already can't though so what changed?
[13:02] ali1234: you could
[13:02] ali1234: So the problem is that before we run dpkg, we need to drop its lock so dpkg can acquire it, leaving a short time window where you could in fact run apt
[13:02] if i try to run apt while synaptic is running it will fail because it can't acquire the lock - it has done that for years and years
[13:02] ah, i see
[13:03] so this just closes that race condition?
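To make the locking question concrete (the mechanism is spelled out at [13:05]–[13:06] just below): with the split lock, a frontend that already holds it tells dpkg so via an environment variable. To the best of my knowledge that variable is DPKG_FRONTEND_LOCKED, but treat the exact name as an assumption, since it isn't named anywhere in this log.

    # standalone use, e.g. finishing a chroot bootstrap: nothing changes
    dpkg --configure -a

    # inside a frontend that already holds the frontend lock, dpkg is told
    # not to try to take it again (assumed variable name, see note above)
    DPKG_FRONTEND_LOCKED=true dpkg --configure -a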
[13:03] juliank: I've thought of a lot of scary things, mostly around hiding update processes and protecting the base filesystem
[13:04] ali1234: yes
[13:04] Solving the race condition is all it does
[13:05] got it, thanks
[13:05] And dpkg --configure -a does not change, but I see that this is misleading
[13:05] I mean to say that if you run dpkg manually _in your apt frontend_ while you hold the lock as you should, then you need to tell dpkg that the lock is held already
[13:06] if you just run dpkg on your own, and not as part of any frontend that holds the lock, you of course don't have to set the variable
[13:06] * Bluefoxicy heads off to do other things
[13:07] Laney: I was just checking back on the tests that failed before; in the meantime all arches passed on the two fails they had in proposed
[13:23] ddstreet, i'm ok with the latest proposed patch of yours and disagree with xnox regarding masking a lot of functionality from the app, which i'll detail in a moment in LP: #1774120
[13:23] Launchpad bug 1774120 in ebtables (Ubuntu Bionic) "ebtables cannot be upgraded from 2.0.10.4-3.5ubuntu2 to 2.0.10.4-3.5ubuntu2.18.04.1 on WSL" [Low,In progress] https://launchpad.net/bugs/1774120
[13:24] rbalint ok thnx, i will get that uploaded to cosmic and then sru the change
[13:27] ddstreet, thank you. i guess sil2100 was just busy, this is why he did not comment
[13:46] This week I am a bit unreliable, so to say
[14:36] doko, https://lists.ubuntu.com/archives/ubuntu-release/2018-June/004512.html see the request to the release team to document hints-merge-proposals
[14:36] xnox: you know that https://wiki.ubuntu.com/ProposedMigration is a wiki, right? That anyone, including xnox, can edit? :)
[14:38] rbasak, proposedmigration is owned by the release team; and my request is specifically because there is a lack of documentation; and i do not have the knowledge as to what is supported in hints.
[14:38] rbasak, did you read all of my email? is it not clear that it's unknown what is supported?
[14:44] rbasak: disagreed. that should be done by the release team, or we make the hints owned by the core-devs and then we can document and patch it
[14:46] doko: documentation can be written by anyone. While it'd be nice if the release team does it, I don't think it's appropriate to demand that they do when someone else easily can. We certainly need the release team to make any necessary decisions, but I don't think there are any decisions that need making here.
[14:47] rbasak: no, because I don't know what to document
[14:48] I do (roughly, and I can read the source to make sure), and I'm not on the release team.
[14:48] if you're volunteering, nice!
[14:49] I can't volunteer for everything, sorry.
[14:49] My point is that I don't think it's appropriate to demand work from any particular Ubuntu team.
[14:50] Especially when the work isn't exclusive to that team (anyone can do it)
[14:50] so ubuntu-release enforces things on migration, but doesn't say what needs to be done for migration?
[14:51] They do say what needs to be done. If you ask they'll tell you. You're demanding (unreasonably) that it be documented in a particular (perfectly reasonable) way.
[14:51] When you could just do the documenting yourself!
[14:52] rbasak, currently the branch is owned by them; and only they know what works in that branch; and neither me nor doko nor you know the britney hints syntax as supported by britney as deployed by the release team.
[14:52] I told you, I don't know how things should be done
[14:52] rbasak, or do you?
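Since the argument above is about the hint syntax itself, here are a couple of lines in the general britney hint format. force-badtest and unblock are real britney hint commands, but the package/version values here are invented, and exactly which hints the Ubuntu deployment accepts is the undocumented part being complained about.

    # let a package migrate despite a known-bad autopkgtest result
    force-badtest chrony/3.2-4ubuntu4
    # let a blocked package through
    unblock foo/1.2-3ubuntu1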
[14:52] rbasak, are you on the release team?
[14:52] 15:48 I do (roughly, and I can read the source to make sure), and I'm not on the release team.
[14:53] rbasak, cute. i think the release team can speak for themselves, and reply to my email, and don't need you =)
[14:53] I know the britney hints syntax because I read the code when I wanted to find out how it was done and how to submit something.
[14:53] rbasak, i don't think that needs to be done by everybody; and the release team don't actually accept all the hints that are possible, and have specific requirements imposed by them as to the type of things they accept.
[14:54] Honestly I'm quite astonished that you either don't seem to consider yourself capable of that or think it needs to be somebody else's job to sort it out.
[14:54] i don't think that needs to be done by everybody> agreed - so learn it and document it
[14:54] rbasak, i've submitted a few hint merge requests; and frequently the syntax of my hints was valid, but ineffective as it didn't match anything.
[14:54] xnox: so ask for help and document the result.
[14:54] rbasak, they are not trivial to wildcard, and take a while to see that they didn't work.
[14:55] rbasak, that's what i'm doing lol
[14:55] No, I don't think you are.
[14:55] You're asking that someone else use their crystal ball to learn your edge cases and document those.
[14:56] If someone tries and is successful then great.
[14:56] But I honestly think it'd be far more productive for someone like you to start some skeleton documentation and fill it out with the edge cases you know about to start, or at least ask for clarification on specific edge cases you're unsure about.
[15:30] Dear Kubuntu, Mate, and Gnome flavour folks: Please also fix bug https://bugs.launchpad.net/ubuntu/+source/apt/+bug/1750465 in stable releases.
[15:30] Launchpad bug 1750465 in ubuntu-mate-artwork (Ubuntu Xenial) "upgrade attempting to process triggers out of order (package plymouth-theme-ubuntu-text 0.9.2-3ubuntu17 failed to install/upgrade: dependency problems - leaving triggers unprocessed)" [High,Confirmed]
[15:31] * juliank just raising awareness
[15:36] xnox: hi, how is the next systemd release in bionic going to be versioned?
[15:37] tseliot, with your debdiff stuff included it should be 237-3ubuntu10.2
[15:38] xnox: ok, as I was thinking of including my patches in a temporary PPA. I'll version the package something like 237-3ubuntu10.2~ppa1, so that it doesn't clash with yours
[16:12] If anyone fancies a bit of initramfs-tools work, https://bugs.launchpad.net/ubuntu/+source/initramfs-tools/+bug/1667512 came up on ubuntu-users recently and I'm pretty sure there's an easy fix for it, though I don't have time to prepare it myself. See my comment #14
[16:12] Launchpad bug 1667512 in initramfs-tools (Ubuntu) "update-initramfs hangs on upgrade, dpkg unusable, unbootable system" [Undecided,Confirmed]
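On the versioning exchange at [15:36]–[15:38]: the ~ppa1 suffix works because ~ sorts before everything else in Debian version comparison, which dpkg can confirm directly.

    # exits 0: the PPA build is older than the eventual archive upload,
    # so 237-3ubuntu10.2 will cleanly supersede 237-3ubuntu10.2~ppa1
    dpkg --compare-versions 237-3ubuntu10.2~ppa1 lt 237-3ubuntu10.2 && echo "~ppa1 sorts lower"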
[16:32] cjwatson: hi, do you have time to talk about grub-pc + grub-efi-amd64? https://code.launchpad.net/~sil2100/ubiquity/+git/ubiquity/+merge/348201
[16:33] slangasek: Not quite right now, I need to go out soon. Tomorrow?
[16:33] cjwatson:
[16:33] cjwatson: ok. I'll braindump a little here anyway
[16:34] cjwatson: what I've tried to design here is that the user always gets shim-signed + grub-efi-amd64-signed + grub-efi-amd64-bin + grub-pc + grub-pc-bin; since the binary bits come from grub-efi-amd64-signed + shim-signed in the secureboot case, it seems unnecessary to also have grub-efi-amd64 present to also run grub-install, rather than have it run when the unsigned binaries are updated; if you want grub-pc + grub-efi-amd64 to be made coinstallable, that seems possible but unnecessary
[16:39] slangasek: I think it's more logically coherent for them to be coinstallable, honestly. Otherwise you've set up a situation where in the future some user thinks "hm, I don't use BIOS booting, lemme remove grub-pc", and then their system keeps working fine for a while until some time later it doesn't
[16:39] cjwatson: ok. I've suggested addressing that piece by having grub-efi-amd64-signed depend on grub-pc | grub-efi-amd64 fwiw
[16:40] grub2 maintenance has had quite a few of these kinds of timebombs before and I want to design them out
[16:40] That's a really weird contortion just to avoid removing the Conflicts
[16:40] ok
[16:40] removing the conflicts> they share a conffile currently, the all-important kernel postinst hook
[16:41] anyway, happy to talk more tomorrow :)
[16:41] All right, I can see that that might involve a bit of fiddling. Still think it's the better path
[16:42] That could probably be moved to the existing grub2-common without too much difficulty
[16:43] (Obviously with some care to ensure it doesn't do anything if none of the "owns boot" packages is installed)
[16:43] +1
[16:43] fwiw I suggested moving some of these things (like the /etc/default/grub generation) to grub2-common
[16:43] I don't think it's necessary to move that bit
[16:43] my concern right now is what happens when you try to have both installed on MBR; I'd expect -efi to just fail
[16:44] If both postinsts try to do that work, that isn't significantly different from two successive versions of the same postinst doing it
[16:44] cjwatson: it was to fix an issue caused by not removing the conflicts, etc. -- not necessary
[16:44] Right
[16:44] OK, so you're trying to have the EFI bits installed even if the installer is running on MBR?
[16:45] no, we're trying to have the EFI and MBR bits installed when you do an EFI install
[16:45] which I guess means it should also happen when the installer is running on MBR
[16:45] Well, if that's only a side benefit, then you could just not do that bit
[16:45] installer on UEFI -> grub-pc + grub-efi-amd64; installer on BIOS -> grub-pc
[16:46] sure sure
[16:46] (Also, are you sure grub-pc won't fail on some UEFI systems?)
[16:46] of course not ;)
[16:46] but I'm also not the instigator of it, nor am I actually doing the implementation
[16:46] English is crap, I meant you plural
[16:46] heh
[16:47] well, I can't think of why grub-pc would fail on x86 UEFI; but that doesn't mean it can't
[16:47] how yous doin
[16:48] Writing to arbitrary bits of the start of the disk is probably unspecified on UEFI. I certainly don't know that it *will* fail
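For clarity, slangasek's suggestion at [16:39] ("grub-efi-amd64-signed depend on grub-pc | grub-efi-amd64") would look roughly like this in the signed package's control stanza; purely illustrative, not the actual grub2-signed packaging.

    Package: grub-efi-amd64-signed
    Depends: grub-pc | grub-efi-amd64, ${misc:Depends}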
[16:48] dpb1: Yeah, I use that in spoken English but it still feels weird in print
[16:48] cjwatson: even on a GPT with nothing special you should be able to write to LBA0
[16:48] probably my problem
[16:49] cyphermox: Yeah, maybe it's OK, and I guess youse have time to work out the kinks
[16:49] yu[
[16:49] sibh
[16:50] anyhow, I can certainly work on moving the kernel hook over
[17:50] cjwatson, cyphermox, slangasek: ok, so is the consensus now that it's ok to install both grub-pc and grub-efi-amd64-bin on EFI installations? Or do you actually want to go the mentioned path of making grub-pc and grub-efi-amd64 coinstallable first?
[17:50] i.e. should I trash the MP I prepared?
[17:51] Would be nice if we could get a consensus this week, it sucks to have cosmic unbootable for EFI for so long
[17:54] sil2100: are you talking about #1668148?
[18:01] psusi: no, that's a different thing, it'll be worked on here: https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1766945
[18:01] Launchpad bug 1766945 in ubiquity (Ubuntu) "(EFI on top of legacy install) choosing "replace" or "resize" options in partitioning may lead to an install failure" [Critical,Confirmed]
[18:02] psusi: we're talking about LP: #1775743 now
[18:02] Launchpad bug 1775743 in ubiquity (Ubuntu) "[regression] Cosmic daily images 20180606-11 install but boots only to grub prompt on EFI systems" [Critical,In progress] https://launchpad.net/bugs/1775743
[18:32] sil2100: it's certainly not /possible/ today to install both grub-pc and grub-efi-amd64. And I think the changes to ubiquity were only about not removing grub-pc, so it should still be per se correct to drop that code?
[18:43] slangasek: yeah, I was just asking whether maybe you want the grub-pc-not-being-purged change to wait for when they're co-installable - ok, then I'll poke cyphermox to maybe merge my ubiquity change and then release it to cosmic (or I can release it, but I have no powers over the git branch)
[18:45] sil2100: AFAIK we've done all the other packaging changes such that the ubiquity change can land as-is, and if we want to revisit particular details of the grub packaging, we can do that independently. At the end of the day, we will be installing grub-pc.
[18:46] tsimonq2: what's the status of bug 1777205 in cosmic please?
[18:46] bug 1777205 in python-acme (Ubuntu) "python-acme to start crashing on June 19th" [Undecided,New] https://launchpad.net/bugs/1777205
[18:51] rbasak: Fixed already.
[18:51] tsimonq2: thanks. I updated the bug.
[18:52] rbasak: (The bug description says it's fixed in 0.25.1, which has been in Cosmic since the 14th, ftr.)
[18:52] rbasak: Thank YOU. :)
=== Pharaoh_Atem is now known as Conan_Kudo
=== Conan_Kudo is now known as Pharaoh_Atem
[20:01] ok, so I guess do-release-upgrade is being kept from offering 16.04->18.04 until after 18.04.1 to get the bugs out... but how do you test to find the bugs when it won't even offer with do-release-upgrade -p (check the -proposed pocket)? How about getting it in -proposed?
[20:05] psusi: You may want -d.
[20:11] Odd_Bloke: nope... that goes to 18.10
[20:12] bdmurray: ^^
[20:14] psusi: You need to fix your Prompt= line in /etc/update-manager/release-upgrades; -d should work. http://changelogs.ubuntu.com/meta-release-lts-development
[20:16] psusi: You can read more here: http://www.murraytwins.com/blog/?p=147
[20:17] * Odd_Bloke just ran that in a xenial container and it offered bionic.
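To make bdmurray's pointers at [20:14]–[20:16] concrete; the config file is shown abbreviated to the relevant setting.

    # /etc/update-manager/release-upgrades -- the Prompt= line psusi needed to check;
    # on an LTS, "lts" means only upgrades to the next LTS release are offered
    Prompt=lts

    # with that in place, -d consults meta-release-lts-development and offers
    # 16.04 -> 18.04 even before 18.04.1 is released
    do-release-upgrade -d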
[20:17] bdmurray: right, sorry for highlighting, I confirm as well here that it works as expected
[20:17] but if Prompt=lts is not set, then do-release-upgrade should have already been offering an upgrade, to an obviously earlier release?
[20:18] confirmed this as well ;P
[20:18] everything working as designed
[20:21] bdmurray: ahh, thanks.
[20:36] doko, xnox, rbasak: https://wiki.ubuntu.com/ProposedMigration now suggests creating merge requests for hints ☮ :-)
[23:48] rbalint: thanks!