[00:11] <johnnyfive> Is it possible to have the hashes and an InRelease file at the component level of a repo, as opposed to the .com/dists/ubuntu/Release file?
[00:12] <cjwatson> Nothing to stop you preparing such a thing, though some assembly would be required.
[00:13] <cjwatson> https://wiki.debian.org/DebianRepository/Format may be helpful.
[00:13] <johnnyfive> cjwatson, the question is: will apt handle it the same? I'm about to write the logic, but was hoping someone would have some insight before I spend all the time
[00:14] <cjwatson> Sort of.  You'd have to write an explicit path in sources.list, since apt special-cases the dists/$suite/InRelease lookup.
[00:14] <cjwatson> The obvious question would be why bother ...
[00:16] <cjwatson> (It's a fair bit of work, and doesn't seem to buy much)
[00:18] <johnnyfive> Yeah, I'm not even sure how to explain the situation without writing a book. Basically we serve custom-compiled software packages that are drop-in replacements (in this case for ubuntu/debian), and we treat each component as an independent object. There's reasons beyond that, but that's the gist of it.
[00:18] <johnnyfive> Thanks cjwatson
[00:19] <johnnyfive> by "custom compiled" I mean the entire public repo, but compiled differently
[00:19] <cjwatson> johnnyfive: Right, but none of that requires changing the layout.
[00:19] <cjwatson> johnnyfive: It's certainly possible to have pretty much whatever layout you want, though; it can just end up looking a bit odd in sources.list.
[00:20] <johnnyfive> I see. Yeah, trying to avoid non-standardness from the client's perspective.
[00:20] <cjwatson> (you tend to end up writing "./" in place of the component)
[00:20] <johnnyfive> ah
[00:20] <cjwatson> johnnyfive: You could just have a bunch of separate repositories.
[00:21] <johnnyfive> cjwatson, ya that's an option.
[00:21] <johnnyfive> and might be what we do.. we'll see. thank you again
[00:21] <cjwatson> johnnyfive: Anyway, InRelease lists the relative paths to the other index files that make up the repository, so you have some flexibility in how that's laid out if you want to use it.
[00:22] <cjwatson> np
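For reference, the flat-repository layout cjwatson alludes to (writing "./" in place of the suite and component) looks roughly like this in sources.list; the URL is hypothetical:

```shell
# /etc/apt/sources.list — flat repository, no dists/$suite tree.
# apt fetches InRelease and Packages directly relative to the URI,
# so "./" stands in for both the suite and the component.
deb https://repo.example.com/custom/ ./
```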
[06:59] <RAOF> Grump.
[06:59] <RAOF> Why has snapd suddenly decided to die?
[07:01] <RAOF> ”error: cannot parse /proc/self/mountinfo: incorrect number of tail fields, expected 3 but found 5”. Hrm. This is probably due to me not running an Ubuntu kernel?
[07:12] <Faux> That'd be my guess, yes.
[07:16] <infinity> RAOF: https://github.com/snapcore/snapd/pull/4969
[07:17] <infinity> RAOF: Might want to dogpile on that if it fixes fstab but not mountinfo (but maybe the same codepath)
[07:19] <infinity> RAOF: Oh, the commit message mentions both, so yeah.
[07:21] <RAOF> …bcachefs /dev/nvme0n1:/dev/sda3…
[07:21] <RAOF> I suspect things are confused by the “:” in there.
[07:21] <infinity> RAOF: Perhaps.  Perhaps not.  But that commit just makes it skip over entries it doesn't like instead of exploding.
[07:22] <infinity> Which seems entirely reasonable, given that I can't figure out why snapd needs to parse fstab/mountinfo in the first place. :P
[07:23] <infinity> RAOF: I imagine rolling back to the artful-updates version until the above lands should do the trick.
[07:24] <infinity> Or snag an older bionic binary from the librarian.
[07:24] <RAOF> infinity: I think snapd is parsing mountinfo in order to do crazy things in the presence of NFS.
[07:24] <RAOF> Yah.
[07:24] <infinity> RAOF: Oh, I'm sure it has its reasons.  I'm just not sure I'd agree with them if I dug into it, so I've chosen to remain ignorant.
[07:24] <RAOF> :P
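As context for the parse failure above: proc(5) documents that mountinfo's optional fields are variable-length and end at a lone "-" separator, so robust parsers split there rather than counting fields. A minimal shell sketch (the sample line is made up, modeled on RAOF's bcachefs mount):

```shell
# Split a mountinfo line at the " - " separator per proc(5); the tail
# fields are fstype, mount source, and super options.
line='36 25 0:32 / /mnt rw,relatime shared:1 - bcachefs /dev/nvme0n1:/dev/sda3 rw'
tail=${line#* - }            # everything after the separator
set -- $tail                 # word-split the tail fields
echo "fstype=$1 source=$2"   # the ":" in the source is just part of the name
# -> fstype=bcachefs source=/dev/nvme0n1:/dev/sda3
```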
[08:36] <RAOF> infinity: Harumph. I think the change must have been made in the core snap, because installing old snapd versions doesn't change the behaviour at all.
[08:41] <juliank> I'm annoyed by submittodebian using --include rather than --attach for the debdiff, IMO this just makes things harder to apply and the email very long :/
[08:48] <juliank> Oh I can fix that
[08:58] <doko> tjaalton: you want to subscribe ubuntu-mir for 1761095 ...
[09:05] <xnox> root     15001  0.0  0.0  15868  3328 pts/0    S+   09:56   0:00  |   |               \_ /bin/sh /var/lib/dpkg/info/lxd.postinst configure 3.0.0-0ubuntu1
[09:05] <xnox> root     15259  0.0  0.0  67860  5748 pts/0    S+   09:57   0:00  |   |                   \_ /bin/systemctl restart lxd-containers.service
[09:05] <xnox> root     15263  0.0  0.0  61796  3060 pts/0    S+   09:57   0:00  |   |                       \_ /bin/systemd-tty-ask-password-agent --watch
[09:05] <xnox> this is odd...
[09:05] <xnox> stgraber, is it normal that upgrading lxd takes a long time, and blocks?
[09:06]  * xnox wonders if it is my own fault for suspending the desktop whilst upgrades are running
[09:07] <xnox> killed systemctl restart, and the upgrade continued.....
[09:24] <doko> rbalint: rax-nova-agent promoted. it needs seeding somewhere, or a reference in a package
[09:46] <xnox> doko, rbalint - probably into the platform cloud supported seed, at the very least.
[09:47] <xnox> done
[09:48] <doko> juliank: missing test dep? https://launchpad.net/ubuntu/+source/socat/1.7.3.2-2ubuntu1/+build/14528471
[09:49] <juliank> doko: apparently :(
[09:49] <juliank> I wish I had working sbuilds
[09:50] <juliank> (no I don't want to use chroots, I want to use lxd containers)
[09:51] <doko> we are definitely missing an up-to-date description how to run autopkg tests locally
[09:52] <juliank> Oh, autopkgtest are fine, but these are build-time tests
[09:54] <Laney> doko: http://packaging.ubuntu.com/html/auto-pkg-test.html#executing-the-test definitely?
[09:54]  * juliank runs build in lxc container with net-tools b-d added
[09:55] <doko> Laney: qemu, not lxd?
[09:55] <juliank> qemu is definitely a good choice
[09:55] <juliank> too many problems with lxd
[09:56] <juliank> some tests need machine-isolation, others just don't work in lxd
[09:56] <juliank> and only armhf uses lxd IIRC
[09:57] <Laney> Right, but sometimes failures are lxd failures rather than armhf failures and for those it's good to be able to run lxd tests locally
[09:59] <juliank> Laney: yes, but still qemu is the only documented way in that page
[10:01] <Laney> The initial complaint was that we were "definitely missing an up-to-date description how to run autopkg tests locally", not that the page we have doesn't precisely specify how tests are run on autopkgtest.u.c. We are not "definitely missing an up-to-date description how to run autopkg tests locally".
[10:02] <juliank> that's true
[10:03] <Laney> But maybe it would be a good idea if it can be done simply --- https://code.launchpad.net/~ubuntu-packaging-guide-team/ubuntu-packaging-guide/trunk is pushable to by all developers if somebody wanted to add that.
[10:03] <juliank> now I wonder whether we want autopkgtest-virt-multipass ...
[10:04] <Laney> If it were me I'd use the SSH runner for new virt types, and write setup scripts for those
[10:04] <Laney> instead of a new autopkgtest-virt-foo
[10:04] <Unit193> I found a nice wikipage or somesuch detailing concisely how to use autopkgtests, but I have lost it.
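For anyone following along, running a package's tests locally with either runner discussed above looks roughly like this; the package name is a placeholder, and exact image names may vary by autopkgtest version:

```shell
# qemu runner: build a test VM once, then run the tests in it
autopkgtest-buildvm-ubuntu-cloud -r bionic    # writes autopkgtest-bionic-amd64.img
autopkgtest mypkg_1.0-1.dsc -- qemu autopkgtest-bionic-amd64.img

# lxd runner: build a container image, then run against it
autopkgtest-build-lxd ubuntu-daily:bionic/amd64
autopkgtest mypkg_1.0-1.dsc -- lxd autopkgtest/ubuntu/bionic/amd64
```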
[10:26] <tjaalton> doko: oh indeed
[10:55] <juliank> doko: ubuntu2 should fix that
[10:55] <juliank> (the tests pass in PPA now)
[12:04] <doko> jbicha_: what exactly keeps gnome2 in main? openjdk removed its dependencies
[12:12] <jbicha> doko: you mean gtk2?
[12:18] <darkxst_> bdmurray, did you ever see my tasksel patch bug on 1751546?
[12:31] <doko> jbicha: yes
[12:32] <jbicha> reverse-depends -c main src:gtk+2.0
[12:33] <jbicha> nvidia-settings is an alternate dependency so not important (and I tried arguing for its removal…) LP: #1736617
[12:33] <jbicha> I think it will be safe to drop the thunderbird dependency on gtk2 with thunderbird 60 (52 still supports non-Flash NPAPI plugins)
[12:34] <jbicha> that leaves gparted and a few input methods: those are only used on the live ISO (unless you use one of those languages)
[12:36] <jbicha> the new Community theme recently got a dependency on gtk2-engines-pixbuf for gtk2 support, but at least it doesn't directly depend on libgtk2.0-0
[12:37] <jbicha> so I expect Ubuntu 18.04.1 to not include libgtk2.0-0 in the default install (assumes Thunderbird is updated by then)
[12:37] <didrocks> (yes, I checked that before promoting)
[12:50] <doko> jbicha: what about gparted?
[12:53] <rbasak> xnox: when working on mongodb 3.6, did you ever get a hang on tests?
[12:53] <rbasak> I'm seeing "./dbtest --list" hang.
[12:53] <jbicha> um, you want to switch to gnome-disks? gparted doesn't support Wayland so that's actually a technical justification…
[12:53] <rbasak> Haven't dug into it deeper yet.
[12:53] <jbicha> (obviously, not for 18.04 but maybe for 18.10)
[14:24] <xnox> juliank, can you join #ubuntu-kernel?
[14:24] <xnox> please
[14:28] <xnox> rbasak, hmmm...  I do not see my reply in scrollback. I had hanging tests and failures; I did not debug all of them. I did manage to get the stripped-down build to pass on bionic with patches, but not a full one.
[14:29] <xnox> rbasak, make sure you do not run out of disk space; and that test binaries are stripped; and that you have networking available; these are the top culprits to check for.
[14:34]  * juliank wants more people to use the daily apt builds in the PPA https://launchpad.net/~deity/+archive/ubuntu/sid - testing stuff is nice :D (devel only, really; though it also builds current stable...)
[14:36] <jbicha> juliank: post to Planets, post to https://ubuntuforums.org/forumdisplay.php?f=427 , post to https://community.ubuntu.com/
[14:37] <juliank> well, preferably developers, not the average user :)
[14:37] <jbicha> developers might be more risk-averse than the ubuntu+1 testers ;)
[14:38] <juliank> I should just have a cron job that copy-packages the PPA package to devel
[14:38] <jbicha> for instance, a lot of people on that forum run bionic-proposed even though they know we don't recommend that…
[14:38] <juliank> yeah, but then I get the problems from that reported as my problems
[14:39] <jbicha> yeah, I try not to maintain PPAs these days :)
[14:39] <juliank> anyhow, I use that repo, and I don't perform any checks when doing releases - a release is basically just a random snapshot
[14:40] <juliank> (everything on master passes CI essentially, and that's the requirement for a release too, apart from feature completeness)
[14:40] <juliank> though, if we do break ABI/API soon, things might be different, hmm
[14:42] <juliank> (as in, cache files might change in incompatible ways in snapshots but have same version in them)
[14:58] <rbasak> xnox: OK, thanks
[20:50] <tjaalton> slangasek: pam is still missing that NMU from debian to add support for pam-auth-update --enable, do you want to merge that or can I add that (and just that)?
[20:51] <slangasek> tjaalton: feel free to add it.  I was somewhat blocked on making further changes to pam waiting for server team to do something with https://code.launchpad.net/~vorlon/ubuntu/+source/pam/+git/pam/+merge/341556
[20:51] <slangasek> nacc: ^^ ?
[20:52] <slangasek> oh, and that merge now shows a bunch of conflicts, how did that happen
[20:53] <tjaalton> ah right, I remember this now
[20:54] <slangasek> nacc: a merge being invalidated because I now have conflicts, when the thing I'm proposing for merge is the actual content of the archive, gives me flashbacks to the bzr udd
[21:49] <nacc> slangasek: looking, one moment
[21:50] <nacc> slangasek: debian moved ahead of your base
[21:50] <nacc> slangasek: so it is a merge conflict in Git terms
[21:50] <nacc> slangasek: since the target of your LP merge is a branch and not a tag (which I believe isn't actually supported, though it's technically possible)
[21:51] <nacc> slangasek: 1.1.8-3.7 specifically, is in debian/sid and not an ancestor of your branch
[21:51] <nacc> slangasek: so a rebase and force push should fix that
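The rebase-and-force-push nacc suggests would look something like this; the remote and branch names are hypothetical:

```shell
git fetch origin                                  # pick up the new debian/sid tip
git rebase origin/debian/sid my-pam-merge         # replay the MP commits on top of it
git push --force-with-lease origin my-pam-merge   # replace the stale MP branch safely
```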
[21:51] <slangasek> nacc: indeed; but process-wise, that really seems like it shouldn't invalidate the mp
[21:51] <nacc> slangasek: it's because of how LP figures out your target
[21:51] <nacc> slangasek: I *believe* it doesn't save the commit hash of when it ran
[21:51] <nacc> but I'm not sure
[21:51] <nacc> cjwatson: --^ perhaps you can say?
[21:52] <nacc> slangasek: we have worked around this on our team by doing the MP review manually
[21:52] <nacc> (ignoring the LP UX) to merge to an older Debian version than the latest
[21:53] <slangasek> nacc: ok, so in principle the fact that LP has moved on should also not make more work for me, it's just a question still of someone on the server team reviewing and merging that?
[21:53] <nacc> slangasek: right, and it will still show up in your grep-merges after that
[21:53] <nacc> since debian would be ahead of our base in ubuntu
[21:53] <nacc> but it's technically possible to review as-is, and I'll try and do that now
[21:55] <nacc> slangasek: this is a gap because we aren't actually doing a Git merge in our process
[21:55] <slangasek> ah :)
[21:56] <nacc> we are taking your proposed source and tagging it, then the importer integrates it as 'rich history'
[21:56] <nacc> but there's no way to 'review' a branch in a nice way other than MPs currently
[21:56] <nacc> well, there is, but it's mailing lists or manually
[21:58] <nacc> slangasek: oh wait, this was already uploaded?
[22:00] <nacc> slangasek: so, because it was already imported (and we've already reimported pam), we won't see any upload tag I create now
[22:00] <nacc> I *can* create one, so that on a future reimport, if it occurs, it will get integrated (I see the trees match, so it will)
[22:01] <Unit193> juliank: https://sourceforge.net/p/squashfs/code/ci/e38956b92f738518c29734399629e7cdb33072d3/
[22:06] <nacc> slangasek: in any case, that means that MP doesn't really mean much (it's effectively done)
[22:15] <slangasek> nacc: ok, so the mp should be rejected?
[22:15] <nacc> slangasek: I can either push an upload tag so it's at least preserved in the future (but not integrated until someone does a reimport, if they do) or I can reject it
[22:16] <slangasek> nacc: it was a fair amount of work to create that branch, I'd rather the rich history not be lost
[22:16] <nacc> slangasek: ack, one moment
[22:16] <nacc> slangasek: I'm well aware this is a bottleneck
[22:17] <nacc> slangasek: we plan on fixing this long-term a la dgit by writing the hash into the source upload
[22:17] <nacc> slangasek: and then teaching our importer to look for that field, and querying LP for the corresponding Git commit
[22:18]  * slangasek nods
[22:19] <nacc> I'm not 100% on what the 'right' state is
[22:19] <nacc> I'll mark it as merged, even though it's not actually :)
[22:19] <nacc> but it's available in the importer repo now (git fetch pkg should bring down an upload tag)
[22:42] <cjwatson> nacc: that doesn't sound like an accurate description of what's going on; it's more that the point of a merge is, well, to be merged, so there isn't much point in saying that there wasn't a merge conflict when the target was what it was originally
[22:42] <cjwatson> put another way, there's *never* a merge conflict against the merge base, by definition
[22:43] <nacc> cjwatson: you're right
[22:43] <cjwatson> (I haven't looked at this MP specifically, just general comments)
[22:43] <nacc> cjwatson: but the issue we ran into is you can't specify a merge to a tag
[22:43] <nacc> or an arbitrary ref, really
[22:44] <cjwatson> nacc: sure, because you couldn't land such a thing
[22:44] <nacc> cjwatson: right :)
[22:44] <nacc> well, not without knowing how we do land them :)
[22:44] <nacc> cjwatson: we really don't want users to do branch manipulation because it's so easy to get wrong
[22:44] <cjwatson> merges to tags are nonsensical by definition
[22:44] <nacc> but we need to review things that will end up resulting in branch manipulation, if that makes sense
[22:45] <cjwatson> the only reason it even came up was that you were trying to use it as a diff generation mechanism
[22:45] <nacc> because we're not really doing Git merges in the first place :)
[22:45] <cjwatson> which you might be better off doing by way of constructing cgit URLs on git.launchpad.net
[22:45] <nacc> yeah
[22:45] <tjaalton> nacc: so how would I land the feature to pam-auth-update from -3.7? just upload or should it go via a merge proposal?
[22:45] <nacc> tjaalton: that's completely up to you :)
[22:46] <nacc> tjaalton: there is no requirement for anyone to use the Git repositories
[22:46] <nacc> tjaalton: however, you can provide rich history there, which can be helpful :)
[22:47] <tjaalton> nacc: ah ok. well there's not that much history, other than on the bug..
[22:47] <nacc> tjaalton: right
[22:47] <slangasek> well, if the result of the next pam upload is that the rich history I created by preparing that mp is thrown away, there will be some table-flipping here
[22:47] <nacc> so the easy answer then, is use the upload tag as a starting point
[22:47] <nacc> tjaalton: --^
[22:47] <tjaalton> alright
[22:47] <nacc> and do a change on top
[22:48] <tjaalton> sure
[22:48] <nacc> propose it as an MP to ubuntu/devel, and I'll tag it
[22:48] <nacc> tjaalton: just don't dput until I tag it :)
[22:49] <nacc> slangasek: then your rich history will be integrated at that point :)
[22:49] <slangasek> that would be lovely
[22:49] <tjaalton> ok, I'll look into it tomorrow
[22:49] <slangasek> nacc: now, how can this be self-serve instead of blocking on a team of reviewers who are a small subset of uploaders
[22:50] <nacc> slangasek: that's coming up next (once main is imported)
[22:50] <slangasek> ok
[22:50] <nacc> slangasek: we want people who can upload to a source package to be able to mark the MPs as approved
[22:50] <nacc> actually, scratch that, we do want that, but that doesn't answer your question
[22:51] <nacc> we have two approaches, and it's mostly about time/getting it right
[22:51] <nacc> short-term: uploaders of a srcpkg can mark an MP as approved, the importer can look for approved MPs when it processes a new upload and integrate them
[22:51] <nacc> (basically, as if they were upload tagged, without using upload tags)
[22:51] <nacc> long-term: writing the hash into the source pkg before dput
[22:53] <nacc> slangasek: in both cases, the uploader rights would determine the importer getting 'here is where rich history is' data
[22:53]  * slangasek nods
[23:02] <stanford_AI> do you know how to stream webrtc from /dev/video0 ?
[23:02] <nacc> stanford_AI: please stop crossposting
[23:02] <nacc> stanford_AI: also this is 100% the wrong channel.
[23:02] <stanford_AI> why's that?
[23:03] <nacc> stanford_AI: this channel is for development of Ubuntu (see /topic)
[23:03] <stanford_AI> where do i go for app devel?
[23:03] <nacc> !alis | stanford_AI
[23:04]  * stanford_AI msg Alis help list
[23:08] <Unit193> Topic also mentions a channel.
[23:13] <TJ-> Anyone familiar with how a Secure Boot via shimx64.efi is expected to handle /EFI/ubuntu/grub.cfg? Got a user where, with S.B. enabled, it seems that it might NOT be accessed, causing grub to drop to the grub> shell. With S.B. off it works as expected.
[23:17] <roaksoax> TJ-: using grubx64 instead of the shim
[23:17] <slangasek> TJ-: are you sure that /EFI/ubuntu/grub.cfg is the issue, and not whatever /EFI/ubuntu/grub.cfg chains to?
[23:18] <TJ-> roaksoax: right, the FW is cut-down/lock-down, so the boot menu has 2 options (shimx64.efi and grubx64.efi) but in S.B. mode only the shimx64.efi entry is available
[23:18] <slangasek> TJ-: the difference between booting with secureboot on, and booting with secureboot off, is whether grub is allowed to load additional filesystem modules from disk
[23:19] <TJ-> slangasek: yes, it contains the "search.fs_uuid " which matches the root-fs, but in S.B. mode the boot drops to "grub>" shell, without S.B. it boots fine
[23:19] <TJ-> slangasek: right, I'm wondering if this FW is 'customised' and is blocking any non-signed files, not just modules
[23:19] <slangasek> not possible
[23:19] <slangasek> what is the root fs?
[23:20] <TJ-> slangasek: FS type? ext4
[23:20] <slangasek> interesting
[23:21] <TJ-> lsblk: https://paste.ubuntu.com/p/gQwbjvjdRb/   /EFI/ubuntu/grub.cfg:  https://paste.ubuntu.com/p/chwS3YqHzY/  ls /boot/efi/  https://paste.ubuntu.com/p/K9SDxGy76N/  efibootmgr:  https://paste.ubuntu.com/p/dC93rQyxFp/
[23:24] <slangasek> TJ-: how about the contents of /boot/grub/grub.cfg?
[23:25] <TJ-> slangasek: I'll get them, I'm persuading the user to create a bug report now so we can capture the info
[23:25] <roaksoax> fwiw, this looks like https://bugs.launchpad.net/maas/+bug/1711203
[23:27] <TJ-> roaksoax: thanks, no that's not it, we never get that far. the grub menu is never reached :)
[23:28] <slangasek> TJ-: ok. bug reports against the shim-signed package in spanish are fine :P
[23:29] <TJ-> slangasek: :) the only report that comes close, but has no follow-up or decent details, is Bug #1110080
[23:30] <TJ-> "But after this grub2 run, the system enter into grub shell, it seems it doesn't load grub.cfg file."
[23:40] <TJ-> User's bug report with attached files Bug #1761336