[00:03] Kind of loads better/faster in a browser in that case, for a little while it was just faster to use w3m for me. :3
[00:27] xnox: it's that it no longer includes autopkgtest lines with no regressions i think
[00:29] or always had regressions, right?
[00:29] or was that a different page..
[00:41] sarnold: it no longer includes thousands of lines of perl-* Passed Passed Passed Passed Passed Passed (x100000000 times)
[00:41] :D
[00:41] xnox: Passed!
[00:41] Thank goodness the passed is in the past.
[00:42] xnox: is that because you broke all the perl things? or removed them? :)
=== pieq_ is now known as pieq
[08:05] xnox: mwhudson: thanks (mainly on behalf of elbrus - go tell him in #debci) :>
[08:30] niub: libvirt-bin was replaced in like xenial or so
[08:30] niub: the one replacing it will be libvirt-daemon-system
[08:31] without the -system suffix you just get the binaries but not the service/system integration
[08:32] cpaelzer: so I should install libvirt-daemon libvirt-daemon-system libvirt-clients, am I right?
[08:51] niub: libvirt-daemon will be a dependency of libvirt-daemon-system so no need to install the former directly
[08:51] niub: and libvirt-clients you only need if you want the front end on the same system (like virsh and such)
[08:52] if you plan to manage it only from remote you don't need -clients
[09:27] wgrant: did the riscv64 build configs change after that one was built initially? https://launchpadlibrarian.net/488638702/buildlog_ubuntu-groovy-riscv64.guile-2.2_2.2.7+1-5ubuntu1_BUILDING.txt.gz
[09:32] ddstreet: rbalint: the systemd autopkgtest success rate has dropped a lot in groovy it seems
[09:32] we all know they are complex and a bit flaky, but recently it seems worse - is there any work in progress to improve that again?
[10:10] cpaelzer: ack, thx
[10:29] arr, I have continuous issues with the autopkgtests of open-iscsi, casper and ubuntu-image
[10:30] They all work well when run in a local autopkgtest vm
[10:30] due to that it is hard to debug, but OTOH wrong to mask them as badtest
[10:30] I was wondering if there is anything known going on with the infrastructure that might cause this?
[10:30] if so let me know
[10:31] there are no open bugs against any of these packages explaining it
[10:31] rafaeldtinoco: ^^ for open-iscsi and in case this came up on +1 duty as a topic
[10:31] * cpaelzer dives into test logs in case there is anything that helps ...
[10:49] hmm, one could actually be a memory shortage
[10:49] I keep forgetting where we flag tests as "huge"
[10:50] but it seems these days all tests are huge, can that be true
[10:50] only the huge queue has entries atm on http://autopkgtest.ubuntu.com/running
[10:50] and they all use --flavour autopkgtest
[10:51] ok I was able to prove for one of the tests that memory shortage is the trigger (512M fails, 2G works)
[10:51] cpaelzer: hm what's casper doing
[10:51] Laney: whenever it comes to autopkgtest infra you seem to know the answer - I beg your pardon for pinging you, but do you happen to know? ^^
[10:52] mwhudson: I was debugging down to the bone open-iscsi so far which surely is lack of memory
[10:52] mwhudson: I can look at casper next and let you know in a bit
[10:53] mwhudson: last 7 on https://autopkgtest.ubuntu.com/packages/c/casper/groovy/amd64 all failed
[10:55] cpaelzer: try running with -- qemu ... --ram-size 1536, that's what the instances get
[10:55] 'huge' doesn't mean that it runs on a bigger instance
[10:55] ok will do a check at this size ..
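
(For context, a local run along the lines Laney suggests above would look roughly like this; the package name and image path are illustrative assumptions, not taken from the discussion.)

    # run the test in a local qemu with the same memory limit the infra instances get
    autopkgtest open-iscsi -- qemu --ram-size 1536 ~/images/autopkgtest-groovy-amd64.img
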
[10:55] I thought it meant that in the past
[10:55] what does it still mean - longer timeouts?
[10:56] nope
[10:56] it means the package triggered more than 20 tests for that arch
[10:56] oh, TIL - thanks!
[11:00] cpaelzer: hm is it always failing in the same place? are you sure it isn't actually a qemu regression?
[11:00] anyway i'm going to bed, ttfn :)
[11:00] not a fail-to-work regression, but I was starting to wonder if memory consumption might have increased
[11:01] sleep well mwhudson
[11:01] because it seems all the issues I have left are tests whose guests crash for lack of memory, which is suspicious at least
[11:05] The tests all seem to run with TCG which isn't tested much otherwise
[11:05] I think I'm onto something - thanks for listening mwhudson and Laney
[11:06] yes system emulation
[11:06] apparently nested virt will be a thing soon and we can stop testing system emulation soon? :)
[11:31] waveform, sil2100, https://github.com/snapcore/pi-gadget/issues/48 one for you ...
[11:31] (trivial to fix)
[11:31] ogra_, ta - I'll take a look
[11:32] thx !
[11:32] ogra_, hmmm - quick question, should I add all the optional extra UARTs the pi4 introduced while I'm at it?
[11:32] yeah, definitely
[11:32] yeah, figured that might be worth it while adding these bits
[11:32] okay
=== nikow2 is now known as nikow
[12:23] @pilot in
=== udevbot changed the topic of #ubuntu-devel to: Archive: Open | 20.04 Released! | Devel of Ubuntu (not support) | Build failures: http://qa.ubuntuwire.com/ftbfs/ | #ubuntu for support and discussion of Trusty-Focal | If you can't send messages here, authenticate to NickServ first | Patch Pilots: rafaeldtinoco
[13:33] kanashiro: around? Your updated vagrant SRU looks fine, though do you want to use -v2.2.6+dfsg-2ubuntu1 for the source build so that the changes file incorporates both changes?
[13:33] If you're not around I can just accept - it's not important in this case
[13:34] (since the same bug number is in the previous upload so Launchpad-Bugs-Fixed is not affected)
[13:35] rbasak: if this is something you really want I can do it later today
[13:36] kanashiro: it's not necessary. I just wanted to give you the option :)
[13:36] (sometimes it is for this sort of case but not this time)
[13:38] rbasak: so I'd say you can go ahead with the current version, a context switch for this minor thing is not worth it IMO :)
[13:38] Sure, OK
[14:19] sil2100: hi, about bileto,
[14:20] sil2100: I'm not seeing the kopanocore tests themselves in https://bileto.ubuntu.com/excuses/4142/groovy.html
[14:20] and it does have dep8 tests
[14:22] same for python-certbot-apache: https://bileto.ubuntu.com/excuses/4144/groovy.html
[14:22] it says tests are green, but I see no tests run
[14:43] ahasenack: we don't show all-green lines any more
[14:52] Laney: oh
[14:52] Laney: is there a way to see the log anyway, or is it discarded?
[14:54] ahasenack: yeah, you can go to https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac/autopkgtest-groovy-ci-train-ppa-service-4144/?format=plain
[14:55] then substitute the last part for the entries there
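
(A rough sketch of what Laney describes; <entry-from-listing> is a placeholder for whatever the listing actually returns.)

    # list the logs stored for this bileto ticket, then fetch one of the listed entries
    BASE='https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac/autopkgtest-groovy-ci-train-ppa-service-4144'
    curl -s "$BASE/?format=plain"
    curl -s "$BASE/<entry-from-listing>"
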
[15:38] If I use a snap application that uses xdg-portals, snapd starts xdg-portals-gtk and performs my request. When this occurs, gnome-shell knows to "lock" the dialog that appears to the window that it's coming from and dims the application. How does the shell know to do this? What code handles popping up the dialogs modally?
[15:38] This seems interesting to me because the application is in confinement and the dialog opening modally is outside
[15:40] I ask because we need to replicate this behavior in Lomiri and having some direction on how someone else did it would be nice. But if I'm in the wrong place please point at me and laugh.
[15:44] UnivrslSuprBox: If you don't get an answer here, maybe ask in like #gnome on GimpNet https://wiki.gnome.org/Community/GettingInTouch/IRC#Community_.26_Core_Experience
[15:44] UnivrslSuprBox: or try #ubuntu-desktop
[15:44] Though they should be reading here too ...
[15:54] Thanks! I'll idle some time longer.
[17:34] cpaelzer, regarding the systemd autopkgtests there are a few valid regressions being fixed in the latest uploads and infra plays a role i think
[17:34] cpaelzer, do you observe flaky tests running them locally in qemu?
[18:38] LocutusOfBorg, mind triggering the following node-* tests? https://paste.ubuntu.com/p/FxPzTd5wtQ/
[18:42] oSoMoN: doing...
[18:42] cheers
[18:43] oSoMoN: LocutusOfBorg: .
[19:36] hi, I have a config file I want to move from package A to a new package B, so that's case #7 of https://wiki.debian.org/PackageTransition
[19:36] my problem is how to preserve changes the user might have made to that config file
[19:36] I initially just added A.maintscript with rm_conffile, for the case if the user just upgrades A, I want the config file gone (if unchanged). That works
[19:37] but if I then install B, the default config file is installed, and it won't have the changes the user made when he had A version 1 installed
[19:38] and if A had changes, I end up with the A config file backed up, but the new config file from B as the default
[21:09] cpaelzer: just fyi, sergiodj and I are working on qemu regressions, especially because of nettle (depending on its migration)
[21:10] and nettle will unblock a lot of stuff from migration
[21:10] currently we're checking qemu->casper
[21:10] and then qemu->open-iscsi and qemu->systemd
[21:49] ls
[21:59] casper works fine locally :\, I'm investigating lintian issues in all arches (arm64 / ppc64el - same test, s390x - other test) .. etc
[21:59] more fun tomorrow
[21:59] @pilot out
=== udevbot changed the topic of #ubuntu-devel to: Archive: Open | 20.04 Released! | Devel of Ubuntu (not support) | Build failures: http://qa.ubuntuwire.com/ftbfs/ | #ubuntu for support and discussion of Trusty-Focal | If you can't send messages here, authenticate to NickServ first | Patch Pilots:
[22:44] xnox: wanna talk about imsm a bit
[22:44] ?
[22:44] Yes.
[22:44] xnox: so you do ... stuff in the bios
[22:45] I have next call at half-past the hour.
[22:45] which results in a "mdadm container" in the OS
[22:45] xnox: voice or here?
[22:45] i don't mind
[22:45] mwhudson: here is fine.
[22:45] mwhudson: if you want voice, i will need to go outside.
[22:45] ok
[22:45] here is fine for me too
[22:46] you can then slice this container up into actual block devices
[22:46] you can choose a different raid level for each slice
[22:46] but each slice spans all disks in the container
[22:46] accurate so far? ^
[22:46] more or less.
[22:47] can you also create these "slices" (what's the proper word?) in the bios? i wasn't sure about that
[22:47] container => mark and slurp up all the disks to make them imsm. Think of it as doing "pvcreate" on them, to say they will be used for Intel Raid.
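
(To make the pvcreate analogy concrete, a minimal sketch of the equivalent mdadm calls from the Linux side; device names are illustrative assumptions, and in practice this is usually set up from the BIOS as described next.)

    # "pvcreate"-like step: mark whole disks as an imsm container
    sudo mdadm --create /dev/md/imsm0 --metadata=imsm --raid-devices=2 /dev/sda /dev/sdb
    # "lvcreate"-like step: create a RAID volume (slice) inside the container
    sudo mdadm --create /dev/md/vol0 --level=1 --raid-devices=2 /dev/md/imsm0
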
[22:48] there is md127 that correlates to the container, and it's mostly there to query the controller so that firmware and userspace agree on which disks are under intel raid management.
[22:48] so the slices..... those just look like normal RAID devices, that yeah, can be of any size and parity as needed.
[22:48] kind of like lvcreate
[22:49] but they appear on the host as "md126, md125, md124" devices.
[22:49] with their "physical" volume claiming to be like the container (md127) or like all the disks.
[22:49] but yeah the parent device behind them is kind of fake.
[22:50] terminology wise, one creates a RAID device
[22:50] inside a RAID container
[22:50] that happens to consume hard-drives that must be connected to the right ports on the motherboard that have the raid stuff.
[22:51] mwhudson: https://www.intel.co.uk/content/www/uk/en/support/articles/000005789/technologies.html
[22:51] mwhudson: here are the bios docs, and a picture of how it normally looks in the bios, if one boots with "ctrl+i"
[22:51] so yeah they say "Create RAID Volume"
[22:51] as if one is creating a normal mdadm raid volume.
[22:52] xnox: can you partition these raid devices? can you put the ESP on one?
[22:53] mwhudson: yes, yes.
[22:54] mwhudson: it is more normal to create raid volumes, rather than partitioning them.
[22:54] xnox: also is there any difference from our point of view between expensive server VROC and consumer laptop RST?
[22:54] mwhudson: i.e. md126 esp, md125 rootfs
[22:54] mwhudson: rather than md126p1 esp, md126p2 rootfs
[22:54] xnox: how do you mark the ESP if it's a raid device?
[22:54] mwhudson: create md126, format it with fat32 and put an ESP label on it.
[22:55] mwhudson: create a uefi boot entry by fsuuid as usual.
[22:55] usually the label is a partition type code in the GPT
[22:55] but ok
[22:55] mwhudson: firmware has a UEFI intel matrix raid driver, meaning that UEFI firmware has drive(....) for the container, and the raid volumes created.
[22:55] hence it can find the ESP on them, as if they were real hardware raid disks.
[22:56] mwhudson: sure, just do whatever one would do to it normally. Pretend that md126 is your typical sda1
[22:57] xnox: i don't know how that works given there is no partition table in this case but well, a problem for another day
[22:57] mwhudson: i didn't get on that system they shared with us, i think i can live convert it to what it should look like from an initrd.
[22:57] mwhudson: you can create a partition table too, if that's better. I think that's what we did in Rackspace OnMetal cloud.
[22:57] mwhudson: i.e. create RAID1 all disks, all the things.
[22:58] which gives md126, create GPT on it, create partitions on it, etc.
[22:58] that's what d-i did too
[22:58] and everyone at Intel was happy with that, because it just worked.
[22:59] mwhudson: it feels to me that in essence, one has to call `sudo mdadm --detail --detail-platform` to see if that machine has raid capabilities
[22:59] mwhudson: when the user creates raid, it should assemble the container, and use mdadm commands with metadata set to imsm format.
[22:59] xnox: so d-i basically creates a single device in the container and lets partman-auto have at that?
[23:00] mwhudson: yes.
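
(A rough sketch of the "create GPT on the RAID volume" variant xnox describes, which is what d-i ended up doing; device names and sizes are illustrative assumptions.)

    # put a GPT with an ESP and a root partition directly on the RAID volume
    sudo sgdisk --new=1:0:+512M --typecode=1:ef00 --new=2:0:0 /dev/md126
    sudo mkfs.vfat -F 32 /dev/md126p1
    sudo mkfs.ext4 /dev/md126p2
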
[23:00] so subiquity should (a) hide any disks that are part of a container
[23:00] mwhudson: i think for raid0 one can even fake the imsm controller in a vm.
[23:01] (b) do something like what d-i does when you choose the container in the guided view
[23:01] xnox: ah i was wondering if there was qemu support for all this
[23:01] mwhudson: often when people buy servers from intel preconfigured with raid1 all disks.
[23:01] and if one has it auto-assembled, guided should offer to "autopartition md127 Intel Raid"
[23:02] mwhudson: or people can choose to destroy the md127 container, and then all disks become eligible for manual partitioning.
[23:02] i guess the container needs to be a new kind of thing in the UI
[23:02] xnox: oh, i assumed that would be something you'd have to do in the bios
[23:02] mwhudson: it's either or.
[23:02] can you create a container from the linux side too?
[23:02] mwhudson: yes.
[23:02] oh
[23:03] what is the benefit of this vs bog standard md raid again? :)
[23:03] mwhudson: the on-disk format is very simple. The VROC key / controller just has CPU assisted quick parity calculations and recovery/data moving, such that the CPU can do that without actually pushing data up to the kernel and back down to disk.
[23:03] i guess performance and the firmware support for reading from the raid devices
[23:04] mwhudson: and to use the CPU assisted calculations, one has to ensure that the disk is in the format that the cpu knows to read directly.
[23:04] oh being able to do recovery behind the scenes sounds pretty nice
[23:04] mwhudson: benefit is that they wrote a proprietary/BSD UEFI driver that Microsoft signed.
[23:04] mwhudson: actually recovery still needs to be started by something, and monitored.
[23:05] mwhudson: what you will find when the container is assembled is that mdmon@md127.service is started, which needs to stay alive until after pivot to finalrd.
[23:05] mwhudson: or firmware has an equivalent.
[23:05] xnox: oh ok
[23:05] mwhudson: it's a small agent basically to get cpu notifications, and push down messages. "Yes, borken, start repair"
[23:05] "is it done yet?"
[23:05] "is it done yet?"
[23:05] "is it done yet?"
[23:06] "ok, pretending it is healthy now"
[23:06] fake things in /proc/mdstat
[23:06] mwhudson: let me try to get onto that machine
[23:07] xnox: so on an imsm-capable system (how do you detect that) subiquity should support "mdraid containers" which can be made out of disks (not partitions) and can have raid devices created within them
[23:07] i guess "disks connected to the right ports"
[23:08] can you create multiple containers out of disjoint subsets of candidate disks? i guess so
[23:13] you can have many controllers too
[23:13] typically with different disks connected to each one
[23:13] sudo mdadm --detail --detail-platform
[23:13] is what you call to check if there are any controllers that support intel matrix raid.
[23:14] that call can be done on any machine
[23:18] iDRAC thing is scary
[23:44] ah so disks are connected to a controller
[23:44] xnox: sorry attention wandered for a bit there
[23:45] so if you're creating a container, you then have to choose which controller (if >1) and then can choose disks attached to that controller
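
(Pulling the detection story together, a hedged sketch of the checks an installer could run, using the calls mentioned above; which of these subiquity actually ends up using is not settled in this discussion.)

    # any controllers with Intel Matrix Raid (imsm) support on this machine?
    sudo mdadm --detail --detail-platform
    # any existing imsm containers/volumes on the attached disks?
    sudo mdadm --examine --scan
    # what is currently assembled (e.g. the md127 container and md126/md125 volumes)
    cat /proc/mdstat
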