/srv/irclogs.ubuntu.com/2020/07/15/#ubuntu-devel.txt

Unit193Kind of loads better/faster in a browser in that case, for a little while it was just faster to use w3m for me. :300:03
mwhudsonxnox: it's that it no longer includes autopkgtest lines with no regressions i think00:27
sarnoldor always had regressions, right?00:29
sarnoldor was that a different page..00:29
xnoxsarnold:  it no longer includes thousands of lines of perl-* Passed Passed Passed Passed Passed Passed (x100000000 times)00:41
Unit193:D00:41
sarnoldxnox: Passed!00:41
Unit193Thank goodness the passed is in the past.00:41
sarnoldxnox: is that because you broke all the perl things? or removed them? :)00:42
=== pieq_ is now known as pieq
Laneyxnox: mwhudson: thanks (mainly on behalf of elbrus - go tell him in #debci) :>08:05
cpaelzerniub: libvirt-bin was replaced in like xenial or so08:30
cpaelzerniub: the one replacing it will be libvirt-daemon-system08:30
cpaelzerwithout the -system suffix you just get the binaries but not the service/system integration08:31
niubcpaelzer: so I should install libvirt-daemon libvirt-daemon-system libvirt-clients, am I right?08:32
cpaelzerniub: libvirt-daemon will be a dependency of libvirt-daemon-system so no need to install the former directly08:51
cpaelzerniub: and libvirt-clients you only need if you want the front end on the same system (like virsh and such)08:51
cpaelzerif you plan to manage it only from remote you don't need -clients08:52
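A minimal sketch of the setup cpaelzer describes, assuming an apt-based Ubuntu host; the remote URI's user@host is a placeholder:

  # service/system integration; pulls in libvirt-daemon as a dependency
  sudo apt install libvirt-daemon-system
  # front-end tools such as virsh, only needed where you drive libvirt from
  sudo apt install libvirt-clients
  # managing from a remote client instead (placeholder user@host):
  virsh -c qemu+ssh://user@host/system list --all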
dokowgrant: did the riscv64 build configs change after that one was built initially? https://launchpadlibrarian.net/488638702/buildlog_ubuntu-groovy-riscv64.guile-2.2_2.2.7+1-5ubuntu1_BUILDING.txt.gz09:27
cpaelzerddstreet: rbalint: the systemd autopkgtest success rate has dropped a lot in groovy it seems09:32
cpaelzerwe all know they are complex and a bit flaky, but recently it seems worse - is there any work in progress to improve that again?09:32
niubcpaelzer: ack, thx10:10
cpaelzerarr, I have continuous issues with the autopkgtests of open-iscsi, casper and ubuntu-image10:29
cpaelzerThey all work well when run in a local autopkgtest vm10:30
cpaelzerdue to that it is hard to debug, but OTOH wrong to mask them as badtest10:30
cpaelzerI was wondering if there is anything known going on with the infrastructure that might cause this?10:30
cpaelzerif so let me know10:30
cpaelzerthere are no open bugs against any of these packages explaining it10:31
cpaelzerrafaeldtinoco: ^^ for open-iscsi and in case this came up on +1 duty as a topic10:31
* cpaelzer dives into test logs in case there is anything that helps ...10:31
cpaelzerhmm, one could actually be a memory shortage10:49
cpaelzerI keep forgetting where we flag tests as "huge"10:49
cpaelzerbut it seems these days all tests are huge, can that be true10:50
cpaelzeronly the huge queue has entries atm on http://autopkgtest.ubuntu.com/running10:50
cpaelzerand they all use --flavour autopkgtest10:50
cpaelzerok I was able to prove for one of the tests that memory shortage is the trigger (512M fails, 2G works)10:51
mwhudsoncpaelzer: hm what's casper doing10:51
cpaelzerLaney: whenever it comes to autopkgtest infra you seem to know the answer - I beg your pardon for pinging you, but do you happen to know? ^^10:51
cpaelzermwhudson: I have been debugging open-iscsi down to the bone so far, and that one is surely lack of memory10:52
cpaelzermwhudson: I can look at casper next and let you know in a bit10:52
cpaelzermwhudson: last 7 on https://autopkgtest.ubuntu.com/packages/c/casper/groovy/amd64 all failed10:53
Laneycpaelzer: try running with -- qemu ... --ram-size 1536, that's what the instances get10:55
Laney'huge' doesn't mean that it runs on a bigger instance10:55
cpaelzerok will do a check at this size ..10:55
cpaelzerI thought it meant that in the past10:55
cpaelzerwhat does it still mean - longer timeouts?10:55
Laneynope10:56
Laneyit means the package triggered more than 20 tests for that arch10:56
cpaelzeroh, TIL - thanks!10:56
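A sketch of reproducing the runners' memory limit locally, following Laney's numbers; the package and image paths are examples:

  # build a groovy amd64 test image
  autopkgtest-buildvm-ubuntu-cloud -r groovy -a amd64
  # run with the 1536 MB the infrastructure instances get
  autopkgtest open-iscsi -- qemu --ram-size 1536 ./autopkgtest-groovy-amd64.img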
mwhudsoncpaelzer: hm is it always failing in the same place? are you sure it isn't actually a qemu regression?11:00
mwhudsonanyway i'm going to bed, ttfn :)11:00
cpaelzerit's not a failure to work as such, but I was starting to wonder if memory consumption might have increased11:00
cpaelzersleep well mwhudson11:01
cpaelzerbecause it seems all the issues I have left are tests whose guests crash for lack of memory, which is suspicious at least11:01
cpaelzerThe tests all seem to run with TCG, which isn't tested much otherwise11:05
cpaelzerI think I'm onto something - thanks for listening mwhudson and Laney11:05
mwhudsonyes system emulation11:06
mwhudsonapparently nested virt will be a thing soon and we can stop testing system emulation soon? :)11:06
ogra_waveform, sil2100, https://github.com/snapcore/pi-gadget/issues/48 one for you ...11:31
ogra_(trivial to fix)11:31
waveformogra_, ta - I'll take a look11:31
ogra_thx !11:32
waveformogra_, hmmm - quick question, should I add all the optional extra UARTs the pi4 introduced while I'm at it?11:32
ogra_yeah, definitely11:32
waveformyeah, figured that might be worth it while adding these bits11:32
waveformokay11:32
=== nikow2 is now known as nikow
rafaeldtinoco@pilot in12:23
=== udevbot changed the topic of #ubuntu-devel to: Archive: Open | 20.04 Released! | Devel of Ubuntu (not support) | Build failures: http://qa.ubuntuwire.com/ftbfs/ | #ubuntu for support and discussion of Trusty-Focal | If you can't send messages here, authenticate to NickServ first | Patch Pilots: rafaeldtinoco
rbasakkanashiro: around? Your updated vagrant SRU looks fine, though do you want to use -v2.2.6+dfsg-2ubuntu1 for the source build so that the changes file incorporates both changes?13:33
rbasakIf you're not around I can just accept - it's not important in this case13:33
rbasak(since the same bug number is in the previous upload so Launchpad-Bugs-Fixed is not affected)13:34
kanashirorbasak: if this is something you really want I can do it later today13:35
rbasakkanashiro: it's not necessary. I just wanted to give you the option :)13:36
rbasak(sometimes it is for this sort of case but not this time)13:36
kanashirorbasak: so I'd say you can go ahead with the current version, a context switch for this minor thing is not worth it IMO :)13:38
rbasakSure, OK13:38
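The build rbasak is suggesting, sketched with debuild; -v makes dpkg-genchanges include every changelog entry newer than the named version in the .changes file:

  # source-only build for the SRU, folding in both uploads' changelog entries
  debuild -S -v2.2.6+dfsg-2ubuntu1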
ahasenacksil2100: hi, about bileto,14:19
ahasenacksil2100: I'm not seeing the kopanocore tests themselves in https://bileto.ubuntu.com/excuses/4142/groovy.html14:20
ahasenackand it does have dep8 tests14:20
ahasenacksame for python-certbot-apache: https://bileto.ubuntu.com/excuses/4144/groovy.html14:22
ahasenackit says tests are green, but I see no tests run14:22
Laneyahasenack: we don't show all-green lines any more14:43
ahasenackLaney: oh14:52
ahasenackLaney: is there a way to see the log anyway, or is it discarded?14:52
Laneyahasenack: yeah, you can go to https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac/autopkgtest-groovy-ci-train-ppa-service-4144/?format=plain14:54
Laneythen substitute the last part for the entries there14:55
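A sketch of pulling a log out by hand, per Laney's hint; <entry> is a placeholder for a path taken from the listing:

  BASE=https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac/autopkgtest-groovy-ci-train-ppa-service-4144
  # list the container's entries as plain text
  curl "$BASE/?format=plain"
  # then substitute the last part for one of them
  curl "$BASE/<entry>"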
UnivrslSuprBoxIf I use a snap application that uses xdg-desktop-portal, snapd starts xdg-desktop-portal-gtk and performs my request. When this occurs, gnome-shell knows to "lock" the dialog that appears to the window that it's coming from and dims the application. How does the shell know to do this? What code handles popping up the dialogs modally?15:38
UnivrslSuprBoxThis seems interesting to me because the application is in confinement and the dialog opening modally is outside15:38
UnivrslSuprBoxI ask because we need to replicate this behavior in Lomiri and having some direction to how someone else did it would be nice. But if I'm in the wrong place please point at me and laugh.15:40
juliankUnivrslSuprBox: If you don't get an answer here, maybe ask in like #gnome on GimpNet https://wiki.gnome.org/Community/GettingInTouch/IRC#Community_.26_Core_Experience15:44
juliankUnivrslSuprBox: or try #ubuntu-desktop15:44
juliankThough they should be reading here too ...15:44
UnivrslSuprBoxThanks! I'll idle some time longer.15:54
rbalintcpaelzer, regarding the systemd autopkgtests, there are a few valid regressions being fixed in the latest uploads, and infra plays a role i think17:34
rbalintcpaelzer, do you observe flaky tests running them locally in qemu?17:34
oSoMoNLocutusOfBorg, mind triggering the following node-* tests? https://paste.ubuntu.com/p/FxPzTd5wtQ/18:38
ginggsoSoMoN: doing...18:42
oSoMoNcheers18:42
ginggsoSoMoN: LocutusOfBorg: .18:43
ahasenackhi, I have a config file I want to move from package A to a new package B, so that's case #7 of https://wiki.debian.org/PackageTransition19:36
ahasenackmy problem is how to preserve changes the user might have made to that config file19:36
ahasenackI initially just added A.maintscript with rm_conffile, for the case where the user just upgrades A: I want the config file gone (if unchanged). That works19:36
ahasenackbut if I then install B, the default config file is installed, and it won't have the changes the user made when he had A version 1 installed19:37
ahasenackand if A had changes, I end up with the A config file backed up, but the new config file from B as the default19:38
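One common shape for this kind of conffile takeover, sketched rather than verified against the wiki's case #7; the versions are placeholders, and the key point is that A must not rm_conffile the file once B ships it, or the user's edits are lost:

  # debian/control of B (placeholder versions)
  Package: B
  Breaks: A (<< 2.0-1~)
  Replaces: A (<< 2.0-1~)

  # B ships /etc/foo.conf (placeholder path) as a conffile; with
  # Breaks/Replaces in place dpkg transfers ownership of the file on
  # upgrade instead of treating it as a fresh default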
rafaeldtinococpaelzer: just fyi, sergiodj and I are working on qemu regressions, especially because of nettle (depending on its migration)21:09
rafaeldtinocoand nettle will unblock a lot of stuff from migrating21:10
rafaeldtinococurrently we're checking qemu->casper21:10
rafaeldtinocoand then qemu->open-iscsi and qemu->systemd21:10
rafaeldtinococasper works fine locally :\, I'm investigating lintian issues in all arches (arm64 / ppc64el - same test, s390x - other test) .. etc21:59
rafaeldtinocomore fun tomorrow21:59
rafaeldtinoco@pilot out21:59
=== udevbot changed the topic of #ubuntu-devel to: Archive: Open | 20.04 Released! | Devel of Ubuntu (not support) | Build failures: http://qa.ubuntuwire.com/ftbfs/ | #ubuntu for support and discussion of Trusty-Focal | If you can't send messages here, authenticate to NickServ first | Patch Pilots:
mwhudsonxnox: wanna talk about imsm a bit?22:44
xnoxYes.22:44
mwhudsonxnox: so you do ... stuff in the bios22:44
xnoxI have my next call at half past the hour.22:45
mwhudsonwhich results in a "mdadm container" in the OS22:45
mwhudsonxnox: voice or here?22:45
mwhudsoni don't mind22:45
xnoxmwhudson:  here is fine.22:45
xnoxmwhudson:  if you want voice, i will need to go outside.22:45
mwhudsonok22:45
mwhudsonhere is fine for me too22:45
mwhudsonyou can then slice this container up into actual block devices22:46
mwhudsonyou can choose a different raid level for each slice22:46
mwhudsonbut each slice spans all disks in the container22:46
mwhudsonaccurate so far? ^22:46
xnoxmore or less.22:46
mwhudsoncan you also create these "slices" (what's the proper word?) in the bios? i wasn't sure about that22:47
xnoxcontainer => mark and slurp up all the disks to make them imsm. Think of it as doing "pvcreate" on them, to say they will be used for Intel Raid.22:47
xnoxthere is md127 that correlates to the container, and it's mostly there to query the controller so that firmware and userspace agree on which disks are under intel raid management.22:48
xnoxso the slices..... those just look like normal RAID devices, that yeah, can be of any size and parity as needed.22:48
xnoxkind of like lvcreate22:48
xnoxbut they appear on the host as "md126, md125, md124" devices.22:49
xnoxwith their "physical" volume claiming to be like the container (md127) or like all the disks.22:49
xnoxbut yeah the parent device behind them is kind of fake.22:49
xnoxterminology wise, one creates RAID device22:50
xnoxinside a RAID container22:50
xnoxthat happens to consume hard-drives that must be connected to the right ports on the motherboard that have the raid stuff.22:50
xnoxmwhudson:  https://www.intel.co.uk/content/www/uk/en/support/articles/000005789/technologies.html22:51
xnoxmwhudson:  here are the bios docs, and a picture of how it normally looks in the bios, if one boots with "ctrl+i"22:51
xnoxso yeah they say "Create RAID Volume"22:51
xnoxas if one is creating a normal mdadm raid volume.22:51
mwhudsonxnox: can you partition these raid devices? can you put the ESP on one?22:52
xnoxmwhudson:  yes, yes.22:53
xnoxmwhudson:  it is more normal to create raid volumes, rather than partitioning them.22:54
mwhudsonxnox: also is there any difference from our point of view between expensive server VROC and consumer laptop RST?22:54
xnoxmwhudson:  i.e md126 esp, md125 rootfs22:54
xnoxmwhudson:  rather than md126p1 esp, md126p2 rootfs22:54
mwhudsonxnox: how do you mark the ESP if it's a raid device?22:54
xnoxmwhudson:  create md126, format it with fat32 and put ESP label on it.22:54
xnoxmwhudson:  create uefi boot entry by fsuuid as usual.22:55
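A sketch of the whole-device ESP xnox describes, with the device name from the conversation; the loader path assumes shim and is a guess, not something stated above:

  # format the raid volume itself as the ESP (no partition table)
  sudo mkfs.vfat -n ESP /dev/md126
  # boot entry pointing at the filesystem, as usual (loader path is an assumption)
  sudo efibootmgr --create --disk /dev/md126 --label ubuntu --loader '\EFI\ubuntu\shimx64.efi'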
mwhudsonusually the label is a partition type code in the GPT22:55
mwhudsonbut ok22:55
xnoxmwhudson:  firmware has UEFI intel matrix raid driver, meaning that UEFI firmware has drive(....) for the container, and the raid volumes created.22:55
xnoxhence it can find the ESP on them, as if they were real hardware raid disks.22:55
xnoxmwhudson:  sure, just do whatever one would do to it normally. Pretend that md126 is your typical sda122:56
mwhudsonxnox: i don't know how that works given there is no partition table in this case but well, a problem for another day22:57
xnoxmwhudson:  i didn't get on that system they shared with us, i think i can live convert it to what it should look like from an initrd.22:57
xnoxmwhudson:  you can create partition table too, if that's better. I think that's what we did in Rackspace OnMetal cloud.22:57
xnoxmwhudson:  i.e. create RAID1 all disks, all the things.22:57
xnoxwith md126, create GPT on it, create partitions on it, etc.22:58
xnoxthat's what d-i did too22:58
xnoxand everyone at Intel was happy with that, because it just worked.22:58
xnoxmwhudson:  it feels to me that in essence, one has to call `sudo mdadm --detail-platform` to check if the platform has raid capabilities22:59
xnoxmwhudson:  when the user creates a raid, it should assemble the container, and use mdadm commands with metadata set to the imsm format.22:59
mwhudsonxnox: so d-i basically creates a single device in the container and lets partman-auto have at that?22:59
xnoxmwhudson:  yes.23:00
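The mdadm calls behind what xnox outlines, following the container/volume pattern in mdadm(8); disk and array names are examples:

  # does the platform have an Intel Matrix Raid capable controller?
  sudo mdadm --detail-platform
  # container across the member disks (the "pvcreate" step)
  sudo mdadm --create /dev/md/imsm0 --metadata=imsm --raid-devices=2 /dev/sda /dev/sdb
  # a raid device inside the container (the "lvcreate" step)
  sudo mdadm --create /dev/md/vol0 --level=1 --raid-devices=2 /dev/md/imsm0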
mwhudsonso subiquity should (a) hide any disks that are part of a container23:00
xnoxmwhudson:  i think for raid0 one can even fake the imsm controller in a vm.23:00
mwhudson(b) do something like what d-i does when you choose the container in the guided view23:01
mwhudsonxnox: ah i was wondering if there was qemu support for all this23:01
xnoxmwhudson:  often people buy servers from intel preconfigured with raid1 across all disks.23:01
xnoxand if one has it auto-assembled, guided should offer to "autopartition md127 Intel Raid"23:01
xnoxmwhudson:  or people can choose to destroy the md127 container, and then all disks become eligible for manual partitioning.23:02
mwhudsoni guess the container needs to be a new kind of thing in the UI23:02
mwhudsonxnox: oh, i assumed that would be something you'd have to do in the bios23:02
xnoxmwhudson:  it's either or.23:02
mwhudsoncan you create a container from the linux side too?23:02
xnoxmwhudson:  yes.23:02
mwhudsonoh23:02
mwhudsonwhat is the benefit of this vs bog standard md raid again? :)23:03
xnoxmwhudson: the on-disk format is very simple. The VROC key / controller just has CPU-assisted quick parity calculations and recovery/data moving, such that the CPU can do that without actually pushing data up to the kernel and back down to disk.23:03
mwhudsoni guess performance and the firmware support for reading from the raid devices23:03
xnoxmwhudson:  and to use the CPU-assisted calculations, one has to ensure that the disk is in the format that the cpu knows to read directly.23:04
mwhudsonoh being able to do recovery behind the scenes sounds pretty nice23:04
xnoxmwhudson:  benefit is that they wrote proprietary/BSD UEFI driver that Microsoft signed.23:04
xnoxmwhudson:  actually recovery still needs to be started by something, and monitored.23:04
xnoxmwhudson:  what you will find when container is assembled is that mdmon@md127.service is started which needs to stay alive until after pivot to finalrd.23:05
xnoxmwhudson:  or firmware has equivalent.23:05
mwhudsonxnox: oh ok23:05
xnoxmwhudson:  it's a small agent basically to get cpu notifications, and push down messages. "Yes, borken, start repair"23:05
xnox"is it done yet?"23:05
xnox"is it done yet?"23:05
xnox"is it done yet?"23:05
xnox"ok, pretending it is healthy now"23:06
xnoxfake things in /proc/mdstat23:06
xnoxmwhudson:  let me try to get onto that machine23:06
mwhudsonxnox: so on an imsm-capable system (how do you detect that?) subiquity should support "mdraid containers" which can be made out of disks (not partitions) and can have raid devices created within them23:07
mwhudsoni guess "disks connected to the right ports"23:07
mwhudsoncan you create multiple containers out of disjoint subsets of candidate disks? i guess so23:08
xnoxyou can have many controllers too23:13
xnoxtypically with different disks connected to each one23:13
xnoxsudo mdadm --detail --detail-platform23:13
xnoxis what you call to check if there are any controllers that support intel matrix raid.23:13
xnoxthat call can be done on any machine23:14
xnoxiDRAC thing is scary23:18
mwhudsonah so disks are connected to a controller23:44
mwhudsonxnox: sorry attention wandered for a bit there23:44
mwhudsonso if you're creating a container, you then have to choose which controller (if >1) and then can choose disks attached to that controller23:45
