=== gurmble is now known as grumble
[02:09] hmm
[02:10] sbuild on bionic for me is behaving like the debian builders: only considering the first option if there are alternatives
[02:10] ubuntu builders aren't like that, are they?
[02:10] or am i misremembering
[02:19] ah yes, $resolve_alternatives = 1 in the sbuildrc in launchpad-buildd
[02:30] mwhudson: file a wishlist bug for https://launchpad.net/ubuntu/+source/sbuild-launchpad-chroot ?
[02:32] jbicha: i don't think that's used any more
[02:32] i guess i could and maybe should file a bug asking for it to be removed from the archive
[02:37] mwhudson: please don't. I use it
[02:37] unless you have a reason why I shouldn't…
[02:37] maybe i misunderstand what it is
[02:37] i know for ages launchpad used a fork of sbuild
[02:38] but i don't think it does any more
[02:38] it's a tool to help set up sbuild locally to be a bit more like the launchpad builders
[02:38] ah ok, it's not that
=== cpaelzer_ is now known as cpaelzer
[05:41] jbicha: btw, I'm following the pylint stuff in debian. do expect me to follow up in ubuntu as well (I saw you synced at least two related packages - including one that ftbfs because you didn't sync another one)
[05:44] (and py3.7 stuff in general)
[06:27] Huh.
[06:27] Why is Mir still in main?
[06:28] I thought that packages that were not seeded and did not have any rdepends in main were auto-demoted to universe?
[06:28] Or, at least, would show up on component-mismatches?
[06:36] Oh, it's in supported.
[06:44] you're not off the hook ;)
=== abeato_ is now known as abeato
[09:29] jbicha: if you have time, can you maybe initiate dropping libundead from armhf? As I wrote in my email, this should unblock the LDC migration we are receiving mails about
[09:30] the culprit is an LLVM bug tracked as https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=908670 which apparently won't be fixed in time
[09:30] Debian bug 908670 in llvm-toolchain-6.0 "libundead: ftbfs on armhf" [Serious,Open]
[09:31] (nothing depends on the library on armhf AFAIK, so it's safe to drop - debian has an arch-specific RM request already
[09:31] )
[09:31] :P
[10:26] alterjsive: any updates on https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/1794318 ?
[10:26] Launchpad bug 1794318 in mdadm (Ubuntu) "unable to boot after installing vagrant" [Undecided,Incomplete]
[11:05] sladen: I'm curious what would happen if I installed a new kernel right after the installation, without upgrading any other packages
[11:21] tkamppeter: I'm debating backporting ghostscript 9.25 to our stable releases all the way back to trusty. Are there any known issues with 9.25, and do you think I'll hit any problems doing so?
[11:22] alterjsive: yeah, we are curious too. But, in the meantime, please provide the debugging information needed
[11:24] sladen: I don't have much time now, but if you have a quick question, I can answer it. I will look at the information request in the launchpad issue tonight
[11:31] alterjsive: a quick question is: how was the IMSM RAID set up? Was this done purely in the BIOS (before any Ubuntu install had happened)?
[11:32] alterjsive: mdadm --detail-platform should show the IMSM config
[11:32] sladen: yes, it's configured in the bios only. My understanding is that fake raid means configured by the bios.
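For reference, the launchpad-buildd setting mentioned at [02:19] can also be set in a local sbuild configuration. A minimal sketch, assuming the usual per-user ~/.sbuildrc location; the surrounding lines are illustrative and not quoted from launchpad-buildd's actual sbuildrc:

    # ~/.sbuildrc (Perl syntax) - make sbuild consider alternative
    # build-dependencies instead of only the first one, matching the
    # behaviour of the Launchpad builders
    $resolve_alternatives = 1;

    1;  # an sbuildrc must end with a true value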
[11:34] sladen: https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/1794318/comments/14
[11:34] Launchpad bug 1794318 in mdadm (Ubuntu) "unable to boot after installing vagrant" [Undecided,Incomplete]
[11:34] alterjsive: and on this machine it was *only* configured via the BIOS?
[11:34] sladen: yes
[11:35] alterjsive: thank you (x2). Please could you also upload the output of mdadm --assemble --scan
[11:35] it's empty
[11:38] alterjsive: mdadm --examine /dev/sd[a-z]
[11:40] https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/1794318/comments/16
[11:40] Launchpad bug 1794318 in mdadm (Ubuntu) "unable to boot after installing vagrant" [Undecided,Incomplete]
[11:41] alterjsive, "fake raid means configured by the bios" - yes and no. mdadm fully knows how to operate on the intel drives; it can create them, change them, monitor them, etc.
[11:41] alterjsive, because it knows how to talk to the intel raid chipset....
[11:41] the bios can usually configure it too
[11:42] cool
[11:42] alterjsive, it looks like your motherboard / bios is borked.
[11:42] alterjsive, and you should reconfigure/reshape from raid1 to raid0
[11:43] borked?
[11:43] alterjsive, https://paste.ubuntu.com/p/rsVcy6Vd22/
[11:43] is the output for me
[11:43] granted, my firmware version appears to be old/out of date (no longer receiving OEM motherboard firmware updates)
[11:44] but I do have RAID Levels : raid0 raid1 raid10 raid5
[11:44] alterjsive, possibly poke things in the bios, to see why raid1 is not available.... or is disabled....
[11:44] ahhh, okay, the chipset is saying it can't do raid1
[11:45] but raid1 was clearly set up, and is reported in the actual mdadm --examine scan. And was (probably) working just fine
[11:45] It's currently configured for mirror raid
[11:45] alterjsive, if you are using the brand new VROC (you don't appear to be, because you don't have nvme), note that one has to buy an optional hardware key and intel drives to enable NVMe
[11:45] I can select raid 0 too
[11:45] but some new check/validation in the new version of mdadm is then deciding that, because raid1 doesn't appear in the list, the raid1 mapper instance can't be set up
[11:45] alterjsive, sladen - configured, yes.... but that's simply the metadata on the drives. i believe it's not actually syncing across the two as one expects from raid1
[11:46] sladen, which is safe.
[11:46] alterjsive, you don't appear to have raid1 capability - or the hard drives you have plugged in are disqualified from being able to do raid1
[11:47] xnox: i had to select a master and a slave. My hard disks appear to be exactly the same
[11:47] alterjsive, i guess the best you can do is seek support from your motherboard manufacturer.
[11:47] alterjsive: please can you take a photo from the BIOS of what *it* thinks the capabilities are
[11:47] alterjsive, and are they plugged into the intel-raid-enabled ports? at least on my motherboard only some ports are hooked up.
[11:48] alterjsive: so that we can see if there's a difference between what the BIOS sees of its own capabilities, and what mdadm --detail-platform sees as the capabilities
[11:48] sladen: I did a raid checkup using the windows intel raid drivers. it said it was functioning normally
[11:48] you have windows installed too on this raid?
[11:48] horum.
[11:49] no, on a different disk, but I formatted my hdd to ext4 recently and strangely it messed up my windows boot. It's no longer listed.
[11:49] alterjsive: another possibility: did a certain RAID mode change when the two newer disks were plugged in? Were they plugged in *after* the initial IMSM setup of the first two disks?
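For convenience, the diagnostics being requested in this exchange (plus /proc/mdstat, which comes up later in the session) collected in one place. A hedged sketch - the device glob and use of sudo are illustrative, not quoted from the log:

    # IMSM / "fake RAID" platform capabilities as mdadm sees them
    sudo mdadm --detail-platform
    # per-disk RAID metadata (superblocks); adjust the device list to the machine
    sudo mdadm --examine /dev/sd[a-z]
    # try to assemble any arrays described by that metadata
    sudo mdadm --assemble --scan
    # what the kernel currently has assembled
    cat /proc/mdstat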
[11:51] sda, sdb and sdd are ssd's. sdc is a hdd. sda & sdb are in mirror raid; sda is the master and sdb is the slave
[11:52] I installed windows on sdd, and when I repartitioned sdc my windows boot was messed up
[11:53] yeah
[11:53] and installing windows probably messed up your raid
[11:53] you did disconnect sda & sdb from the system when installing windows onto sdd?
[11:53] xnox: no, I didn't know it was this destructive
[11:53] i've never managed to dual boot windows, unless i keep my intel raid unplugged when touching windows.
[11:54] alterjsive: are you available for more in-depth debugging? (1) reboot, take a picture of the BIOS config. (2) unplug sdc and sdd. (3) take a picture of the BIOS config. (4) boot linux, take a screenshot of mdadm --detail-platform
[11:54] alterjsive, as a fun experiment: power down, unplug sdc and sdd
[11:54] alterjsive, boot into the bios, resync the raid array in the bios - until it is in sync.
[11:54] alterjsive, then continue normal boot
[11:55] alterjsive, check status with mdadm --detail-platform and --examine
[11:55] alterjsive, if it's still not assembled / detail-platform gives you raid0 only, you need to contact better support - i.e. the mdadm upstream mailing list & your motherboard support/warranty.
[11:55] I have other projects too, urgh, i've just bought 4 servers for my openshift cluster :)))
[11:56] ok, I'll see tonight
[11:56] * sladen shrugs
[11:56] possibly need to check if there are any firmware upgrades
[11:56] and possibly replace these.....
[11:56] alterjsive, well, you can launch OnMetal instances in the Rackspace cloud -> the v2 ones use Intel Raid by default.
[11:57] alterjsive, and mdadm does work there on a variety of machines. and locally for me.... it does smell like your chipset/motherboard is borked.
[11:57] alterjsive, or forget about this intel raid, and reinstall just using regular linux raid.
[11:57] that is the best i can suggest here.
[11:57] alterjsive, http://vger.kernel.org/vger-lists.html#linux-raid is the mailing list
[11:58] for upstream mdadm
[11:58] xnox: I tried raid 5 with mdadm, no luck
[11:58] I tried for days, gave up in the end
[11:58] alterjsive, mdadm asked the chipset, the chipset said raid0 only.
[11:58] let's maybe debug the current issue, because it has the /potential/ to also affect a tonne of other people with real enterprisey systems
[11:59] alterjsive, did you already tell us the specific motherboard / server this is?
[11:59] not yet
[11:59] is that confidential?
[11:59] MSI GS70 gaming laptop
[12:00] https://www.msi.com/Laptop/GS70-2QE-Stealth-Pro/Specification
[12:01] alterjsive, it is odd that you are offered actual raid on it. typically the intel rapid storage on those is there to do, effectively, bcache to accelerate a hdd with an ssd - which on the windows side is confusingly called the same thing, and is typically not raid
[12:02] and in the bios you should have options to select what type of driver you want to be used for the hard drives: normal ssd or raid.
[12:02] well, until we have screenshots of the BIOS, we don't know what it is offered / thinks has been configured
[12:03] more specifically: MSI Gaming Notebook GS70 2QE-046NL 17.3", 4710HQ, 1.5TB, GTX970M
[12:03] ok, I'll reboot, 1 sec
[12:03] alterjsive: dmidecode will say more about what the laptop actually is
[12:04] true about dmidecode
[12:04] dmidecode?
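To answer the question just asked: dmidecode dumps the machine's DMI/SMBIOS tables, which identify the exact system and firmware. A minimal sketch; the -t selections are an illustration of narrowing the output, not something requested in the log:

    # dump everything (this is what gets uploaded to the bug below)
    sudo dmidecode > dmidecode.txt
    # or just the sections that identify the machine and its firmware
    sudo dmidecode -t system -t baseboard -t bios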
[12:05] alterjsive: sudo dmidecode > dmidecode.txt --> upload
[12:05] https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/1794318
[12:05] Launchpad bug 1794318 in mdadm (Ubuntu) "unable to boot after installing vagrant" [Undecided,Incomplete]
[12:05] brb, reboot
[12:07] nacc: mwhudson: thanks (delayed, was on pto) :)
[12:12] alterjsive: congrats on owning a "Manufacturer: To Be Filled By O.E.M." ;-)
[12:13] https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/1794318/comments/19
[12:13] Launchpad bug 1794318 in mdadm (Ubuntu) "unable to boot after installing vagrant" [Undecided,Incomplete]
[12:13] sladen: haha
[12:14] sladen: I recently updated my bios too, after these problems.
[12:20] https://www.msi.com/Laptop/support/GS70-2QE-Stealth-Pro-4.html#down-bios
[12:21] mdeslaur, I think not. I did not hear about any regressions in 9.25.
[12:22] so RAID Level: "Recovery (Cont.)" appears to be "Recovery Set" == RAID1
[12:23] yeah, seems to work fine
[12:23] using the kernel that still boots
[12:24] and the newer kernel (newer initramfs + mdadm)?
[12:25] alterjsive: and what about these two with sdc and sdd unplugged?
[12:26] I showed you the mdadm error yesterday when I boot the latest kernel
[12:26] screenshot
[12:26] I don't have a screwdriver to unplug the ssd's
[12:27] I've got a laptop, not a rack server :)
[12:27] I can check tonight
[12:27] brb
[12:29] alterjsive: yes, the question is whether there is a difference in reporting when non-RAID drives are on the same controller. Hopefully not, but that's what debugging is about
[12:31] sladen: ok, I will check tonight
[12:33] xnox: there's another difference, isn't there: the mdadm used in the initramfs on bootup is a different mdadm from the one used when alterjsive ran these commands and uploaded the output
[12:35] alterjsive: okay, please can you reboot and select "safe mode" for the *working* kernel. then run mdadm --detail-platform and take a photo
[12:36] alterjsive: this will show what the older copy of mdadm is successfully seeing on bootup, rather than the mdadm that is on the main root filesystem
[12:38] sladen: I will do it tonight, I should get back to work, sorry :/
[12:39] maybe in a few hours
[12:52] mapreri: ok, I'll let you handle the pylint stuff :)
[12:59] alterjsive: can you add cat /proc/mdstat to the list too
[13:00] using the safe mode?
[13:01] alterjsive: just in the running OS
[13:01] cat /proc/mdstat
[13:01] Personalities :
[13:01] unused devices:
[13:02] could I have read somewhere that intel raid switched to another raid manager program since the latest kernel?
[13:02] I can't remember
[13:03] there was a switch from dmraid to mdadm, but ages ago
[13:04] ah ok
[13:04] never mind then
[13:06] I don't understand the /proc/mdstat output. When I used linux mdadm to create a raid array it gave a totally different output
[13:12] alterjsive: on *this* machine?
[13:13] sladen: yes, I tried to set up raid 5 after I suddenly couldn't boot my raid 0 system with sda, sdb and sdd.
[13:15] jbicha: it's nearly through anyway ^^
[13:16] I've also uploaded a bunch of py3.7-buggy related packages in debian today, guess I'll sync them up tonight…
[13:16] https://gist.github.com/samrocketman/9677ca29e0fbaab8f8e55ebc3039172a
[13:16] alterjsive: without de-configuring the BIOS/IMSM mirroring?
[13:17] sladen: I disabled raid in the bios first, ofc
[13:18] alterjsive: when was this? A year ago, this week, yesterday?
[13:18] sladen: a week ago
[13:19] alterjsive: and how old is this RAID pair we're trying to debug?
[13:19] alterjsive: was it created in the BIOS less than a week ago, or more than a week ago?
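The distinction raised at [12:33] (the mdadm inside the initramfs versus the one on the root filesystem) can also be checked without rebooting. A hedged sketch, assuming the standard Ubuntu initramfs-tools layout and the currently running kernel:

    # version of mdadm on the root filesystem
    mdadm --version
    # confirm a copy of mdadm is packed into the initramfs that boots the system
    lsinitramfs /boot/initrd.img-$(uname -r) | grep -w mdadm
    # after an mdadm upgrade, `sudo update-initramfs -u` refreshes that copy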
[13:19] few days old
[13:20] alterjsive: after the RAID5 experiment, or before the RAID5 experiment?
[13:21] after
[13:24] doko: cyphermox: didrocks: I got minimal changes, a few +1s and nothing else, so I'm going to add the proposed changes to the Wiki page
[13:24] it is a wiki after all, so we can change it if someone realizes it is bad later on
[14:06] cpaelzer: nice ascii art :)
[14:14] hehe
[14:14] you can even copy and modify it still :-)
[14:42] LocutusOfBorg: does virtualbox run alongside libvirt these days?
[14:42] who knows?
[14:42] :)
[14:42] no idea...
[14:42] ok
[14:42] I know I can't run kvm and vbox together
[14:43] because of some race condition on hardware virtualization
[14:43] but this seems to be a cpu problem
[14:43] right, ok
[14:43] you mean run them together, right?
[14:43] yes
[14:44] I think this is a hardware problem, not software; the resource can't be shared
[14:44] but I might be wrong, vt-x is an obscure word for me
[14:44] the vbox client can stay, but if I run a VM it gives an error (kvm gives it)
[14:44] i see there's a vbox hypervisor driver for libvirt nowadays, dunno if it'd help here
[14:45] I can run many vbox instances at the same time, not really sure how
[14:52] cpaelzer: just reviewed your changes, looks good, thanks :)
[14:57] thank you didrocks
[16:37] sladen: so you want me to disconnect the hdd's
[16:38] sdd and sdc
=== tsr_dev_ is now known as tsr_dev
=== slangasek is now known as vorlon
=== vorlon is now known as slangasek
=== slangasek is now known as vorlon
[23:46] sladen, it is funny, they do claim on the website to support super raid
[23:47] Up to 768GB Super RAID 2 + 1TB HDD 7200rpm
[23:47] "super raid 2"??
[23:47] do I even want to know?
[23:47] https://www.msi.com/Laptop/GS70-2QE-Stealth-Pro/Specification
[23:47] sarnold, intel rapid storage raid / imsm stuff -> but it appears to be broken with the newer, stricter mdadm.
[23:48] grep for alterjsive in the scrollback
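On the kvm/vbox question from the [14:42]-[14:45] exchange above: whether both hypervisors' kernel modules are loaded at once can be checked from the shell. A hedged sketch; kvm-ok comes from the cpu-checker package, and the module-name prefixes are the usual ones rather than something stated in the log:

    # which hypervisor kernel modules are currently loaded (kvm, kvm_intel, vboxdrv, ...)
    lsmod | grep -E '^(kvm|vbox)'
    # check whether hardware virtualisation (VT-x/AMD-V) is usable at all
    kvm-ok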