[01:58] is there anything wrong with a process that just sleeps instead of cron handling it? It is being managed by supervisord so it should handle failures, but is there any reason why this shouldn't be done?
[01:59] it's perfectly fine
[04:02] jayjo: it should be fine as long as it doesn't leak memory, I guess...
[04:05] or suffer from significant memory fragmentation
[05:05] anyone know how to stop ubuntu server from sleeping when a laptop lid gets closed?
=== crimastergogo_ is now known as crimastergogo
[07:38] unshackled: Is X running?
[08:48] part #ubuntu-server
[16:19] ugh FINALLY I can get back to work on nginx >.>
[16:20] (upgraded my laptop from 16.04 -> 18.04 in place, which exploded; thankfully I had a full disk image of 16.04, so I could blast away the failed upgrade with a clean 18.04 and restore my data...)
[16:20] (three days later and I finally have things working again >.<)
[21:58] hi, i'm running ubuntu-server 14.04.5 on a UEFI system, with LVM on (dm-)RAID1. Now I want to install 18.04.1 beside the working installation (i.e. on another, dedicated LV), and switch to the new ubuntu server installation when everything's working fine. My questions are: a) is GRUB able to boot from a logical volume stacked on top of a dm-raid, without an extra boot partition? (Currently, my root is outside the LVM, just on RAID)
[21:59] b) is it possible to put the EFI partition into a RAID too, without disturbing the UEFI?
[22:01] and c) (or maybe it belongs to b, too) can the UEFI firmware handle multiple EFI partitions on different hard disks?
[22:03] panne: dm-raid (proprietary hardware / fakeraid) or mdraid (software raid / intel fakeraid)?
[22:05] does the system currently boot in uefi mode? or CSM/legacy bios?
[22:06] the ESP must remain a partition on a partitioned disk; intermediary layers such as software raid and lvm won't work, since the firmware doesn't understand those.
[22:07] tomreyn: oh, uh... how do i tell?
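(The lid-close question above went unanswered in channel. On systemd-based releases — 15.04 and later — the usual fix is to tell systemd-logind to ignore the lid switch. A sketch, not verified on every release; the config file and keys are the standard logind ones:)

```shell
# /etc/systemd/logind.conf -- make the server ignore the lid switch
# instead of suspending. Uncomment/set these keys:
#   HandleLidSwitch=ignore
#   HandleLidSwitchDocked=ignore
# One way to apply the first key non-interactively, then restart logind:
sudo sed -i 's/^#\?HandleLidSwitch=.*/HandleLidSwitch=ignore/' /etc/systemd/logind.conf
sudo systemctl restart systemd-logind
```

(Restarting systemd-logind is enough; no reboot needed.)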
i think it's software raid (it's administered via mdadm!), so probably mdraid? But the devices are named dm-##, which is why i thought it would be dm-raid...
[22:07] you can have multiple ESPs on different disks, but the firmware will only use the first one it comes across (but at least this allows for a manual failover scenario).
[22:08] booting in uefi mode - i don't think the box has CSM at all (it's an IBM System x server)
[22:08] what you manage with mdadm is md raid, not dm-raid
[22:08] ah, ok
[22:10] you can have /boot on md raid-1, / on lvm2 on top of md raid-1.
[22:10] i *think* you can also have /boot on lvm2 on top of md raid-1, but it can get finicky.
[22:12] dm is device mapper, i think? does md use dm? (because the raid and lv devices all have symlinks in /dev/mapper/ ...)
[22:15] i'm a little confused... ^^
[22:15] tomreyn: do you have any hints where i could read more about grub, boot/root, lvm and mdraid interaction?
[22:17] I've been reading through the grub mailing lists for almost 2 days now, trying to find similar setups/problems, but... not really helpful
[22:18] (it's almost more confusing the more i read.)
[22:19] lvm uses device mapper, and so does dmcrypt-luks, but i think md does not.
[22:20] panne: the arch wiki is often a good resource for reading up on pitfalls of custom configurations; ubuntu's wiki covers the scenarios the installers support, and sometimes more than those.
[22:25] tomreyn: well, you're right - i just looked again: the root fs is named md0, not dm-0; and it's not in /dev/mapper/... Arch wiki, yes! I will have a look, thank you very much!
[22:25] you're welcome, good luck.
[22:27] panne: if you're unsure what layers you have right now, lsblk can usually point you in the right direction.
[23:15] Hi there. Anybody available to help me repair a server booting problem?
[23:30] Anybody here at all?
[23:44] sevynos: yes, with half an eye... ;)
[23:45] lol, so I hope you don't need glasses on that half eye ;)
[23:45] sevynos: what's your problem?
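(tomreyn's lsblk tip above, spelled out: the TYPE column makes the storage stack explicit, which would have settled panne's md-vs-dm confusion directly. A sketch of one common invocation; output depends entirely on the machine it runs on:)

```shell
# Show the full block-device stack. The TYPE column distinguishes
# "raid1" (md), "lvm" and "crypt" (device-mapper), "part", and "disk".
lsblk -o NAME,TYPE,FSTYPE,SIZE,MOUNTPOINT
# md arrays appear as /dev/md*; device-mapper nodes live under /dev/mapper/.
```

(The tree indentation in lsblk's NAME column also shows which layer sits on which.)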
[23:45] Are you good at boot and kernel issues?
[23:47] well... ATM i have some questions about booting things myself, but tell us your problem - if i'm not able to help, maybe someone else is
[23:48] (and my problems are somewhat special, i do think.)
[23:48] i.e. don't ask to ask on irc.
[23:49] I had a full /boot partition issue. I tried to do an autoremove, but it always failed because it had no room to work (so I guess). So I manually removed unused kernels. Afterwards the system was unable to boot. I used Boot Repair from a live cd and it messed up the system completely.
[23:51] sevynos: by "manually removed", do you mean "rm" or "apt remove/purge"?
[23:51] with dpkg
[23:52] 'boot repair' (not a supported utility here AFAIK) tends to fail occasionally. chances are you've now got an unbootable system and need to chroot from a live system to revive it.
[23:54] That's what I want to do, but i'm no expert at linux, so I tried to find the procedure to follow, without success
[23:54] before you do this you should probably try all kernel versions listed in grub's "advanced" submenu though; one may still work and save you time.
[23:55] boot repair messed up my grub. Now I only have two entries about EFI systems and neither of them works
[23:58] https://askubuntu.com/questions/28099/how-to-restore-a-system-after-accidentally-removing-all-kernels
[23:59] tomreyn: Thanks, I will look at that.
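(The chroot revival tomreyn refers to, as a rough sketch run from an Ubuntu live/rescue session. The device names /dev/md0 for root and /dev/sda1 for the ESP are assumptions for illustration — substitute your own layout, which lsblk will show:)

```shell
# From a live session: mount the broken install, bind the virtual
# filesystems, then reinstall a kernel and GRUB from inside it.
sudo mount /dev/md0 /mnt                      # installed root fs (assumed name)
sudo mount /dev/sda1 /mnt/boot/efi            # the ESP, on UEFI systems (assumed name)
for fs in dev proc sys; do sudo mount --bind /$fs /mnt/$fs; done
sudo chroot /mnt apt-get install --reinstall linux-image-generic
sudo chroot /mnt grub-install                 # on UEFI, rewrites the EFI boot entry
sudo chroot /mnt update-grub                  # regenerate the boot menu
```

(Reinstalling linux-image-generic restores a kernel if dpkg removed them all; the askubuntu link above walks through the same procedure in more detail.)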