[01:55] <mattwynne> I broke my server by cutting the connection half way through an `apt-get upgrade` :(
[01:56] <mattwynne> I am trying to repair it, but it seems pretty bad.
[01:58] <mattwynne> I'm trying to just reinstall the OS using the server ISO, but I can't work out how to use the manual filesystem setup. I have a RAID array (RAID 1) for both `/` and `/boot` mount points.
[01:58] <mattwynne> I can't seem to tell the installer about this setup
[01:59] <mattwynne> on the desktop live CD I can install `mdadm` and run `mdadm --assemble --scan` and everything is as it should be (except the broken OS)
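The live-CD recovery path described above can be sketched as follows; the array device names (`/dev/md0` for `/`, `/dev/md1` for `/boot`) are assumptions and will differ per machine:

```shell
# Sketch of the live-session recovery steps; run as root from the
# desktop live CD after installing mdadm. Device names are assumptions.
if [ "$(id -u)" -ne 0 ] || ! command -v mdadm >/dev/null 2>&1; then
    echo "run this as root in a live session with mdadm installed"
else
    mdadm --assemble --scan      # find and start all RAID arrays
    cat /proc/mdstat             # confirm both md devices came up active
    mdadm --detail /dev/md0      # inspect the array assumed to hold /
fi
```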
[02:00] <mattwynne> but doing this by hand in the server setup screens is beyond me right now
[02:00] <mattwynne> Is there a guide you can point me to?
[02:08] <oerheks> apt install -f or sudo dpkg --configure -a
[02:15] <mattwynne> ooh thanks oerheks would I use that from the desktop live CD?
[02:16] <oerheks> you could boot the server in recovery mode and run those from there?
[02:17] <mattwynne> The problem I'm facing right now is that the boot is not working.
[02:18] <mattwynne> I tried to use `update-grub` from within a chroot (as in https://help.ubuntu.com/community/Grub2/Installing) but there was nothing in /etc/grub.d for it to create a /boot/grub/grub.cfg from
[02:18] <mattwynne> I found a `/etc/grub.d.bak` and tried that but it only has a memtest in it
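An empty `/etc/grub.d` usually means the grub packages themselves were damaged mid-upgrade; reinstalling them inside a fully bind-mounted chroot restores the scripts `update-grub` needs. A sketch, assuming `/dev/md0` holds `/`, `/dev/md1` holds `/boot`, and BIOS booting from `/dev/sda` (all assumptions, and the chroot needs network access for the reinstall):

```shell
# Rebuild the chroot with the pseudo-filesystems grub needs, then
# reinstall the grub packages to restore /etc/grub.d.
if [ "$(id -u)" -ne 0 ] || [ ! -e /dev/md0 ]; then
    echo "run as root from a live session with the arrays assembled"
else
    mount /dev/md0 /mnt
    mount /dev/md1 /mnt/boot
    for fs in dev dev/pts proc sys; do mount --bind /$fs /mnt/$fs; done
    chroot /mnt apt-get install --reinstall grub-pc grub-common
    chroot /mnt update-grub             # now finds /etc/grub.d again
    chroot /mnt grub-install /dev/sda   # BIOS boot; use grub-efi-amd64 on UEFI
fi
```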
[02:44] <tomreyn> mattwynne: did you manage the manual partitioning, yet? also, which server installer release are you using exactly?
[02:44] <tomreyn> generally, the 18.04.3 installer should be able to do what you described
[02:45] <tomreyn> also are you uefi or bios booting?
[09:28] <mattwynne> tomreyn no, no luck yet. I gave up and went to bed
[09:29] <mattwynne> I have been trying with the ubuntu-18.04.3-live-server-amd64.iso image
[09:30] <mattwynne> I am going to try booting into the Desktop live CD again today, setting up the chroot to my `/`, and see if I can get `apt-get install -f` to fix things
[09:30] <mattwynne> Do you think that's feasible?
[09:30] <mattwynne> Or is there a guide for using the manual partitioning page on the server install CD? It's pretty hard to work out how to use it.
[09:30] <mattwynne> (for me anyway!)
[16:08] <weedmic> trying to do "balooctl purge", but it is not a listed option - what is the correct option?
[16:08] <weedmic> I want to erase the index so it builds anew
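On Baloo versions whose `balooctl` lacks a `purge` subcommand, one workaround is to stop the indexer, delete the index file by hand, and re-enable it. `~/.local/share/baloo/index` is the default index location, an assumption for this setup:

```shell
# Force Baloo to rebuild its index from scratch when `balooctl purge`
# is unavailable. The index path is the usual default (an assumption).
if ! command -v balooctl >/dev/null 2>&1; then
    echo "balooctl not installed here"
else
    balooctl disable
    rm -f ~/.local/share/baloo/index
    balooctl enable      # indexer restarts and builds a fresh index
fi
```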
[16:34] <weedmic> perhaps the index was truncated, and disabling and re-enabling it creates a new one?
[16:41] <tomreyn> desktop -> #ubuntu (or your flavour, this one would be #kubuntu possibly), please (still).
[18:41] <vlm> is there a way to limit the number of ssh instances that run? I seem to have additional instances running which I can't account for
[18:43] <lordievader> Client or server instances?
[18:43] <vlm> ohh I meant server instances
[18:45] <lordievader> That is odd. A `ps faux` may tell you who/what started the sshd.
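A runnable variant of that check, filtering `ps` output down to the sshd processes and their parent PIDs:

```shell
# List every sshd with its PID, parent PID, owner, and command line.
# The [s] in the pattern keeps awk from matching its own process entry.
ps -eo pid,ppid,user,args | awk 'NR==1 || /[s]shd/'
```

Following the PPID column upward shows what launched each instance; a normally started daemon should trace back to systemd (PID 1).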
[18:46] <vlm> |       |   \_ /usr/sbin/sshd -D -e
[18:48] <vlm> normally when I've shut down the sshd service I lose connectivity, but this time I didn't
[18:49] <lordievader> Well, I meant to look at its parent processes. It should normally be systemd (or some other init system).
[18:52] <vlm> ...it was a container process
[18:52] <vlm> thanks for the help
[18:55] <lordievader> Ah, that makes sense: from the hypervisor you can see the containers' processes too. Those sshds need to run if you want to ssh into the containers.
[18:56] <vlm> ugh, other than it wasn't, it seems. I have a container running, but that shouldn't let me stay connected while shutting down the sshd service, I think
[18:57] <vlm> on the host of the container that is
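To confirm which sshd actually owns a connection, one can compare what is listening on port 22 on the host against the state of the host's ssh service; a container's sshd hangs off the container's init, not the host service. A sketch (command availability varies by system):

```shell
# Show listeners on port 22 and the host ssh service status side by side.
# With a container sshd, the listener can survive stopping the host unit.
if ! command -v ss >/dev/null 2>&1; then
    echo "ss (iproute2) not available"
else
    ss -tlnp 2>/dev/null | grep ':22 ' || echo "nothing listening on :22"
    systemctl status ssh --no-pager 2>/dev/null | head -n 3
fi
```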