[07:26] <lordievader> Good morning
[11:12] <alkisg> Hi, does the new subiquity installer leave this entry? /etc/default/grub:GRUB_CMDLINE_LINUX_DEFAULT="maybe-ubiquity"
[11:12] <alkisg> It ends up passing as a parameter to systemd's init, it shows up in ps...
[11:13] <BlueEagle> !context
[11:13] <BlueEagle> That should be a factoid to be sure.
[11:14] <alkisg> Thanks, "I installed ubuntu-server 20.04 and it looks like the installer leaves a bad kernel cmdline in grub.cfg"
[11:15] <alkisg> So my question is if everyone gets this entry, which would make it a bug, or if something in my setup triggered it...
[11:16] <alkisg> If `cat /proc/cmdline` shows "maybe-ubiquity", then it's a bug
[11:17] <BlueEagle> alkisg: I set up my server last month and I did not get that result. I set up on amd64 though.
[11:17] <alkisg> BlueEagle: thank you, what do you mean "amd64", did you use the ubuntu-server.iso, or the ubuntu-desktop.iso?
[11:18] <BlueEagle> alkisg: my immediate thought is to re-install grub and see if that helps.
[11:19] <alkisg> Oh I can easily fix it by just updating /etc/default/grub; the bug would be in the new subiquity installer, and I'm asking for confirmation in order to report it to launchpad...
[11:19] <BlueEagle> alkisg: I used ubuntu-20.04.1-live-server-amd64.iso
[11:19] <alkisg> Hmm, I think I used that exact same one :/
[11:21] <BlueEagle> alkisg: I installed it at the beginning of November and did not see your result. Not sure which version of subiquity was used, but I did update it during the install. However, if you installed today, a newer version of subiquity could have been used.
[11:25] <alkisg> I installed a week ago
[11:25] <alkisg> I did the "update installer" step
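For anyone hitting the same thing, the fix alkisg alludes to is a one-line edit plus a grub regeneration. A minimal sketch, demonstrated on a scratch copy since the real file needs root and a follow-up `sudo update-grub`:

```shell
# Clear the stray "maybe-ubiquity" token from GRUB_CMDLINE_LINUX_DEFAULT.
# Shown against a scratch copy; on a real system, back up and edit
# /etc/default/grub, then run `sudo update-grub` to regenerate grub.cfg.
printf 'GRUB_CMDLINE_LINUX_DEFAULT="maybe-ubiquity"\n' > /tmp/grub-default.test
sed -i 's/"maybe-ubiquity"/""/' /tmp/grub-default.test
cat /tmp/grub-default.test
```

After a reboot, `cat /proc/cmdline` should no longer show "maybe-ubiquity".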
[11:33] <Skyrider> Greetings
[11:33] <Skyrider> I have a quick question. I'm using this to enforce a key file upon login: https://pastebin.ubuntu.com/p/rqYyht4YvN/
[11:34] <Skyrider> SFTP connection works perfectly, no issues. But when I connect to the terminal, it says `This service allows sftp connections only.` - While SFTP is set.
[11:34] <Skyrider> At least according to the MobaXterm protocol.
[11:35] <Skyrider> Noticed that when I use putty/Kitty, connection instantly closes after I put in my key password
[11:37] <Skyrider> Is it because of "ForceCommand internal-sftp"?
[11:38] <alkisg> I believe "ForceCommand internal-sftp" would allow SFTP (and SSHFS, which runs over SFTP) but prohibit interactive SSH sessions, yeah
[11:46] <Skyrider> Aha. That's good to know, thank you!
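For context, the behaviour Skyrider sees is exactly what `ForceCommand internal-sftp` is for. A typical sftp-only stanza looks something like this (a sketch with a hypothetical `sftponly` group, not necessarily Skyrider's exact config):

```
# sshd_config sketch: members of the hypothetical "sftponly" group get the
# in-process SFTP server instead of a shell, so SFTP transfers work while
# interactive SSH logins are refused with a message like the one quoted above.
Match Group sftponly
    ForceCommand internal-sftp
    ChrootDirectory %h
    AllowTcpForwarding no
    X11Forwarding no
```

Note that with `ChrootDirectory`, the target directory must be root-owned and not group/world-writable, or sshd will reject the login.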
[13:48] <DammitJim> is there a way to easily break up a software raid1?
[13:49] <sdeziel> DammitJim: yup: pull one of the drives ;)
[13:49] <DammitJim> just like that?
[13:50] <sdeziel> DammitJim: it does what you asked for: it really breaks the mirror, since only the remaining disk will be kept current
[13:50] <sdeziel> but maybe you wanted to move away from RAID1 to a non RAID-ed setup?
[13:51] <BlueEagle> or to a raid5?
[14:12] <DammitJim> sdeziel, and BlueEagle you guys are smart guys
[14:12] <DammitJim> yes, I do want to move away
[14:12] <DammitJim> this was more of an experiment and next thing you know this system is very critical for testing systems
[14:13] <sdeziel> DammitJim: I'm not sure it's an easy procedure to move away from mdadm altogether. That said, RAID1 sounds like a good thing for a critical system
[14:13] <DammitJim> but it's so huge, I want to not have so much storage being used because it's causing me backup problems
[14:13] <DammitJim> anyways...
[14:14] <sdeziel> DammitJim: RAID1 doesn't give you more usable space
[14:14] <DammitJim> it's all in a SAN and we have a twin system
[14:14] <DammitJim> sdeziel, correct, it doesn't give me more usable space, it uses double the amount of drives and storage
[14:14] <sdeziel> indeed
[14:15] <DammitJim> so, instead of having to allocate 10TB for this VM, I just want it to allocate 5TB without RAID1
[14:15] <sdeziel> I'm sorry, I cannot provide useful tips on how to cleanly move away from RAID1
[14:15] <DammitJim> thanks
[14:15] <DammitJim> but you have helped me verify some of the things I had thought about
[14:15] <sdeziel> good
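For the record, the software-side equivalent of "pull one of the drives" is a few mdadm commands. A rough, destructive sketch with hypothetical device names (back up first; note this frees a member disk but does not by itself shrink any filesystem or reclaim the SAN allocation):

```
# Hypothetical devices: /dev/md0 is the RAID1 array, /dev/sdb1 the member to drop.
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1   # degrade the mirror
# Once the data lives elsewhere, dissolve the array entirely:
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda1 /dev/sdb1          # wipe the RAID metadata
```

Remember to update /etc/mdadm/mdadm.conf and /etc/fstab afterwards so the system does not try to assemble the removed array at boot.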
[14:33] <shubjero> coreycb: just scrolled back and read your response. Thanks!
[18:26] <DArqueBishop> So, I'm having an odd issue and am hoping someone could help me. I'm hoping this is as simple as "Bishop needs caffeine because he missed something important".
[18:27] <teward> you need to ask a real question with details of what you are experiencing.
[18:27] <teward> !ask
[18:27] <ubot3> Please don't ask to ask a question, simply ask the question (all on ONE line and in the channel, so that others can read and follow it easily). If anyone knows the answer they will most likely reply. :-) See also !patience
[18:28] <DArqueBishop> Trust me, I am asking. Stand by. :-)
[18:29] <DArqueBishop> I'm trying to launch MariaDB on Ubuntu 20.04, but it keeps failing with the error, "Could not increase the number of max_open_files to more than 16384". I've tried adding /etc/systemd/system/mariadb.service.d/override.conf with LimitNOFILE=100000, but that didn't help.
[18:29] <DArqueBishop> Neither did changing that line in /usr/lib/systemd/system/mariadb.service. I've run systemctl daemon-reload each time I made a change.
[18:30] <DArqueBishop> I've also added "mysql soft nofile 100000" and "mysql hard nofile 100000" to /etc/security/limits.conf.
[18:30] <DArqueBishop> Did I miss something?
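For reference, the drop-in DArqueBishop describes would normally look like this (a sketch using his 100000 value). If the unit picks it up, `systemctl show mariadb -p LimitNOFILE` should echo the new value back after a daemon-reload and restart:

```
# /etc/systemd/system/mariadb.service.d/override.conf
[Service]
LimitNOFILE=100000
```

Note that /etc/security/limits.conf only applies to PAM login sessions, not to services started by systemd, so the drop-in is the mechanism that should matter here.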
[20:44] <tomreyn> DArqueBishop: is this in a VM? a container? is this a standard 20.04 installation? done how? upgraded from earlier releases? where is the mariadb server from, installed how?
[20:45] <tomreyn> (i'll be afk for a while, but those questions may help to identify the source of the problem)
[20:45] <sarnold> hah, tomreyn beat me to it, I was in another terminal checking something, but thought "I wonder if this is a container" when I was returning here to type it up :)
[20:46] <sarnold> DArqueBishop: also check cat /proc/sys/fs/{nr_open,file-max}
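Those two sysctls are the kernel-side ceilings: if either were lower than the requested limit, no per-service LimitNOFILE setting could exceed it. A quick check (Linux-only):

```shell
# fs.nr_open caps how high a single process may raise its nofile limit;
# fs.file-max caps open file handles system-wide.
cat /proc/sys/fs/nr_open
cat /proc/sys/fs/file-max
```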
[20:54] <sdeziel> Ussat: teward: the site launch was delayed from Dec 7th to today so we had a little more time to prepare but it went relatively well: https://sdeziel.info/pub/site-requests.png
[20:54] <sdeziel> we have not served a single 4XX or 5XX ;)
[20:56] <teward> how were the response times though :p
[20:56] <sdeziel> excellent all around
[20:58] <sdeziel> 2,447,126 requests were served from the CF cache, only 1,726 reached our origin ;)
[20:59] <sarnold> sdeziel: sweet :)
[20:59] <sarnold> damn
[20:59] <sarnold> good job
[20:59] <sdeziel> sarnold: thanks :)
[21:00] <sdeziel> I can blame Facebook for most of those 1.7k requests that busted our caches due to Facebook adding the query string "?fbclid=$tracking_code"
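One common mitigation for the `fbclid` cache-busting sdeziel mentions is to strip the parameter before it can fragment the cache key. A hypothetical nginx sketch (not necessarily what this site runs):

```
# Hypothetical nginx fragment: drop fbclid from the query string so
# otherwise-identical requests share one cache entry.
if ($args ~ "^(.*?)&?fbclid=[^&]*(.*)$") {
    set $args "$1$2";
}
```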
[21:01] <sarnold> someone oughtta break them up...
[21:22] <Odd_Bloke> sarnold: "?fbc=$tracking&lid=$_code" ?
[21:23] <sarnold> Odd_Bloke: facebook :D
[21:41] <Odd_Bloke> ^_^
[21:42] <sarnold> :D
[21:46] <DArqueBishop> tomreyn: all fair questions. :-) It's a standard 20.04 built from the server ISO yesterday, running on VMware Player 16.1.0. It was a fresh install, and MariaDB was installed from the Ubuntu repos.
[21:47] <DArqueBishop> The database itself was restored from a Percona xtrabackup backup from a box running MariaDB 10.1 (?) via the IUS repo on CentOS 7.
[21:49] <DArqueBishop> sarnold: nr_open is 1048576, file-max is 9223372036854775807.
[21:50] <sarnold> hmm, okay..
[21:51] <DArqueBishop> Docker is installed, but I don't have any actual containers running (yet). :-)
[21:54] <sarnold> DArqueBishop: is there anything in dmesg?
[22:03] <DArqueBishop> sarnold: I'm afraid not.