=== cpaelzer_ is now known as cpaelzer
[14:32] Does the standard Ubuntu kernel from e.g. Hirsute support most modern CPU flags out of the box, or does one have to compile their own kernel to get that support?
[14:32] Was referred here from #ubuntu
[16:35] locsmif (N,BFTL), yes we would expect the broadest support that we can sensibly achieve in a kernel. The reason CPUs have the flags is to let s/w react.
[17:27] Hi all. I was here earlier but had to leave... I wanted to ask: for Ubuntu Hirsute, could I expect any performance improvements if I got a kernel built for e.g. a Xeon E5-2680 v2?
[17:27] Or any similar Xeon CPU rather
[17:33] locsmif: < apw> locsmif (N,BFTL), yes we would expect the broadest support that we can sensibly achieve in a kernel. The reason CPUs have the flags is to let s/w react.
[17:49] locsmif: that CPU is ancient, any currently supported Ubuntu kernel will squeeze everything it can from that old CPU....
[17:50] but even for newer series the lag before the CPUs are enabled is at most a couple of months, as we start integrating support for newer CPUs ahead of the public release of the hardware generally.
[21:14] ~/quit
[22:13] sarnold / apw and xnox: so I don't need to compile a special kernel for that CPU? Because my co-worker said Ubuntu kernels wouldn't be able to get the maximum out of flags made available by such a CPU compared to, say, the KVM64 "CPU" provided by Proxmox
[22:14] Which lacks quite a number of flags
[22:15] Because those kernels "would have to be compatible with every CPU" so they couldn't have built-in support for those flags
[22:15] I tried to convince him otherwise (I said kernel process scheduling might benefit from them) but he couldn't be budged
[22:16] It's about CPU host pass-through vs QEMU CPU emulation, and whether it matters for performance to let the VM know about the host CPU it is running on at all
[22:16] I suspect it matters, slightly
[22:21] locsmif: probably telling libvirt to pass through the 'raw' CPU rather than just using one of the predefined models would have performance benefits, but at the cost of reducing when you can migrate VMs from machine to machine
[22:21] But I'm also prepared to compile my own Ubuntu Hirsute or Debian kernel if I gain any kind of statistically significant advantage
[22:22] sarnold: ah, yeah, we talked about this in #proxmox maybe? Yeah, we wouldn't do a live migration; we're a webshop where speed is very important as a selling point but live migration is not
[22:24] We can pick a time block with low conversion / a lacuna in web visitors and schedule a migration then if we wanted one
[22:24] heh, that probably means my other idea is a *horrible* one.. :)
[22:24] Heh ok
[22:24] No worries
[22:24] (the *horrible* idea is to boot with mitigations=off, which disables a ton of security mitigations .. it might still have its uses for you, but a shared hosting provider is probably not an ideal place for it :)
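An aside on the mitigations=off idea above: a quick way to see which CPU-vulnerability mitigations the running kernel currently applies is to read the sysfs vulnerability files, before and after changing boot parameters. A minimal Python sketch, assuming a reasonably recent Linux kernel that exposes /sys/devices/system/cpu/vulnerabilities (any supported Ubuntu kernel should):

    #!/usr/bin/env python3
    # Sketch: report the kernel's current CPU-vulnerability mitigation status
    # by reading /sys/devices/system/cpu/vulnerabilities (Linux-specific path).
    import os

    VULN_DIR = "/sys/devices/system/cpu/vulnerabilities"

    def mitigation_status():
        status = {}
        for name in sorted(os.listdir(VULN_DIR)):
            with open(os.path.join(VULN_DIR, name)) as f:
                status[name] = f.read().strip()
        return status

    if __name__ == "__main__":
        for vuln, state in mitigation_status().items():
            print(f"{vuln:20s} {state}")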
[22:25] Oh no, it's not a shared hosting provider at all, we have two physical machines which form a Proxmox cluster, it's dedicated
[22:25] A bit old machinery maybe, but ok
[22:27] It's not my call, but the boss has money, so I don't quite get why there isn't a budget for 3 machines with more modern hardware, but ok
[22:28] But we're competing, at least I certainly am ;-) with the old setup, and I'm trying to squeeze every ms I can out of it
[22:28] four VMs on each machine: load balancer, web host, database VM and redis
[22:29] We're running MySQL 8.0 with InnoDB Cluster, and that is where performance matters most: read queries
[22:29] I guess I might be able to convince the head honcho to at least add a third machine at some point
[22:30] First, though, I have to convince the senior administrator to enable host CPU passthrough, but I should only do so if it yields results
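On whether the guest actually sees the host's CPU features (host passthrough vs a generic model such as KVM64): the simplest check is to read the flags line of /proc/cpuinfo inside the VM before and after changing the CPU model. A sketch follows; the WANTED set is an assumption, picked as features an Ivy Bridge Xeon like the E5-2680 v2 advertises and that generic models typically omit, so adjust it to whatever your workload cares about.

    #!/usr/bin/env python3
    # Sketch: check which CPU feature flags the guest kernel actually sees.
    # WANTED is an assumed list of flags to look for; tailor it to your workload.
    WANTED = {"aes", "avx", "sse4_1", "sse4_2", "pclmulqdq", "rdrand"}

    def guest_flags(path="/proc/cpuinfo"):
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
        return set()

    if __name__ == "__main__":
        flags = guest_flags()
        print("present:", sorted(WANTED & flags))
        print("missing:", sorted(WANTED - flags))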
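And on "only do so if it yields results": a minimal sketch of a before/after timing harness, assuming you replace run_workload() (a placeholder here) with something representative, e.g. replaying a fixed batch of read queries against the database VM. Comparing the mean and spread from a run on KVM64 against a run with host passthrough gives a rough basis for deciding whether the change is worth arguing for.

    #!/usr/bin/env python3
    # Sketch: before/after timing harness for judging whether a change
    # (e.g. switching the VM to host CPU passthrough) actually helps.
    import statistics
    import time

    def run_workload():
        # Placeholder workload; substitute the operation you actually care about.
        sum(i * i for i in range(1_000_000))

    def measure(runs=30):
        samples = []
        for _ in range(runs):
            start = time.perf_counter()
            run_workload()
            samples.append(time.perf_counter() - start)
        return statistics.mean(samples), statistics.stdev(samples)

    if __name__ == "__main__":
        mean, stdev = measure()
        print(f"mean {mean*1000:.2f} ms  stdev {stdev*1000:.2f} ms over 30 runs")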