[14:51] I'm not sure if this is the right place or if I should just open an issue on GitHub, but I've been having issues with multipass after migrating from hyperkit to qemu on an Intel MacBook. qemu-system-x86_64 is consistently over 100% CPU, sometimes spiking to over 400%.
[15:18] Hi someguy! If you `multipass shell` into the instance and run `top`, do you see any process taking a lot of CPU time?
[15:31] Looks like `kubelite` is the culprit in terms of %CPU. Looking at CPU time, I'm seeing java, containerd, sshd, and calico-node, though they keep bouncing around quite a bit.
[15:31] We're using microk8s to set up a Kubernetes environment for local development.
[15:33] someguy21: Ok, so it seems it's not related to the Hyperkit->QEMU migration then.
[15:43] Well, I didn't start having this issue until I switched over, at least. Historically, those of us on Intel Macs had numerous issues unless we used hyperkit. It wasn't as much of an issue on Apple silicon. I couldn't tell you why, but that's what we observed.
[15:43] But a few weeks ago, hyperkit just stopped working for us. We'd try to start the multipass VM and would get errors about instances stopping while starting, and messages like "Operation not permitted: block device will not support TRIM/DISCARD" in the logs. I dug into the issues in multipass's GitHub repo and saw that we basically just need to switch over to qemu, since hyperkit support is being deprecated anyway.
[15:43] After re-installing multipass and specifying qemu as the driver, the VM finally stopped crashing on me. But now I'm seeing these awful performance issues and CPU spikes.
[15:53] The Hyperkit issue you were experiencing before was due to the renaming of the initrd and kernel by the folks who produce the cloud images. We were unaware of those changes, so we could not download those files and the instance would fail to boot. The easy answer was to have folks switch to `qemu`.
[15:53] I'm really surprised that performance was "good" when using Hyperkit, since an instance that is internally busy is going to make the host busy as well. I have not really seen any performance issues when using QEMU on an Intel Mac, nor have we gotten any reports of this until now. Do you know if the Hyperkit-based instances were also showing as busy via `top`? Under the hood, both Hyperkit and QEMU use the Hypervisor.framework API provided by Apple.
[15:57] Also, are these instances that were previously migrated from a Hyperkit instance, or are these freshly created instances using QEMU?
[16:10] Well, I wouldn't say it was "good" using hyperkit either, but it was usable.
[16:10] And good point, perhaps saying "migrating" was misleading/wrong. These are freshly created instances using QEMU. I have removed any hyperkit-based instances.
[17:56] someguy: I'm really surprised that the hyperkit instances seemed to leave the host performing relatively better than with qemu instances. I assume you are provisioning the instances the same way, i.e., same number of cores, memory, etc.?
[18:18] Almost the same way: 6 cores, 6 GB of memory, a 40 GB disk, and a cloud-config. I just peeked, and the only difference is that with qemu we also pass the `--network` option, set to the output of `route get default | grep interface | awk '{print $2}'`. I wouldn't think that would be related, though.
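For concreteness, here is a sketch of what that launch invocation likely looks like; the instance name and cloud-init file name are illustrative placeholders rather than details from the thread, and on older multipass releases `--memory` was spelled `--mem`:

```sh
# Hypothetical reconstruction of the launch described above.
# The command substitution resolves the macOS default network
# interface (e.g. en0) to hand to --network.
multipass launch --name microk8s-vm \
  --cpus 6 --memory 6G --disk 40G \
  --cloud-init cloud-config.yaml \
  --network "$(route get default | grep interface | awk '{print $2}')"
```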
[18:27] someguy: Ok, I'm really not sure why there's a discrepancy in host performance depending on the driver. If the instance is busy on the multiple cores it's been given, the qemu threads that service those virtual cores will be busy as well, and I have seen the same behavior with hyperkit. Unfortunately, without some way of profiling your host to compare hyperkit with qemu using the same instance setup, I'm not sure what else we can do.
[18:42] Gotcha. Yeah, at this point I'm starting to wonder if it's our hardware; our Intel MacBook Pros are a few years old at this point. I'm due for an upgrade by the end of the year, I think, but by then we'll be using a different solution for our local development environment (although I'm sure that will come with its own slate of issues).
[18:42] Any tips/suggestions, regardless of qemu/hyperkit, that could improve performance? Like some additional options or configuration I could tweak or specify?
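One knob the [18:27] observation itself points at is the vCPU count: each busy virtual core keeps a host-side qemu thread busy, so allocating fewer cores caps how busy the host can get, at the cost of in-guest throughput. A minimal sketch, assuming a recent multipass that supports `multipass set local.<name>.cpus` and the same hypothetical instance name as above:

```sh
# CPU/memory changes require the instance to be stopped first
multipass stop microk8s-vm

# Trade some in-guest throughput for host headroom:
# drop from 6 vCPUs to 4 (and optionally trim memory too)
multipass set local.microk8s-vm.cpus=4
multipass set local.microk8s-vm.memory=4G

multipass start microk8s-vm
```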