[13:09] I want to upload a shell script file into a running VM on qemu. Is there any way other than using ssh access? libguestfs was a good method but it requires turning the VM off. I'm using VMware tools on the VMware infrastructure, which gives me an option to log into the VM by user and password. What can I do on qemu virtualization? [13:58] punkgeek: spice offers some kind of file sharing or transfer, i think [13:59] or you could mount another storage (with just the script in it) to the VM, or set up some other network based disk access on the vm (nfs, samba, nbd, ...) [14:33] tomreyn: Thank you, but I want to run it after uploading [16:24] what's wrong with using SSH and/or SPICE? [16:45] JanC: the vm doesn't have network [16:45] so what _does_ it have enabled? [16:48] how do you interact with it normally? [18:27] normally if there is no network, I'll just paste into the vm console [18:47] Is there any point in using ESXi on a home server now that there's Docker/LXC/containers? If all you're running is Linux systems, then VMs are just extra overhead for nothing, right? [19:26] depends [19:28] patdk-lap: lay it on me [19:29] I've used ESXi before, before I learned Docker. Last week I started using Docker and it seems way superior, if only because of very low RAM usage and the shared storage filesystem. [19:54] I use esxi + docker + k8s + lxc [19:54] basically gave up on lxc and moved it to docker; lxc is much more VM-like than docker [19:55] but some things just don't work in docker, or are just way too annoying to bother with in docker [19:55] I do run esxi clusters and k8s clusters side by side [20:03] bearing in mind I'm a home user just running random independent self-hosted apps and hoping to minimize required hardware. I don't need "corporate" features (and ESXi doesn't really have those either) [20:04] what doesn't work in Docker?
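[editor's note] One more network-free option for the upload question above, besides spice and attached storage: if the guest has qemu-guest-agent installed (with a virtio-serial channel), the host can push file contents in over the agent channel. A minimal sketch of the agent commands involved; the domain name `myvm` and the guest path `/tmp/script.sh` are made-up examples, and the file handle shown is a placeholder for the one `guest-file-open` actually returns.

```python
import base64
import json

def guest_write_commands(guest_path, content):
    """Build the qemu-guest-agent JSON commands that write `content`
    into a file inside the guest (the agent transfers data as base64)."""
    b64 = base64.b64encode(content.encode()).decode()
    return [
        json.dumps({"execute": "guest-file-open",
                    "arguments": {"path": guest_path, "mode": "w"}}),
        # handle 1000 is a placeholder; use the handle that
        # guest-file-open returns in the real exchange
        json.dumps({"execute": "guest-file-write",
                    "arguments": {"handle": 1000, "buf-b64": b64}}),
        json.dumps({"execute": "guest-file-close",
                    "arguments": {"handle": 1000}}),
    ]

# Each command would be sent to the running guest with, e.g.:
#   virsh qemu-agent-command myvm '<json>'
# ("myvm" is a hypothetical domain name; the guest needs qemu-guest-agent,
# but no network.)
for cmd in guest_write_commands("/tmp/script.sh", "#!/bin/sh\necho hello\n"):
    print(cmd)
```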
[20:04] I know snaps don't [20:10] for me, just doing network management has been a huge pain in docker [20:23] the main difference between virtualization and containers would be the strength of isolation, and whether you can run cross-platform code. [20:24] tomreyn: right. So for someone who's only running Linux services, and who considers Docker "isolated enough", it's a no-brainer, Docker wins, right? [20:26] if you can live with its network + storage management, and the design which is targeted more at development than at long term operation, yes. [20:27] I mean I have an Ubuntu 20.04 container I'm using to run my backup jobs on my NAS, and it's just using 20MB of RAM and a tiny amount of disk space. [20:27] or lxc/lxd, which is the better design, just ubuntu sadly made it a snap, which makes it pretty useless. [20:27] why isn't it good for long term operation? [20:27] eh, I already learned Docker, don't feel like learning something else [20:27] (My NAS only supports Docker anyway) [20:29] i'd say don't run complex / live services where your backups are (which is often on a NAS) [20:30] i didn't say docker isn't good for long-term operation, just that it's primarily designed for the use case where you create minimalistic images, deploy them, run them for a while, then replace them with a new image. you can do long term operation that way, as long as you keep building new images. [20:31] you can also break out of this original design and just run those guests long-term; it's just not how it's meant to be used. [20:33] yeah I get that. I'm using --volume to have the data dir on the host filesystem. As long as the updated image doesn't break anything (and because I'm not using the "latest" tag, it shouldn't), I'll be OK [20:33] "long term operation" is probably not the best way to describe it.
i was comparing to the typical home system which you install at some point, then just keep installed and run release upgrades on over the years (at least that's a common pattern) [20:35] so from what I'm picking up here, for a power user who knows they're using Linux, and who will use --volume mappings to keep data outside the container, Docker is a superior replacement for ESXi due to far smaller resource usage and simpler management [20:35] *who knows they're using Linux exclusively [20:36] for me, setting up and doing autoupdates in a vm is *simpler* mostly [20:36] backing that up might be more complex for a home user [20:36] docker is simple to back up, but updating images can be a pain [20:36] if the source you use updates well, nice, and you just have to watch for breaking changes [20:36] then it's just the images you build yourself [20:37] why would updating the OS/image be more of a challenge on Docker than in a VM? [20:38] well let's just say there is something called debian/ubuntu that already made that autoupdate work, and they test the updates [20:38] docker images, not so much [20:38] some do, and they are good, but most are kind of crap [20:38] and the more specific you get your docker image, the more crap you tend to run into [20:39] but they're the ones publishing the docker images. eg https://hub.docker.com/layers/ubuntu/library/ubuntu/focal/images/sha256-3555f4996aea6be945ae1532fa377c88f4b3b9e6d93531f47af5d78a7d5e3761?context=explore [20:39] yes, and that is a base [20:39] but that doesn't do anything in itself [20:39] you would have to build a docker image that installs apache or whatever it is you want to do [20:39] now it's not so much autoupdate anymore [20:39] right, the Dockerfile. I wrote that one myself [20:40] I see what you mean [20:40] it's not a huge deal, but you're going to have to update and republish it and move over to it [20:40] I'm good until 2025 though.
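[editor's note] The "pinned tag plus --volume" pattern discussed above, sketched as a hypothetical docker-compose fragment (the service name and host path are made up for illustration):

```yaml
services:
  backup:
    image: ubuntu:20.04          # pinned tag, not "latest": upgraded deliberately
    volumes:
      - /srv/backup-data:/data   # data lives on the host, survives image replacement
```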
[20:41] ya, but it is nicer to use someone else's apache containers [20:41] like say, use nginx, and not worry about it, and let it update itself in docker [20:41] you just have to watch for breaking changes, and whether they continue to update it [20:41] with nginx you're not likely to have an issue [20:41] OK, I get what you mean [20:42] but like I said, the more specific you get, the more issues, like nextcloud [20:42] oh? tell me about Nextcloud. I was planning to run that at home soon. [20:42] I put it on a manual pile [20:43] it's always having conflicts when going between major versions, kind of expected, and sql issues you need to resolve manually [20:43] but they made that upgrade process really manually attended :( [20:43] it can work ok most of the time, but you really need to check it each time and cannot just leave it to upgrade at will [20:44] that makes sense though. Even if they claimed upgrades are fully tested I'd still never run a "latest" image of ANYTHING that saved data. [20:44] cause you're one developer bug away from losing stuff. I know, I know, backups. [20:48] do you advise a casual user with some Linux experience, but zero experience hosting stuff on the public internet, to run Nextcloud on the Internet? Can I just stay on their major Docker image (v22 right now), recreate it periodically for updates, update the image when v23 is out, and not get hacked? [20:48] only the Nextcloud port (443) would be open on the firewall [20:49] well, install otp and set it up for your accounts [20:50] 2fa? I'm not worried about being brute-forced; I'd pick a complex password created by my password manager. It's more about flaws in Nextcloud or the webserver software. I've never used any of that.
[20:50] And I presume Nextcloud has some sort of "ban IP on too many login failures" setting [20:50] it's easy to protect from brute force [20:50] that isn't what 2fa is protecting against [20:51] well, there is no way around flaws [20:51] the idea with flaws is to use something popular enough that flaws are found quickly [20:51] or if one isn't found and is attacked, someone else will report the attack and it will be fixed before you are attacked [20:52] or you can apply protection against that attack [20:52] so nothing wrong with hosting Nextcloud on the internet despite lack of experience [20:52] as long as I make sure the container image is updated periodically [20:52] na, normally experience just has to do with how quickly you respond to issues [20:52] (to pull the latest v22 with the patches) [20:53] tbh I was hoping to simply let it run on autopilot and never look at it again :/ [20:53] I put a docker-notify thing on it, so when a new image is released I'm notified and can check the release changelog [20:53] for things that don't have their own security mailing list [20:54] nextcloud does have a built-in email security thing if you add yourself to the admin group [20:54] nice, I'll save this [20:54] love docker-notify [20:54] mostly cause cloudflared doesn't have any release notifications :( [20:57] I set up Radarr, Sonarr, Calibre-web, etc at home with Docker. honestly I love how convenient Docker made this; it's become AppImage for Linux backend daemons. [20:58] better than AppImage, since AppImage is still contending with Flatpak and Snap [20:58] while Docker images are supported in most container systems (correct me if I'm wrong) [20:59] well, a docker image is just a filesystem [21:00] just a collection of tar.gz files, in basic terms [21:00] with some metadata added [21:01] imagine if desktop apps had a similar "ship all dependencies" mentality. I think Debian (and therefore Ubuntu) is still shackled by constraints from 20 years ago, by forcing the split into dynamic libraries.
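[editor's note] The "tarballs plus metadata" point above can be made concrete: the output of `docker save` is itself a tar archive containing the layer tarballs and a manifest. A synthetic illustration (the file names inside are invented to mimic that layout; this does not talk to Docker at all):

```python
import io
import json
import tarfile

def add_bytes(tar, name, data):
    """Add an in-memory byte string to a tar archive under `name`."""
    info = tarfile.TarInfo(name)
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

# Build a tiny fake "saved image": one layer tarball plus a manifest.
image = io.BytesIO()
with tarfile.open(fileobj=image, mode="w") as tar:
    layer = io.BytesIO()
    with tarfile.open(fileobj=layer, mode="w") as l:
        add_bytes(l, "etc/hostname", b"container\n")   # a file in the layer fs
    add_bytes(tar, "layer1/layer.tar", layer.getvalue())
    add_bytes(tar, "manifest.json",
              json.dumps([{"Layers": ["layer1/layer.tar"]}]).encode())

# Inspecting it is just tar listing: metadata file plus a layer tarball.
image.seek(0)
with tarfile.open(fileobj=image) as tar:
    names = tar.getnames()
print(names)
```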
[21:01] no, that would be insane [21:02] dynamic libraries allow you to contain security fixes to a limited set of things [21:02] if everything installed whatever version of everything it wanted [21:02] it would be insane to patch [21:03] like right now, if openssl has an issue [21:03] and yet, we ended up either running old obsolete software, or AppImage and Snap and Flatpak, which have the same issue but are fragmenting the choice of a single format [21:03] it's simple to patch that [21:03] in docker, you have to wait for every docker image to update openssl, then download those images and run them [21:04] snap resolves it by keeping the dependencies as separate snaps [21:04] I don't know anything about the others and I'm not fond of snap at all [21:04] so snap has eg Qt 5.1, 5.2, ... 5.15, 6.0, all as separate snaps that remain in the snap database? [21:04] yes, unless you remove them, and as long as some other snap is using it [21:05] so there's a list of base dependency snaps? Who maintains those, Canonical? [21:05] whoever made that snap [21:06] canonical made many [21:06] changing lxc to only be a snap is what finally killed lxc for me [21:06] I didn't actually know this, I haven't used Snap. the one time I needed it, when I ran the app, it didn't work (required a webservice) and the dev said the snap was too old. I just find the odd AppImage sometimes online. [21:07] (to be clear, the webservice was the website the snap app was a client for, not Ubuntu's fault) [21:07] AppImage is "here's everything, it WILL work" [21:10] About OpenOTP which you recommended earlier... the latest Nextcloud version posted here is 15: https://apps.nextcloud.com/apps/twofactor_rcdevsopenotp/releases Nextcloud is at v22 now [21:11] will it still work? Do you use it? [21:11] dunno about openotp [21:12] ok, nm, there are many more 2FA apps on Nextcloud under Security. I just mentioned OpenOTP cause you did.
[21:13] Two-Factor TOTP and Two-Factor U2F [21:13] I mentioned OTP [21:13] I didn't specify the type of otp [21:14] my bad, you did [21:14] I just use my yubikey in u2f mode [21:14] and totp for when I don't have my yubi on me [21:16] Thanks for all the advice. I'm looking forward to setting up Nextcloud... even though I have a feeling I won't use it much [21:17] I have all my backups copied offsite using it [21:17] and phones all get backed up to it [21:17] via the Nextcloud mobile apps? [21:17] can I execute a system command such as virt-customize through a python libvirt connection? conn = libvirt.openAuth('qemu+ssh://root@ip', auth, 0) [21:18] yes [21:18] also desktop [21:18] my wife gets her pictures synced from phone to her desktop and backed up in nextcloud now, she's happy [21:18] icloud won't sync it to the desktop === amurray_ is now known as amurray
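[editor's note] On the last libvirt question: a libvirt connection by itself cannot run virt-customize, which is a host-side libguestfs tool that also wants the guest shut down. For a running guest, one option is qemu-guest-agent's `guest-exec` over the same connection. A sketch under the assumption that the guest runs qemu-guest-agent; the domain name `myvm` and script path are hypothetical, and the libvirt calls are shown as untested comments:

```python
import json

def guest_exec_command(path, args):
    """Build the qemu-guest-agent 'guest-exec' payload that runs a command
    inside the guest and captures its output."""
    return json.dumps({"execute": "guest-exec",
                       "arguments": {"path": path, "arg": args,
                                     "capture-output": True}})

cmd = guest_exec_command("/bin/sh", ["/tmp/script.sh"])
print(cmd)

# With libvirt-python installed, the payload could be sent over the
# qemu+ssh connection from the chat (sketch, untested):
#
#   import libvirt, libvirt_qemu
#   conn = libvirt.openAuth('qemu+ssh://root@ip/system', auth, 0)
#   dom = conn.lookupByName('myvm')   # hypothetical domain name
#   print(libvirt_qemu.qemuAgentCommand(dom, cmd, 10, 0))
```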