[05:29] bryyce: thanks, one more resolved \o/
[05:29] name mapping error ... too early, thanks sergiodj :-)
[07:39] Is there a way to use cloud-init with a custom configuration without rebuilding the installation image? Can I somehow supply a URL with the cloud-init configuration on the kernel command line? (probably doing this through the installer's grub menu)
[10:56] ExeciN: you can use kernel command line autoinstall ds=nocloud-net;s=http://mywebserver.mytld:myport/ and have the web server serve your cloud-config at http://mywebserver.mytld:myport/user-data (and an empty /meta-data). note i'm just another user, not a developer.
[12:27] tomreyn | ExeciN: you can use kernel command line autoinstall ds=nocloud-net;s=http://mywebserver.mytld:myport/ and have the web server serve your cloud-config at http://mywebserver.mytld:myport/user-data (and an empty /meta-data). note i'm just another user, not a developer.
[12:28] since you were disconnected when he wrote this :P
=== schopin_ is now known as schopin
[15:09] thanks tomreyn and blackroot
[20:12] If I want to run a single LXC container, on a single host, should I even bother with "LXD"?
[20:13] as I understand it, lxd lets you do image-based container things; lxc lets you do recipe-based container things; I think I'd pick whichever one best matches your preferred working style
[20:14] the quick answer is yes. it's just more convenient
[20:15] on my Pi4 i actually use LXC. it works too of course
[20:15] but on my ubuntu desktop i run lxd
[20:16] Every time I want to do LXC/LXD I'm met with this issue being so unclear
[20:16] And most stuff on the web is very hit & miss in regards to instructions on both
[20:16] It really mixes them up together
[20:16] and what issue is that?
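(The NoCloud tip from 10:56 above can be sketched end to end. Everything here is a placeholder sketch, not from the log: the hostname, port, and cloud-config contents are illustrative only.)

```shell
# Hypothetical sketch of the nocloud-net setup described above.
# Hostname, port and the identity values are placeholders.
mkdir -p seed

# user-data carries the autoinstall cloud-config; it must start with "#cloud-config"
cat > seed/user-data <<'EOF'
#cloud-config
autoinstall:
  version: 1
  identity:
    hostname: demo
    username: ubuntu
    password: "$6$examplehash"
EOF

# an empty meta-data file must exist next to it
touch seed/meta-data

# serve the directory, e.g.:  (cd seed && python3 -m http.server 8000)
# then boot the installer with this appended to the kernel command line
# (quote it so the shell-like grub editor doesn't eat the semicolon):
#   autoinstall 'ds=nocloud-net;s=http://mywebserver.mytld:8000/'
```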
[20:16] If you google for "do stuff with lxc"
[20:17] you'll get instructions with the "lxc" command, but that actually is part of the lxd package
[20:17] "lxd init" asks some basic questions and then you can start containers
[20:17] after you set up lxd almost all commands start with lxc
[20:17] https://linuxcontainers.org/lxd/try-it/
[20:18] znf: heh, yeah, the 'lxc' command to use lxd is *very* frustrating imho :(
[20:18] also, what about the lxc packages
[20:19] do I need to install any package separately, with apt, or should I just snap install lxd
[20:19] the lxd snap is all you need
[20:19] now, wonder if I can actually run lxd 5 with a 18.04 kernel
[20:20] i guess you can run the stable version with 18.04
[20:20] never tried it. i started using it on 20.04
[20:21] Name of the storage backend to use (zfs, btrfs, ceph, cephobject, dir, lvm) [default=zfs]:
[20:21] why the hell does this even default to zfs lol
[20:21] it creates an image storage with zfs i think
[20:22] to enable snapshots and so on
[20:22] and what does it actually use?
[20:22] ooooh
[20:22] it does a loop device
[20:22] that's crazy
[20:23] i actually have a zfs pool on my system
[20:23] but the loop device works just fine
[20:25] znf: yes, no problem in using LXD 5.0 on 18.04 with whatever kernel
[20:25] oh, wtf
[20:25] lxc console attaches to the login console
[20:26] what's the equivalent of lxc-attach -n
[20:26] znf: you probably want `lxc shell foo` to give you a bash login
[20:26] znf, lxc exec yourcontainer -- bash
[20:26] ah, 'shell' is fine
[20:26] shell exists too
[20:26] ^^
[20:26] both are very similar
[20:26] it doesn't list "shell" when you -h
[20:27] znf: that's because `shell` is in fact an alias
[20:27] I see
[20:27] znf: it invokes `su -l` inside the instance
[20:27] great!
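(The install-to-shell flow discussed above, as a sketch; it needs a host with snapd and a working network, and the container/image names are arbitrary choices of mine.)

```shell
# Sketch of the flow above; names are placeholders.
sudo snap install lxd        # the lxd snap is all you need
sudo lxd init                # interactive; the defaults (incl. the zfs loop device) are fine

lxc launch ubuntu:22.04 web  # download the image, create and start a container
lxc shell web                # alias that runs `su -l` inside the instance
lxc exec web -- bash         # the explicit equivalent
```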
[20:28] i dont know how big that loop device is by default
[20:28] I went with 'dir'
[20:28] ok
[20:28] I won't need snapshots on this
[20:28] i love my snapshots :D
[20:28] I have LVM but someone assigned all the space to / already
[20:29] znf: https://linuxcontainers.org/lxd/docs/master/reference/storage_drivers/
[20:29] that table should give you a nice overview of what LXD supports in terms of storage
[20:30] znf: another selling point in favor of LXD is that it supports VMs too ;)
[20:30] i never tried that before
[20:30] i really should
[20:30] I'm familiar with them, but I feel dirty about using a loop device
[20:30] but i usually only run them on my desktop and virt-manager does a good job atm
[20:30] and because it's just 1 single container, I'm meh about worrying about them
[20:31] now, where's that fancy volume mapping
[20:31] znf: yeah, ideally you'd carve up some dedicated space but the loop device actually performs quite nicely
[20:32] ravage: my main gripe with virt-manager is the XML part
[20:32] if *someone* didn't allocate all VG space to a single LV... >_<
[20:32] sdeziel, yes XML is always annoying :)
[20:32] k, now, how do I map the host /home/stuff to the guest /home/stuff ?
[20:32] znf: ext4 supports shrinking if you use a liveusb
[20:32] I don't have IPMI access on it
[20:33] and it's located far far away
[20:33] too much of a hassle
[20:33] :)
[20:33] granted
[20:33] lxc config device add c1 sharedwww disk source=/wwwdata/ path=/var/www/html/
[20:33] that much?
[20:33] znf: sounds about right
[20:34] ok. i know im just lazy. but how do i fix "LXD VM agent isn't currently running" ? :D
[20:34] for a vm? Wait a bit
[20:34] ravage: just wait till the VM finishes booting is usually what you need to do
[20:34] oh ok. i was just too fast then :D
[20:34] yep.
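(Applying the disk-device recipe from 20:33 above to the /home/stuff question; the container name `c1` and device name `homestuff` are my own placeholders.)

```shell
# map host /home/stuff to the same path inside container c1
lxc config device add c1 homestuff disk source=/home/stuff path=/home/stuff

# inspect the device config, or undo it later:
lxc config device show c1
lxc config device remove c1 homestuff
```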
thx
[20:34] it's indeed slower than non-vms
[20:35] and most of the time ssh is ready before the agent, so you can already ssh in if you have the creds in place
[20:35] im usually fine with containers. but it's nice to have the VM option
[20:35] ravage: you can always do `lxc launch --console=vga --vm ...`
[20:35] in case of some kernel related thing a VM is good to have
[20:37] it's indeed a quick and very convenient way to launch one
[20:37] not just ubuntu
[20:37] basically any os out there (linux based)
[20:38] just use the "images:" remote
[20:38] yeah, for other OSes, you usually have to provide the ISO as we cannot distribute ready-made Windows VMs :/
[20:39] just take a look at all that is available in "lxc image list images:"
[20:39] maybe that will finally change now that MS was forced to do the licensing-per-core modification
[20:39] centos, alpine, fedora, gentoo (!), opensuse, etc
[20:39] some I have never even heard of
[20:40] what's the tcp proxy thingie called
[20:40] so I can proxy ports on the host to the container
[20:41] lxc config device add mycontainer myport80 proxy listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80
[20:41] this sounds ok
[20:42] You only need that if you need external access
[20:42] hah, I didn't know about that :)
[20:42] yes, I need to make a webserver public
[20:42] is the 127.0.0.1 correct?
[20:42] that sounds iffy
[20:42] would my internal nginx actually see the remote IP correctly?
[20:44] I don't think it will
[20:45] hmm
[20:45] that's bad
[20:45] how to fix that
[20:46] znf: you can use the PROXY protocol which NGINX supports, see https://linuxcontainers.org/lxd/docs/master/instances/?highlight=proxy#type-proxy
[20:46] yep. with a proxy on the host you should be able to forward the remote IP
[20:48] very confused
[20:48] traefik or varnish are popular here too
[20:48] iptables DNAT might be an option, perhaps simpler
[20:48] I don't really want to set up haproxy/traefik etc.
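(If you take the PROXY-protocol route mentioned at 20:46, both ends have to speak it. A sketch with placeholder names; the nginx snippet is my assumption about a typical setup, not something from the log.)

```shell
# host side: proxy device that prepends the PROXY protocol header to each connection
lxc config device add mycontainer myport80 proxy \
    listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80 proxy_protocol=true

# container side: nginx must be told to expect that header, roughly:
#   server {
#       listen 80 proxy_protocol;          # parse the PROXY header
#       set_real_ip_from 127.0.0.1;        # trust the LXD proxy's source address
#       real_ip_header proxy_protocol;     # use the header's client IP in logs
#       ...
#   }
```

Without the `listen ... proxy_protocol` part on the nginx side, the extra header looks like garbage and you get the 400 Bad Request mentioned later in the log.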
[20:49] I'd like the stuff to be done via LXC/LXD stuff entirely
[20:49] that is usually what you do with containers
[20:49] you could perhaps launch the container in a network that is exposed already
[20:49] or you add the containers to a bridge with your main network interface
[20:49] yeah, that
[20:49] It's way too much trouble/effort when I only need 1 port and 1 container
[20:50] I'd agree if I ran something much more complex
[20:50] but to set up everything just for 1 host and 1 port seems like overkill
[20:50] (I also don't have a 2nd public IP address)
[20:51] is this a dedicated server in a datacenter or what's the situation?
[20:51] Yes
[20:51] with only one public IP a reverse proxy is the best way to go here really
[20:52] the setup is not that difficult
[20:52] znf: the LXD native proxy thing is what supports the PROXY protocol if you want that, a simpler way is the `nat=true` one but it requires a static IP on the container/VM side
[20:53] ravage, I know it's not, but I'm setting this up for someone else, I'll not handle it day by day, and I don't want to explain/make it more complicated
[20:53] znf: I mean, there is no need for any external components like traefik/haproxy
[20:54] znf, did you test what IP your webserver actually logs with the lxc proxy command?
[20:54] ravage, yeah, 127.0.0.1
[20:55] I'll try the nat=true stuff
[20:55] isn't the real IP set in a header actually? You might just need to tweak the server log format string
[20:56] # lxc config device set container eth0 ipv4.address=10.213.9.6
[20:56] Error: Device from profile(s) cannot be modified for individual instance.
Override device or modify profile instead
[20:57] ahasenack, no, there's no "header" being sent by the browser/device itself
[20:58] the proxy usually injects such a header
[20:58] a real proxy, I mean, I don't know what lxd is using
[20:58] znf: try `device override` instead of `device set`
[20:59] ah, right
[20:59] ahasenack: you seem to refer to the PROXY protocol header, something that won't be used with `nat=true`
[21:00] the proxy device supports quite a few different things which can be confusing :/
[21:01] znf: make sure the expected address shows up in the instance as I'm not sure that can be applied "live"
[21:01] it was already that IP address
[21:01] just made sure it was permanent
[21:02] ah, ok, proxy_protocol=true won't really work out, because there's no ssl certificate :P
[21:03] and with http:// I get a 400 bad request from the nginx running inside
[21:03] NAT time I guess
[21:04] znf: `proxy_protocol=true` only cares about TCP stuff, no SSL involved IIRC
[21:05] znf: if you use `proxy_protocol=true` you need to tell NGINX that it will receive this protocol instead of regular HTTP(S)
[21:05] yeah, screw that
[21:05] nat=true it is
[21:06] should work and have one less moving part ;)
[21:06] yup
[21:06] that's my goal, less moving parts, less things to break
[21:10] (and less chances this guy will ask me for help in the future!)
[21:16] engineering oneself out of a job, I like it ;)
[21:17] I like to NOT be bothered all the time :P
[21:22] There's no place like ::1
[21:23] :)
[21:35] I prefer 127.0.0.1
[21:35] I'm old, get off my lawn with ipv6
[21:39] surprisingly, ::1 is a much smaller home than 127.0.0.1 (/128 vs /8)
[21:42] why can't I nat=tcp:0.0.0.0
[21:42] damn it
[21:43] I mean listen on 0.0.0.0 with nat
[21:43] znf: what if you add a port to it?
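(The `nat=true` route that won out, as a sketch. 10.213.9.6 is the container IP from the log; the container/device names and the host address 203.0.113.10 are my placeholders. NAT mode wants a concrete listen address rather than a wildcard.)

```shell
# give the container a static IP first; `override` copies the profile's eth0
# device into the instance so it can be modified
lxc config device override container eth0 ipv4.address=10.213.9.6
lxc restart container   # make sure the address is actually applied

# NAT-mode proxy: plain kernel DNAT, so the web server inside sees the real
# client IP; the listen address must be a specific host IP, not 0.0.0.0
lxc config device add container port80 proxy \
    listen=tcp:203.0.113.10:80 connect=tcp:10.213.9.6:80 nat=true
```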
[21:43] nope
[21:44] (I already do that)
[21:44] # lxc config device add container port2082 proxy listen=tcp:0.0.0.0:2083 connect=tcp:10.213.9.6:2082 nat=true
[21:44] Error: Invalid devices: Device validation failed for "port2083": Cannot listen on wildcard address "0.0.0.0" when in nat mode
[21:44] dang
[21:44] kinda weird
[21:49] hm
[21:49] are containers NOT set to auto-start?
[21:55] znf: the state of the instances is preserved through host reboots (running instances will be restarted on boot)
[21:56] config ... get boot.autostart returns empty
[21:56] good to know then
[21:57] I'll still test by rebooting
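(Checking and forcing autostart as discussed above; `container` is a placeholder name.)

```shell
# empty/unset means LXD restores whatever was running before the reboot
lxc config get container boot.autostart

# force this container to always start on host boot, regardless of prior state
lxc config set container boot.autostart true
```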