cpaelzer | bryyce: thanks, one more resolved \o/ | 05:29 |
---|---|---|
cpaelzer | name mapping error ... too early, thanks sergiodj :-) | 05:29 |
ExeciN | Is there a way to use cloud-init with a custom configuration without rebuilding the installation image? Can I somehow supply a url with the cloud-init configuration in the kernel command? (probably doing this through the installer's grub menu) | 07:39 |
tomreyn | ExeciN: you can use kernel command line autoinstall ds=nocloud-net;s=http://mywebserver.mytld:myport/ and have the web server serve your cloud-config at the http://mywebserver.mytld:myport/user-data (and an empty /meta-data). note i'm just another user, not a developer. | 10:56 |
blackroot | &lt;tomreyn&gt; ExeciN: you can use kernel command line autoinstall ds=nocloud-net;s=http://mywebserver.mytld:myport/ and have the web server serve your cloud-config at the http://mywebserver.mytld:myport/user-data (and an empty /meta-data). note i'm just another user, not a developer. | 12:27 |
blackroot | since you were disconnected when he wrote this :P | 12:28 |
=== schopin_ is now known as schopin | ||
ExeciN | thanks tomreyn and blackroot | 15:09 |
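For reference, a minimal sketch of the NoCloud-over-network approach tomreyn describes, reusing the hypothetical mywebserver.mytld host from above; the semicolon in `ds=...` usually needs quoting or escaping on the kernel command line, and the user-data content is only a placeholder:

```
# Kernel command line added to the installer's GRUB entry (hostname/port are placeholders):
#   autoinstall "ds=nocloud-net;s=http://mywebserver.mytld:8000/"

# Serve user-data plus an empty meta-data from that URL, e.g. with Python's built-in server:
mkdir -p /srv/seed && cd /srv/seed
cat > user-data <<'EOF'
#cloud-config
autoinstall:
  version: 1
  identity:
    hostname: demo
    username: ubuntu
    password: "$6$examplehash..."   # crypted password hash, placeholder
EOF
touch meta-data
python3 -m http.server 8000
```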
znf | If I want to run a single LXC Container, on a single host, should I even bother with "LXD"? | 20:12 |
sarnold | as I understand it, lxd lets you do image-based container things; lxc lets you do recipe-based container things; I think I'd pick whichever one best matches your preferred working style | 20:13 |
ravage | the quick answer is yes. its just more convenient | 20:14 |
ravage | on my Pi4 i actually use LXC. it works too of course | 20:15 |
ravage | but on my ubuntu desktop i run lxd | 20:15 |
znf | Every time I want to do LXC/LXD I'm met with how unclear this distinction is | 20:16 |
znf | And most stuff on the web is very hit & miss in regards to instructions for both | 20:16 |
znf | It really mixes them up together | 20:16 |
ravage | and what issue is that? | 20:16 |
znf | If you google for "do stuff with lxc" | 20:16 |
znf | you'll get instructions with the "lxc" command, but that actually is part of the lxd package | 20:17 |
ravage | "lxd init" asks some basic questions and then you can start containers | 20:17 |
ravage | after you set up lxd, almost all commands start with lxc | 20:17 |
ravage | https://linuxcontainers.org/lxd/try-it/ | 20:17 |
sarnold | znf: heh, yeah, the 'lxc' command to use lxd is *very* frustrating imho :( | 20:18 |
znf | also, what about the lxc packages | 20:18 |
znf | do I need to install any package separately, with apt, or should I just snap install lxd | 20:19 |
ravage | the lxd snap is all your need | 20:19 |
ravage | *you | 20:19 |
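A quick sketch of that workflow, assuming the snap and the stock Ubuntu image remote (the container name c1 is a placeholder):

```
sudo snap install lxd
sudo lxd init                 # interactive; `lxd init --auto` accepts the defaults
lxc launch ubuntu:22.04 c1    # download the image and start a container named c1
lxc list                      # confirm it is running
```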
znf | now, wonder if I can actually run lxd 5 with a 18.04 kernel | 20:19 |
ravage | i guess you can run the stable version with 18.04 | 20:20 |
ravage | never tried it. i started using it on 20.04 | 20:20 |
znf | Name of the storage backend to use (zfs, btrfs, ceph, cephobject, dir, lvm) [default=zfs]: | 20:21 |
znf | why the hell does this even default to zfs lol | 20:21 |
ravage | it creates an image storage with zfs i think | 20:21 |
ravage | to enable snapshots and so on | 20:22 |
znf | and what does it actually use? | 20:22 |
znf | ooooh | 20:22 |
znf | it does a loop device | 20:22 |
znf | that's crazy | 20:22 |
ravage | i actually have a zfs pool on my system | 20:23 |
ravage | but the loop device works just fine | 20:23 |
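For reference, a hedged sketch of the storage options being discussed; pool names, sizes and the zfs dataset below are placeholders:

```
# Accepting the zfs default during `lxd init` creates a loop-file-backed pool,
# roughly equivalent to:
lxc storage create default zfs size=30GiB

# Pointing it at an existing dataset or block device avoids the loop file:
lxc storage create default zfs source=tank/lxd

# The simplest backend, plain directories, no CoW snapshots:
lxc storage create default dir
```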
sdeziel | znf: yes, no problem in using LXD 5.0 on 18.04 with whatever kernel | 20:25 |
znf | oh, wtf | 20:25 |
znf | lxc console attaches to the login console | 20:25 |
znf | what's the equivalent of lxc-attach -n | 20:26 |
sdeziel | znf: you probably want `lxc shell foo` to give you a bash login | 20:26 |
ravage | znf, lxc exec yourcontainer -- bash | 20:26 |
znf | ah, 'shell' is fine | 20:26 |
ravage | shell exists too | 20:26 |
ravage | ^^ | 20:26 |
sdeziel | both are very similar | 20:26 |
znf | it doesn't list "shell" when you -h | 20:26 |
sdeziel | znf: that's because `shell` is in fact an alias | 20:27 |
znf | I see | 20:27 |
sdeziel | znf: it invokes `su -l` inside the instance | 20:27 |
znf | great! | 20:27 |
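A short illustration of the two ways in, using the hypothetical container c1; `shell` is an alias built into the client, which is why `lxc -h` does not list it:

```
lxc exec c1 -- bash    # run a shell directly inside c1
lxc shell c1           # built-in alias, roughly `lxc exec c1 -- su -l`
lxc alias list         # user-defined aliases (the shell alias ships with the client)
```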
ravage | i dont know how big that loop device is by default | 20:28 |
znf | I went with 'dir' | 20:28 |
ravage | ok | 20:28 |
znf | I won't need snapshots on this | 20:28 |
ravage | i love my snapshots :D | 20:28 |
znf | I have LVM but someone assigned all the space to / already | 20:28 |
sdeziel | znf: https://linuxcontainers.org/lxd/docs/master/reference/storage_drivers/ | 20:29 |
sdeziel | that table should give you a nice overview of what LXD supports in terms of storage | 20:29 |
sdeziel | znf: another selling point in favor of LXD is that is supports VMs too ;) | 20:30 |
sdeziel | s/that is/that it/ | 20:30 |
ravage | i never tried that before | 20:30 |
ravage | i really should | 20:30 |
znf | I'm familiar with them, but I feel dirty about using a loop device | 20:30 |
ravage | but i usually only run them on my desktop and virt-manager does a good job atm | 20:30 |
znf | and because it's just 1 single container, I'm meh about worrying about them | 20:30 |
znf | now, where's that fancy volume mapping | 20:31 |
sdeziel | znf: yeah, ideally you'd carve up some dedicated space but the loop device actually performs quite nicely | 20:31 |
sdeziel | ravage: my main gripe with virt-manager is the XML part | 20:32 |
znf | if *someone* didn't allocate all VG space to a single LV... >_< | 20:32 |
ravage | sdeziel, yes XML is always annoying :) | 20:32 |
znf | k, now, how do I map the host /home/stuff to the guest /home/stuff ? | 20:32 |
sdeziel | znf: ext4 supports shrinking if you use a liveusb | 20:32 |
znf | I don't have IPMI access on it | 20:32 |
znf | and it's located far far away | 20:33 |
znf | too much of a hassle | 20:33 |
znf | :) | 20:33 |
sdeziel | granted | 20:33 |
znf | lxc config device add c1 sharedwww disk source=/wwwdata/ path=/var/www/html/ | 20:33 |
znf | that much? | 20:33 |
sdeziel | znf: sounds about right | 20:33 |
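One hedged addition to the command above: in an unprivileged container the shared files may show up owned by nobody:nogroup, and the disk device's `shift` option (where the kernel supports it) maps the ownership:

```
lxc config device add c1 sharedwww disk source=/wwwdata/ path=/var/www/html/ shift=true
lxc config device show c1     # verify the device and its options
```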
ravage | ok. i know im just lazy. but how do i fix "LXD VM agent isn't currently running" ? :D | 20:34 |
ahasenack | for a vm? Wait a bit | 20:34 |
sdeziel | ravage: just wait till the VM finishes booting is usually what you need to do | 20:34 |
ravage | oh ok. i was just too fast then :D | 20:34 |
ravage | yep. thx | 20:34 |
ahasenack | it's indeed slower than non-vms | 20:34 |
ahasenack | and most of the time ssh is ready before the agent, so you can already ssh in if you have the creds in place | 20:35 |
ravage | im usually fine with containers. but its nice to have the VM option | 20:35 |
sdeziel | ravage: you can always do `lxc launch --console=vga --vm ...` | 20:35 |
ravage | in case of some kernel related thing a VM is good to have | 20:35 |
ahasenack | it's indeed a quick and very convenient way to launch one | 20:37 |
ahasenack | not just ubuntu | 20:37 |
ahasenack | basically any os out there (linux based) | 20:37 |
ahasenack | just use the "images:" remote | 20:38 |
sdeziel | yeah, for other OSes, you have to provide the ISO usually as we cannot distribute ready-made Windows VMs :/ | 20:38 |
ahasenack | just take a look at all that is available in "lxc image list images:" | 20:39 |
znf | maybe that will finally change now that MS was forced to do the licensing-per-core modification | 20:39 |
ahasenack | centos, alpine, fedora, gentoo (!), opensuse, etc | 20:39 |
ahasenack | some I have never heard of even | 20:39 |
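A sketch of the VM workflow mentioned above; image aliases change over time, so check `lxc image list images:` first, and the names below are placeholders:

```
lxc launch images:debian/12 vm1 --vm --console=vga   # VM with a graphical console attached
lxc launch ubuntu:22.04 vm2 --vm                     # headless Ubuntu VM
lxc exec vm2 -- bash      # works once the lxd-agent inside the guest has started
```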
znf | what's the tcp proxy thingie called | 20:40 |
znf | so I can proxy ports on the host to the container | 20:40 |
znf | lxc config device add mycontainer myport80 proxy listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80 | 20:41 |
znf | this sounds ok | 20:41 |
ravage | You only need that if you need external access | 20:42 |
ahasenack | hah, I didn't know about that :) | 20:42 |
znf | yes, I need to make a webserver public | 20:42 |
znf | is the 127.0.0.1 correct? | 20:42 |
znf | that sounds iffy | 20:42 |
znf | would my internal nginx actually see the remote IP correctly? | 20:42 |
ravage | I don't think it will | 20:44 |
znf | hmm | 20:45 |
znf | that's bad | 20:45 |
znf | how to fix that | 20:45 |
sdeziel | znf: you can use the PROXY protocol which NGINX supports, see https://linuxcontainers.org/lxd/docs/master/instances/?highlight=proxy#type-proxy | 20:46 |
ravage | yep. with a proxy on the host you should be able to forward the remote IP | 20:46 |
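A minimal sketch of that PROXY-protocol setup, assuming NGINX inside the container and the device from the example above; both ends have to agree on the protocol:

```
# On the host: make the proxy device speak the PROXY protocol to the container
lxc config device add mycontainer myport80 proxy \
    listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80 proxy_protocol=true

# Inside the container: teach NGINX to read the real client IP from that header
cat > /etc/nginx/conf.d/proxy_protocol.conf <<'EOF'
set_real_ip_from 127.0.0.1;       # trust the LXD proxy connection
real_ip_header proxy_protocol;    # take the client IP from the PROXY header
EOF
# and add `proxy_protocol` to the relevant listen directive:  listen 80 proxy_protocol;
```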
znf | very confused | 20:48 |
ravage | traefik or varnish are popular here too | 20:48 |
ahasenack | iptables DNAT might be an option, perhaps simpler | 20:48 |
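A hedged sketch of the DNAT idea, assuming the host's external interface is eth0 and using the container address that comes up later in the discussion (adjust both):

```
# Forward port 80 on the host to the container and allow the forwarded traffic
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
    -j DNAT --to-destination 10.213.9.6:80
iptables -A FORWARD -p tcp -d 10.213.9.6 --dport 80 -j ACCEPT
```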
znf | I don't really want to setup haproxy/traefik etc. | 20:48 |
znf | I'd like the stuff to be done via LXC/LXD stuff entirely | 20:49 |
ravage | that is usually what you do with containers | 20:49 |
ahasenack | you could perhaps launch the container in a network that is exposed already | 20:49 |
ravage | or you add the containers to a bridge with your main network interface | 20:49 |
ahasenack | yeah, that | 20:49 |
znf | It's way too much trouble/effort when I only need 1 port and 1 container | 20:49 |
znf | I'd agree if I run something much more complex | 20:50 |
znf | but to setup everything just for 1 host and 1 port seems like an overkill | 20:50 |
znf | (I also don't have a 2nd public IP address) | 20:50 |
ravage | is this a dedicated server in a datacenter or whats the situation? | 20:51 |
znf | Yes | 20:51 |
ravage | with only one public IP a reverse proxy is the best way to go here really | 20:51 |
ravage | the setup is not that difficult | 20:52 |
sdeziel | znf: the LXD native proxy device is what supports the PROXY protocol if you want that. A simpler way is `nat=true`, but it requires a static IP on the container/VM side | 20:52 |
znf | ravage, I know it's not, but I'm setting this up for someone else, I'll not handle it day by day, and I don't want to explain/make it more complicated | 20:53 |
sdeziel | znf: I mean, there is no need for any external components like traefik/haproxy | 20:53 |
ravage | znf, did you test what IP your webserver actually logs with the lxc proxy command? | 20:54 |
znf | ravage, yeah, 127.0.0.1 | 20:54 |
znf | I'll try the nat=true stuff | 20:55 |
ahasenack | isn't the real IP set in a header actually? You might just need to tweak the server log format string | 20:55 |
znf | # lxc config device set container eth0 ipv4.address=10.213.9.6 | 20:56 |
znf | Error: Device from profile(s) cannot be modified for individual instance. Override device or modify profile instead | 20:56 |
znf | ahasenack, no, there's no "header" being sent by the browser/device itself | 20:57 |
ahasenack | the proxy usually injects such a header | 20:58 |
ahasenack | a real proxy, I mean, I don't know what lxd is using | 20:58 |
sdeziel | znf: try `device override` instead of `device set` | 20:58 |
znf | ah, right | 20:59 |
sdeziel | ahasenack: you seem to refer to the PROXY protocol header, something that won't be used with `nat=true` | 20:59 |
sdeziel | the proxy device supports quite a few different things which can be confusing :/ | 21:00 |
sdeziel | znf: make sure the expected address shows up in the instance as I'm not sure that can be applied "live" | 21:01 |
znf | it was already that IP address | 21:01 |
znf | just made sure it was permanent | 21:01 |
znf | ah, ok, proxy_protocol=true won't really work out, because there's no ssl certificate :P | 21:02 |
znf | and with http:// I get a 400 bad request from the nginx running inside | 21:03 |
znf | NAT time I guess | 21:03 |
sdeziel | znf: `proxy_protocol=true` only cares about TCP stuff, no SSL involved IIRC | 21:04 |
sdeziel | znf: if you use `proxy_protocol=true` you need to tell NGINX that it will receive this protocol instead of regular HTTP(S) | 21:05 |
znf | yeah, screw that | 21:05 |
znf | nat=true it is | 21:05 |
sdeziel | should work and have one less moving part ;) | 21:06 |
znf | yup | 21:06 |
znf | that's my goal, less moving parts, less things to break | 21:06 |
znf | (and less chances this guy will ask me for help in the future!) | 21:10 |
sdeziel | engineering oneself out of a job, I like it ;) | 21:16 |
znf | I like to NOT be bothered all the time :P | 21:17 |
RoyK | There's no place like ::1 | 21:22 |
sarnold | :) | 21:23 |
znf | I prefer 127.0.0.1 | 21:35 |
znf | I'm old, get off my lawn with ipv6 | 21:35 |
sdeziel | surprisingly, ::1 is a much smaller home than 127.0.0.1 (/128 vs /8) | 21:39 |
znf | why can't I nat=tcp:0.0.0.0 | 21:42 |
znf | damn it | 21:42 |
znf | I mean listen on 0.0.0.0 with nat | 21:43 |
sdeziel | znf: what if you add a port to it? | 21:43 |
znf | nope | 21:43 |
znf | (I already do that) | 21:44 |
znf | # lxc config device add container port2082 proxy listen=tcp:0.0.0.0:2083 connect=tcp:10.213.9.6:2082 nat=true | 21:44 |
znf | Error: Invalid devices: Device validation failed for "port2083": Cannot listen on wildcard address "0.0.0.0" when in nat mode | 21:44 |
sdeziel | dang | 21:44 |
znf | kinda weird | 21:44 |
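For reference, a hedged reading of that error: in nat mode the proxy device is implemented with DNAT rules, so it appears to need a concrete host address rather than the wildcard (the public IP below is a placeholder):

```
lxc config device add container port2083 proxy \
    listen=tcp:198.51.100.10:2083 connect=tcp:10.213.9.6:2082 nat=true
```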
znf | hm | 21:49 |
znf | are containers NOT set to auto-start? | 21:49 |
sdeziel | znf: the state of the instances is preserved through host reboots (running instances will be restarted on boot) | 21:55 |
znf | config ... get boot.autostart returns empty | 21:56 |
znf | good to know then | 21:56 |
znf | I'll still test by rebooting | 21:57 |
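If the instance should start on boot regardless of its state at shutdown, a short sketch (instance name as used above):

```
lxc config set container boot.autostart true   # always start this instance on boot
lxc config get container boot.autostart        # empty output means "restore previous state"
```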