[06:04] <lordievader> Good morning
[11:01] <computer2000> Hi guys can anyone please advise on my issue here https://stackoverflow.com/questions/64351703/apache-virtual-hosts-on-ubuntu-20-04-vps
[13:11] <coreycb> icey: your neutron/nova/ceilometer point releases are now released for stein, train, and ussuri
[13:11] <icey> thanks coreycb!
[13:12] <coreycb> icey: also, down to 1 tempest failure on groovy-victoria (octavia)
[13:12] <icey> \o/
[13:47] <lordievader> computer2000: IIRC vhosts only work on hostnames/CNAMEs, not on URL paths.
[14:20] <teward> computer2000: lordievader: VHosts only work at the hostname/cname/Host header level
[14:21] <teward> so if you have six sites (a.com, b.com, c.com, ...) on your system, you have six VHost entries, one per hostname, each with the proper ServerName configured; then, depending on the Host header received from the browser/client accessing the site, Apache'll serve the matching site configuration (VHost)
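The name-based setup teward describes looks roughly like this; a sketch only, using a.com/b.com from the discussion as example hostnames and hypothetical docroots:

```apache
# One VHost per hostname; Apache picks the VHost whose ServerName
# matches the Host header sent by the client.
<VirtualHost *:80>
    ServerName a.com
    DocumentRoot /var/www/a.com
</VirtualHost>

<VirtualHost *:80>
    ServerName b.com
    DocumentRoot /var/www/b.com
</VirtualHost>
```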
[14:21] <lordievader> Yeah, so what he is proposing won't work with vhosts
[14:33] <teward> well... kinda
[14:33] <teward> he can link a URI to a specific docroot/project but not to a specific VHost
[14:33] <teward> but most CMSes don't want to behave that way
[14:33] <teward> and want one singular hostname/vhost to a specific CMS
[14:33] <teward> so no, what you're trying to do computer2000 won't work
[14:34] <teward> it needs specific hostname matches in the Host header for that to work.
[14:34] <computer2000> teward: i "kind of" got it to work according to https://stackoverflow.com/questions/26706846/apache-virtual-host-without-domain-name
[14:35] <teward> right, but that's not the same as configuring your CMSes to do that properly
[14:35] <teward> but you can't tie it to the VHost, you can only tie it to the locations underneath the VHost
[14:35] <teward> to tie each CMS/project to a specific VHost match requires DNS, CNAMEs, and Hostname matches to requests
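What teward says *does* work, tying a URL path to a docroot inside one VHost rather than to its own VHost, can be sketched with Alias directives (paths here are hypothetical):

```apache
# Paths under one VHost map to separate project docroots;
# there is still only one VHost, so CMSes that assume they own
# the hostname root may misbehave.
<VirtualHost *:80>
    ServerName example.com
    Alias /cms1 /var/www/cms1
    Alias /cms2 /var/www/cms2
</VirtualHost>
```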
[14:41] <computer2000> teward: so, how do you make multiple cms installs accessible on one server without having subdomains for each?
[14:41] <teward> usually?
[14:41] <teward> you don't
[14:41] <teward> this said
[14:41] <teward> you *can* have subdomains for each and then have manual /etc/hosts entries on your local computer to make 'fake hostnames' for the server to match the Host header against so you can access each CMS
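The /etc/hosts trick above might look like this on the *local* machine (not the server); the IP and hostnames are made-up examples:

```text
# /etc/hosts on your workstation - fake hostnames pointing at the server,
# so the browser sends a Host header Apache can match against:
203.0.113.10   cms1.test
203.0.113.10   cms2.test
```

Browsing to http://cms1.test/ then carries `Host: cms1.test`, which a VHost with `ServerName cms1.test` can match.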
[14:42] <teward> ... or the painful nasty evil way I do it is individual VHosts on individual ports and then http://ipaddress:port/ with different ports to access each CMS/endpoint
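The "individual ports" approach teward mentions, sketched with hypothetical ports and docroots:

```apache
# One VHost per port; each CMS is reached as http://ipaddress:PORT/
Listen 8081
Listen 8082

<VirtualHost *:8081>
    DocumentRoot /var/www/cms1
</VirtualHost>

<VirtualHost *:8082>
    DocumentRoot /var/www/cms2
</VirtualHost>
```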
[14:43] <computer2000> :/
[14:43] <computer2000> thanks
[17:26] <catphish> what is the format of the cloudimg ".img" file?
[17:50] <rfm> catphish, it's a QCOW2 (at least the focal-server-cloudimg-amd64.img I downloaded is)
[17:52] <catphish> rfm: thanks, i was a little confused whether it was different from the disk-kvm.img
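One way to confirm the format yourself, sketched in shell (the filename is the one rfm mentioned; `qemu-img` ships in the qemu-utils package):

```shell
# qemu-img reports the container format directly:
#   qemu-img info focal-server-cloudimg-amd64.img
# Or check the magic bytes: QCOW2 files begin with "QFI\xfb" (0x514649fb).
img=focal-server-cloudimg-amd64.img
magic=$(head -c 4 "$img" | od -An -tx1 | tr -d ' \n')
if [ "$magic" = "514649fb" ]; then
    echo "$img looks like qcow2"
else
    echo "$img is not qcow2 (raw images have no magic header)"
fi
```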
[21:19] <keithzg> Hmm from what I can tell the default dist-provided /etc/knockd.conf still gives the example command as `/sbin/iptables` in 20.04, but iptables isn't actually there anymore?
[21:23] <sarnold> lrwxrwxrwx 1 root root 26 Jan 24  2020 /sbin/iptables -> /etc/alternatives/iptables
[21:23] <sarnold> lrwxrwxrwx 1 root root 25 Jan 24  2020 /etc/alternatives/iptables -> /usr/sbin/iptables-legacy
[21:23] <sarnold> that's my focal laptop
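The alternatives chain sarnold pasted can be followed end-to-end, and switched, like this; a sketch assuming the iptables alternatives group exists on the system:

```shell
# Resolve the whole symlink chain to the real binary
# (e.g. /usr/sbin/iptables-legacy or /usr/sbin/iptables-nft):
readlink -f /sbin/iptables
# Show or change the active backend via the alternatives system:
update-alternatives --display iptables
sudo update-alternatives --config iptables
```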
[21:26] <keithzg> Fair enough, sarnold! Looking closer I see my /etc/knockd.conf.dist might be an older lingering config. Obvious and simple fix anyways :)
[21:27] <keithzg> Other than that and systemd-resolved coming online and thus completely hosing dnsmasq and thus the entire network until I disabled it, upgrading the office router to 20.04 seems to have gone very smoothly :)
[21:30] <keithzg> (I do wonder why I didn't have the equivalent symlink in /sbin, but c'est la vie; I do now, heh)
[21:53] <sdeziel> keithzg: the /sbin/ -> /usr/sbin/ symlink is not created during do-release-upgrades
[21:53] <sdeziel> which sucks and caused me a bunch of problems ;)
[22:05] <sarnold> keithzg: aha; you're back up though, right?
[22:47] <keithzg> sarnold: Yup yup, was back up pretty quickly in fact since a quick `lsof` showed who was hogging port 53, and this wasn't the first time I've had systemd-resolved show up uninvited so I wasn't too surprised :P
[22:50] <sarnold> keithzg: woot