[10:18] Hi guys
[12:22] hey guys
[12:23] right now i have 2 applications running that want port 80
[12:24] how do i control which one to use?
[12:24] is there like a switch?
[12:25] technoob_: you could have only one start
[12:25] technoob_: or you could tell one to use a different port
[12:25] technoob_: or you could keep them binding port 80 but on different IPs
[12:30] sdeziel: how do i tell one to use a different port?
[12:32] technoob_: each application is different
[12:33] technoob_: what are the 2 ones you are using?
[12:33] i think its because of nginx
[12:33] nextcloud and erpnext
[12:33] both use nginx i think
[12:34] technoob_: have you used the snap to install nextcloud?
[12:34] yes
[12:35] technoob_: check the readme in https://github.com/nextcloud/nextcloud-snap they explain how to bind a different port
[12:36] can i ask you guys if i understand correctly
[12:36] so nginx is the one that uses port 80 right?
[12:37] and there are two instances that use port 80
[12:37] technoob_: I don't know what erpnext uses as web server but nextcloud's snap ships with apache2.
[12:37] so nginx is the one who determines which of those 2 gets port 80, right?
[12:37] oh
[12:37] i see
[12:37] technoob_: not exactly, whoever gets to bind port 80 is the service that starts first
[12:37] erpnext is nginx
[12:38] and nextcloud is apache
[12:38] both want to bind to port 80 by default
[12:38] only one succeeds and it's the first that tried to bind that won the race
[12:40] oh
[12:40] so in that case nginx won right?
[12:44] technoob_: you can check which application binds which ports with "sudo ss -nltp"
[12:45] it's the one with 0.0.0.0:80
[12:45] right?
[12:45] 0.0.0.0:80 0.0.0.0:* users:(("nginx",pid=1256,fd=6),("nginx",pid=1255,fd=6),("nginx",pid=1254,fd=6))
[12:50] sdeziel:
[12:58] technoob_: so yeah, that confirms that nginx won the race
[12:58] ok
[12:58] i changed the port on nextcloud but now it says it's in maintenance mode
[12:58] lol
[12:58] oh wait
[12:58] its ok
[12:58] i can now get in
[12:58] i see the login page
[12:59] why is port 80 the most popular lol
[12:59] Hi... I just installed an lxc VM. It seems ubuntu 18.04 sets up the network differently, as /etc/netplan doesn't exist and netplan isn't installed. How do I set up the network? I want to set up a static IP.
[13:01] sdeziel:
[13:01] technoob_: that's the default port for HTTP
[13:01] oh
[13:01] okies
[13:03] neildugan: I would have expected that all 18.04 official images were using netplan, but maybe yours still has an ifupdown setup? Check /etc/network/interfaces maybe?
[13:15] sdeziel, the file /etc/network/interfaces.d/50-cloud-init.cfg says "..." "Changes to it will not persist across an instance ... To disable cloud-init's network configuration capabilities .. "
[13:17] neildugan: OK so that's still using ifupdown. Follow the instructions to disable cloud-init's network config and then your static configuration should stick through reboots
[13:24] Odd, it's not using netplan
[14:05] try /etc/netplan/50-cloud-init.yaml
[14:05] and after that; netplan apply
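For reference, a minimal sketch of the nextcloud port change discussed above, following the nextcloud-snap README; ports 81 and 444 are arbitrary examples and any free ports would do:

# move the nextcloud snap's apache2 off ports 80/443 so nginx keeps them
sudo snap set nextcloud ports.http=81 ports.https=444

# confirm what is now listening where
sudo ss -nltp

After that, the nextcloud login page would be reachable on port 81 while nginx stays on port 80.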
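And a rough sketch of the static IP setup for the ifupdown case above; the interface name (eth0) and the addresses are placeholders, and this assumes you are happy for cloud-init to stop managing the network config:

# stop cloud-init from regenerating the network config on every boot
echo 'network: {config: disabled}' | sudo tee /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg

# replace the generated ifupdown config with a static one (placeholder values)
cat <<'EOF' | sudo tee /etc/network/interfaces.d/50-cloud-init.cfg
auto eth0
iface eth0 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1
    # dns-nameservers needs the resolvconf hook; otherwise configure DNS separately
    dns-nameservers 192.168.1.1
EOF

# restart networking, or simply reboot the guest
sudo systemctl restart networking

On an image that does ship netplan, the equivalent settings would instead go into /etc/netplan/50-cloud-init.yaml followed by "sudo netplan apply", as suggested at 14:05.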
[14:10] hi
[14:10] So I want to run an ansible upgrade on a system
[14:10] but before doing that I want to test it locally
[14:10] How can I create a snapshot of an existing Ubuntu system that can be booted in a VM?
[14:11] you can create an image, import that to the VM infrastructure, and do snapshotting there (if supported), if that's what you mean.
[14:13] most virtualization solutions provide some "physical to virtual" (p2v) import mechanism. or you could just create an image file using dd or similar utilities, and import that.
[14:14] in the end it's probably more of a support question for the channel of the virtualization solution you'll be using.
[14:14] what hypervisor?
[14:15] phobosoph: ^
[14:15] if vmware, they have a P2V program
[14:25] Usually you don't have to go that far.
[14:25] If you take a filesystem copy of an Ubuntu system, it should generally work in a VM as soon as you've sorted out the bootloader.
[14:25] (and /etc/fstab, etc)
[14:26] And the initial ramdisk (man update-initramfs)
[14:26] can I also download the stuff directly over SSH?
[14:26] the existing free disk space is not enough to put the image onto
[14:26] Of course doing it exactly will find edge case problems with the upgrade that might be lurking in there.
[14:27] You could just grab the real (as opposed to /proc, /sys etc) filesystems via rsync or something.
[14:27] That'd give you a reasonable approximation to find any issues that exist in userspace with your proposed upgrade.
[14:27] Making the result bootable is a little involved though, and I don't really have the time to walk you through that.
[14:27] Updating /etc/fstab, bootloader, initramfs, etc.
[14:28] rbasak: I could rsync/sftp the filesystem down to an empty mounted virtual disk and then boot from it afterwards in a VM?
[14:28] That's what I'm saying, yes, but there are things you need to fix up before it'll work.
[14:31] hm, nice
[14:34] P2V
[15:16] Ussat: can P2V also run over SSH/SFTP?
[15:17] It's been years since I have used it, I am not sure
[15:17] almost everything I build now is a VM
[15:18] cool
[15:19] So, you're testing an ansible-based update?
[15:19] I have been using ansible to update all my systems for some time... what's the question
[15:24] Ussat: I updated my ansible playbooks from upstream (roots.io trellis) and now I want to be sure it can update/apply on an existing system
[15:24] ideally I could test it in a VM first
[15:24] not that something goes wrong and the system is left in an undetermined/broken state
[15:25] I would just build a quick VM then
[15:25] I did, but the VM is not the same as the production system
[15:25] ahh fair nuff
[15:25] yes :/
[15:26] hm, I'm thinking about creating a new virtual server as the production server, transferring everything onto it and killing the old one
[15:27] That would be the best IMHO
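A rough sketch of the rsync approach rbasak described above, assuming root ssh access to the production host (prod.example.com is a placeholder) and an empty virtual disk already partitioned, formatted and mounted at /mnt/vmroot:

# copy the real filesystems only; keep the mount points but skip their runtime contents
sudo rsync -aAXH --numeric-ids \
    --exclude='/proc/*' --exclude='/sys/*' --exclude='/dev/*' \
    --exclude='/run/*' --exclude='/tmp/*' --exclude='/lost+found' \
    root@prod.example.com:/ /mnt/vmroot/

As noted above, the copy alone isn't bootable: you would still chroot into /mnt/vmroot to fix /etc/fstab for the new disk UUIDs, reinstall the bootloader and run update-initramfs.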
[15:32] what's the easiest way to run a very simple python script periodically on boot via an AMI on aws? Do I need a systemd/upstart unit or can I just add a script to the crontab?
[15:33] i have a running docker swarm cluster, and I'd like to add a cluster maintenance task to the manager nodes to run every 5 minutes
[15:35] either works. a systemd timer activating a systemd service would probably be the proper approach.
[15:36] i'd recommend against upstart (it's still possible to use it but not the right way on any supported releases anymore)
[16:35] one other detail.. if I install the systemd service, will I still need to 'start' it on boot, or will all installed ones start on boot?
[16:36] trying to engineer it so I have one AMI, and can 'start' the management service only on managers
[16:37] ayyy, dput via SFTP works. *laughs evilly*
[16:38] jayjo: you need to "systemctl enable" the service
[16:39] so I can just do that in the userdata of the manager nodes?
[16:39] but this may clash with a timer, make sure that's not the case.
[16:41] only enabling the configured service / timer on the manager nodes should be possible, yes.
[16:42] (i've not had to do this, yet.)
[16:56] so I install it in the AMI, and then "systemctl enable" should only be run on the manager nodes because that is what starts the service?
[16:56] I would only call 'start' if i called 'stop' or something else?
[17:01] jayjo: enable does not start unless you run with --now
[17:02] hmm, so would I run 'systemctl enable ClusterMaintenance.service' on all nodes (to install) and only 'systemctl start ClusterMaintenance' on the manager nodes?
[17:10] jayjo: sorry, i wasn't reading the other bits, just wanted to point that out
[17:11] jayjo: this sounds like it could work for your use case, yes. this doesn't cover the regular activation (timer/cron job) vector you had in mind initially, though.
[17:13] tomreyn: I think that I will just re-factor the python script to run forever and handle the timing within the script, as long as the system will monitor it and re-run it if it fails
[17:14] i assumed you would ;)
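To round off the systemd discussion, a minimal sketch of the timer-activated service tomreyn suggested; the unit names and the script path /usr/local/bin/cluster-maintenance.py are made up for illustration:

# one-shot service that runs the maintenance script once per activation
cat <<'EOF' | sudo tee /etc/systemd/system/cluster-maintenance.service
[Unit]
Description=Docker swarm cluster maintenance task

[Service]
Type=oneshot
ExecStart=/usr/bin/python3 /usr/local/bin/cluster-maintenance.py
EOF

# timer that activates the service 5 minutes after boot and every 5 minutes thereafter
cat <<'EOF' | sudo tee /etc/systemd/system/cluster-maintenance.timer
[Unit]
Description=Run cluster maintenance every 5 minutes

[Timer]
OnBootSec=5min
OnUnitActiveSec=5min

[Install]
WantedBy=timers.target
EOF

# bake both units into the AMI; then, in the userdata of manager nodes only:
sudo systemctl daemon-reload
sudo systemctl enable --now cluster-maintenance.timer

With this layout you enable the timer rather than the service, and worker nodes simply never enable anything. If the script is instead refactored to run forever, as decided at 17:13, the timer goes away and the service would use Type=simple with Restart=on-failure, enabled with "systemctl enable --now" on the managers.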