Nick | Message | Time |
---|---|---|
technoob | Hi guys | 10:18 |
technoob_ | hey guys | 12:22 |
technoob_ | right now i have 2 applications running that want port 80 | 12:23 |
technoob_ | how do i control which one to use? | 12:24 |
technoob_ | is there like a switch? | 12:24 |
sdeziel | technoob_: you could have only one start | 12:25 |
sdeziel | technoob_: or you could tell one to use a different port | 12:25 |
sdeziel | technoob_: or you could keep them binding port 80 but on different IPs | 12:25 |
technoob_ | sdeziel: how do i tell one to use a different port? | 12:30 |
sdeziel | technoob_: each application is different | 12:32 |
sdeziel | technoob_: what are the 2 ones you are using? | 12:33 |
technoob_ | i think its because of nginx | 12:33 |
technoob_ | nextcloud and erpnext | 12:33 |
technoob_ | both use nginx i think | 12:33 |
sdeziel | technoob_: have you used the snap to install nextcloud? | 12:34 |
technoob_ | yes | 12:34 |
sdeziel | technoob_: check the readme in https://github.com/nextcloud/nextcloud-snap they explain how to bind a different port | 12:35 |
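
A minimal sketch of the kind of change the nextcloud-snap README describes, assuming the snap still exposes `ports.http`/`ports.https` options (the README linked above is the authority on the exact keys; the port numbers are just examples):

```sh
# Move the Nextcloud snap's Apache off ports 80/443 so the other web server
# can keep them (81/444 are only example ports)
sudo snap set nextcloud ports.http=81 ports.https=444

# Confirm the new bindings
sudo ss -nltp | grep -E ':(81|444) '
```
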
technoob_ | can i ask you guys if i understand correctly | 12:36 |
technoob_ | so nginx is the one that uses port 80 right? | 12:36 |
technoob_ | and there's two instances that use port 80 | 12:37 |
sdeziel | technoob_: I don't know what erpnext uses as web server but nextcloud's snap ships with apache2. | 12:37 |
technoob_ | so nginx is the one who determines which of those 2 gets the port 80 right? | 12:37 |
technoob_ | oh | 12:37 |
technoob_ | i see | 12:37 |
sdeziel | technoob_: not exactly, whoever gets to bind port 80 is the service that starts first | 12:37 |
technoob_ | erpnext is nginx | 12:37 |
technoob_ | and nextcloud is apache | 12:38 |
sdeziel | both want to bind to port 80 by default | 12:38 |
sdeziel | only one succeeds, and it's the first one that tried to bind that won the race | 12:38 |
technoob_ | oh | 12:40 |
technoob_ | so in that case nginx won right? | 12:40 |
sdeziel | technoob_: you can check which application binds which ports with "sudo ss -nltp" | 12:44 |
technoob_ | its the one with 0.0.0.0:80 | 12:45 |
technoob_ | right? | 12:45 |
technoob_ | 0.0.0.0:80 0.0.0.0:* users:(("nginx",pid=1256,fd=6),("nginx",pid=1255,fd=6),("nginx",pid=1254,fd=6)) | 12:45 |
technoob_ | sdeziel: | 12:50 |
sdeziel | technoob_: so yeah, that confirms that nginx won the race | 12:58 |
technoob_ | ok | 12:58 |
technoob_ | i changed the port on nextcloud but now it says its in maintenance mode | 12:58 |
technoob_ | lol | 12:58 |
technoob_ | oh wait | 12:58 |
technoob_ | its ok | 12:58 |
technoob_ | i can now get in | 12:58 |
technoob_ | i see the login page | 12:58 |
technoob_ | why is port 80 the most popular lol | 12:59 |
neildugan | Hi... I just installed an lxc VM, and it seems ubuntu 18.04 set up the network differently there, as /etc/netplan doesn't exist and netplan isn't installed. How do I set up the network? I want to set up a static IP. | 12:59 |
technoob_ | sdeziel: | 13:01 |
sdeziel | technoob_: that's the default port for HTTP | 13:01 |
technoob_ | oh | 13:01 |
technoob_ | okies | 13:01 |
sdeziel | neildugan: I would have expected all official 18.04 images to be using netplan, but maybe yours still has an ifupdown setup? Check /etc/network/interfaces maybe? | 13:03 |
neildugan | sdeziel, the file cat /etc/network/interfaces.d/50-cloud-init.cfg says "..."Changes to it will not persist across an instance ... To disable cloud-init's network configuration capabilities .. " | 13:15 |
sdeziel | neildugan: OK so that's still using ifupdown. Follow the instructions to disable cloud-init's network config and then your static configuration should stick through reboots | 13:17 |
Ussat | Odd it's not using netplan | 13:24 |
OerHeks | try /etc/netplan/50-cloud-init.yaml | 14:05 |
OerHeks | and after that; netplan apply | 14:05 |
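
A rough sketch of the two steps discussed above, assuming an ifupdown-based container like neildugan's; the cloud-init file name follows its documentation, and the interface name and addresses are placeholders:

```sh
# Tell cloud-init to stop managing the network (per its own documentation)
cat <<'EOF' | sudo tee /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
network: {config: disabled}
EOF

# Static ifupdown configuration -- interface name and addresses are examples
cat <<'EOF' | sudo tee /etc/network/interfaces.d/eth0.cfg
auto eth0
iface eth0 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1
EOF

# Apply by restarting networking (or rebooting the container)
sudo systemctl restart networking
```
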
phobosoph | hi | 14:10 |
phobosoph | So I want to run some ansible upgrade on a system | 14:10 |
phobosoph | but before doing that I want to test it locally | 14:10 |
phobosoph | How can I create a snapshot of an existing Ubuntu system that can be booted in a VM? | 14:10 |
tomreyn | you can create an image, import that to the VM infrastructure, and do snapshotting there (if supported), if that's what you mean. | 14:11 |
tomreyn | most virtualization solutions provide some "physical to virtual" (p2v) import mechanism. or you could just create an image file using dd or similar utilities, and import that. | 14:13 |
tomreyn | in the end it's probably rather a support question for the channel of the virtualization you'll be using. | 14:14 |
Ussat | what hypervisor ? | 14:14 |
tomreyn | phobosoph: ^ | 14:15 |
Ussat | if vmware, they have a P2V program | 14:15 |
rbasak | Usually you don't have to go that far. | 14:25 |
rbasak | If you take a filesystem copy of an Ubuntu system, it should generally work in a VM as soon as you've sorted out the bootloader. | 14:25 |
rbasak | (and /etc/fstab, etc) | 14:25 |
rbasak | And the initial ramdisk (man update-initramfs) | 14:26 |
phobosoph | can I also download the stuff directly over SSH? | 14:26 |
phobosoph | the existing free disk space is not enough to put an image onto | 14:26 |
rbasak | Of course doing it exactly will find edge case problems with the upgrade that might be lurking in there. | 14:26 |
rbasak | You could just grab the real (as opposed to /proc, /sys etc) filesystems via rsync or something. | 14:27 |
rbasak | That'd give you a reasonable approximation to find any issues that exist in userspace with your proposed upgrade. | 14:27 |
rbasak | Making the result bootable is a little involved though, and I don't really have the time to walk you through that. | 14:27 |
rbasak | Updating /etc/fstab, bootloader, initramfs, etc. | 14:27 |
phobosoph | rbasak: I could rsync/sftp the file system down to an empty mounted virtual disk and then boot from it afterwards in a VM? | 14:28 |
rbasak | That's what I'm saying, yes, but there are things you need to fix up before it'll work. | 14:28 |
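
A sketch of the rsync approach rbasak outlines, assuming the production host is reachable over SSH as root and the empty virtual disk is mounted at /mnt/vm-root (both placeholders):

```sh
# Pull the real filesystems, skipping pseudo-filesystems and volatile paths
sudo rsync -aAXHv \
    --exclude='/proc/*' --exclude='/sys/*' --exclude='/dev/*' \
    --exclude='/run/*'  --exclude='/tmp/*' --exclude='/lost+found' \
    root@production-host:/ /mnt/vm-root/

# Still to do before the copy will boot (see rbasak's notes above):
#   - fix /etc/fstab to reference the virtual disk's UUIDs
#   - install a bootloader on the virtual disk (e.g. grub-install from a chroot)
#   - regenerate the initramfs (update-initramfs -u, also from the chroot)
```
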
phobosoph | hm, nice | 14:31 |
Ussat | P2V | 14:34 |
phobosoph | Ussat: can P2V also run over SSH/SFTP? | 15:16 |
Ussat | Its been years since I have used it, I am not sure | 15:17 |
Ussat | almost everything I build now is a VM | 15:17 |
phobosoph | cool | 15:18 |
Ussat | So, youre testing ansible based update ? | 15:19 |
Ussat | I have been using ansible to update all my systems for some time... what's the question | 15:19 |
phobosoph | Ussat: I updated my ansible playbooks from upstream (roots.io trellis) and now I want to be sure it can update/apply on an existing system | 15:24 |
phobosoph | ideally I could test it in a VM first | 15:24 |
phobosoph | i don't want something to go wrong and leave the system in an undetermined/broken state | 15:24 |
Ussat | I would just build a quick VM then | 15:25 |
phobosoph | I did, but the VM is not the same as the production system | 15:25 |
Ussat | ahh fair nuff | 15:25 |
phobosoph | yes :/ | 15:25 |
phobosoph | hm, I'm thinking about creating a new virtual server as the production server, transferring everything onto it and killing the old one | 15:26 |
Ussat | That would be the best IMHO | 15:27 |
jayjo | what's the easiest way to run a very simple python script periodically on boot via an AMI on aws? Do I need a systemd/upstart or can I just add a script to the crontab? | 15:32 |
jayjo | i have a running docker swarm cluster, and I'd like to add a cluster maintenance task to the manager nodes to run every 5 minutes | 15:33 |
tomreyn | either works. a systemd timer activating a systemd service would probably be the proper approach. | 15:35 |
tomreyn | i'd recommend against upstart (it's still possible to use it, but it's not the right way on any supported release anymore) | 15:36 |
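
A minimal sketch of the timer-plus-service pair tomreyn suggests; the unit names and script path are hypothetical:

```sh
# One-shot service that runs the maintenance script (path is a placeholder)
cat <<'EOF' | sudo tee /etc/systemd/system/cluster-maintenance.service
[Unit]
Description=Swarm cluster maintenance task

[Service]
Type=oneshot
ExecStart=/usr/local/bin/cluster-maintenance.py
EOF

# Timer that triggers the service every 5 minutes
cat <<'EOF' | sudo tee /etc/systemd/system/cluster-maintenance.timer
[Unit]
Description=Run cluster maintenance every 5 minutes

[Timer]
OnBootSec=5min
OnUnitActiveSec=5min

[Install]
WantedBy=timers.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now cluster-maintenance.timer
```
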
jayjo | one other detail.. if I install the systemd service, will I still need to 'start' it on boot or will all installed ones start on boot | 16:35 |
jayjo | trying to engineer it so I have one AMI, and can 'start' the management service only on managers | 16:36 |
teward | ayyy, dput via SFTP works. *laughs evilly* | 16:37 |
tomreyn | jayjo: you need to "systemctl enable" the service | 16:38 |
jayjo | so I can just do that in the userdata of the manager nodes? | 16:39 |
tomreyn | but this may clash with a timer, make sure that's not the case. | 16:39 |
tomreyn | only enabling the configured service / timer on the manager nodes should be possible, yes, | 16:41 |
tomreyn | (i've not had to do this, yet.) | 16:42 |
jayjo | so I install it in the AMI, and then "systemctl enable" should only be run on the manager nodes, because that's what starts the service | 16:56 |
jayjo | I only would call 'start' if i called 'stop' or something else? | 16:56 |
nacc | jayjo: enable does not start unless you run with --now | 17:01 |
jayjo | hmm, so would I run 'systemctl enable ClusterMaintenance.service' on all nodes (to install) and only 'systemctl start ClusterMaintenance' on the manager nodes? | 17:02 |
nacc | jayjo: sorry, i wasn't reading the other bits, just wanted to point that out | 17:10 |
tomreyn | jayjo: this sounds like it could work for your use case, yes. this doesn't cover the regular activation (timer/cron job) vector you had in mind initially, though. | 17:11 |
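
One way the per-node split could look, continuing the hypothetical unit names above: the unit files are baked into the AMI but left disabled, and only the manager nodes enable the timer from their EC2 user data. A sketch:

```sh
#!/bin/bash
# Hypothetical user data for manager nodes only; worker nodes omit this,
# so the disabled units baked into the AMI never run there
systemctl enable --now cluster-maintenance.timer
```
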
jayjo | tomreyn: I think that I will just re-factor the python script to run forever and handle the timing within the script, as long as the system will monitor it and re-run it if it fails | 17:13 |
tomreyn | i assumed you would ;) | 17:14 |
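
If the script is refactored to run forever as jayjo suggests, a long-running unit with automatic restarts (again with hypothetical names and paths) would replace the timer; this is only a sketch:

```sh
cat <<'EOF' | sudo tee /etc/systemd/system/cluster-maintenance.service
[Unit]
Description=Swarm cluster maintenance loop
After=docker.service

[Service]
ExecStart=/usr/local/bin/cluster-maintenance.py
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now cluster-maintenance.service
```
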