10:18 <technoob> Hi guys
12:22 <technoob_> hey guys
12:23 <technoob_> right now I have 2 applications running that both want port 80
12:24 <technoob_> how do I control which one gets it?
12:24 <technoob_> is there like a switch?
12:25 <sdeziel> technoob_: you could have only one start
12:25 <sdeziel> technoob_: or you could tell one to use a different port
12:25 <sdeziel> technoob_: or you could keep them both binding port 80, but on different IPs
12:30 <technoob_> sdeziel: how do I tell one to use a different port?
12:32 <sdeziel> technoob_: each application is different
12:33 <sdeziel> technoob_: which 2 are you using?
12:33 <technoob_> I think it's because of nginx
12:33 <technoob_> nextcloud and erpnext
12:33 <technoob_> both use nginx I think
12:34 <sdeziel> technoob_: have you used the snap to install nextcloud?
12:35 <sdeziel> technoob_: check the readme at https://github.com/nextcloud/nextcloud-snap, they explain how to bind a different port
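The README approach mentioned above comes down to a single snap setting; this assumes nextcloud was installed as a snap, and the port numbers below are arbitrary examples:

```shell
# Move the nextcloud snap's listeners off the default 80/443
# (81 and 444 are just example ports; pick any free ones)
sudo snap set nextcloud ports.http=81 ports.https=444
```

After this, nextcloud answers on port 81 and nginx is free to keep port 80.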
12:36 <technoob_> can I ask you guys if I understand correctly
12:36 <technoob_> so nginx is the one that uses port 80, right?
12:37 <technoob_> and there are two instances that use port 80
12:37 <sdeziel> technoob_: I don't know what erpnext uses as a web server, but nextcloud's snap ships with apache2.
12:37 <technoob_> so nginx is the one that determines which of those 2 gets port 80, right?
12:37 <technoob_> I see
12:37 <sdeziel> technoob_: not exactly, whoever gets to bind port 80 is the service that starts first
12:37 <technoob_> erpnext is nginx
12:38 <technoob_> and nextcloud is apache
12:38 <sdeziel> both want to bind to port 80 by default
12:38 <sdeziel> only one succeeds, and the first one that tried to bind is the one that won the race
12:40 <technoob_> so in that case nginx won, right?
12:44 <sdeziel> technoob_: you can check which application binds which ports with "sudo ss -nltp"
technoob_its the one with
technoob_                     *                users:(("nginx",pid=1256,fd=6),("nginx",pid=1255,fd=6),("nginx",pid=1254,fd=6))12:45
12:58 <sdeziel> technoob_: so yeah, that confirms that nginx won the race
12:58 <technoob_> I changed the port on nextcloud but now it says it's in maintenance mode
12:58 <technoob_> oh wait
12:58 <technoob_> it's ok
12:58 <technoob_> I can now get in
12:58 <technoob_> I see the login page
12:59 <technoob_> why is port 80 the most popular lol
12:59 <neildugan> Hi... I just installed an lxc VM, and it seems that the way ubuntu 18.04 sets up the network, /etc/netplan doesn't exist and netplan isn't installed. How do I set up the network? I want to set up a static IP.
13:01 <sdeziel> technoob_: that's the default port for HTTP
13:03 <sdeziel> neildugan: I would have expected that all official 18.04 images were using netplan, but maybe yours still has an ifupdown setup? Check /etc/network/interfaces maybe?
13:15 <neildugan> sdeziel, the file /etc/network/interfaces.d/50-cloud-init.cfg says "...Changes to it will not persist across an instance ... To disable cloud-init's network configuration capabilities .."
13:17 <sdeziel> neildugan: OK, so that's still using ifupdown. Follow the instructions to disable cloud-init's network config, and then your static configuration should stick through reboots
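The disable-and-configure steps sdeziel refers to might look like this sketch; the drop-in filename, interface name, and addresses are all placeholders, not taken from the log:

```shell
# Tell cloud-init to stop managing the network (a documented cloud-init knob):
echo 'network: {config: disabled}' | \
  sudo tee /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg

# Then write a static ifupdown config; eth0 and the addresses are examples:
sudo tee /etc/network/interfaces.d/eth0.cfg <<'EOF'
auto eth0
iface eth0 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1
EOF
```

The dns-nameservers line only takes effect if resolvconf is installed; otherwise set the nameserver in /etc/resolv.conf by hand.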
13:24 <Ussat> Odd it's not using netplan
14:05 <OerHeks> try /etc/netplan/50-cloud-init.yaml
14:05 <OerHeks> and after that: netplan apply
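If that file does exist, a static setup there might look like the following sketch; the interface name, addresses, and gateway are placeholders for the actual network:

```yaml
# /etc/netplan/50-cloud-init.yaml (example only; eth0 and the
# addresses are placeholders for your interface and network)
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: false
      addresses: [192.168.1.50/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
```

Then `sudo netplan apply` as OerHeks says. Note that if cloud-init generated the file, it may overwrite edits on reboot unless its network configuration is disabled.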
14:10 <phobosoph> So I want to run some ansible upgrade on a system
14:10 <phobosoph> but before doing that I want to test it locally
14:10 <phobosoph> How can I create a snapshot of an existing Ubuntu system that can be booted in a VM?
14:11 <tomreyn> you can create an image, import that to the VM infrastructure, and do snapshotting there (if supported), if that's what you mean.
14:13 <tomreyn> most virtualization solutions provide some "physical to virtual" (p2v) import mechanism. or you could just create an image file using dd or similar utilities, and import that.
14:14 <tomreyn> in the end it's probably rather a support question for the channel of the virtualization solution you'll be using.
14:14 <Ussat> what hypervisor?
14:15 <tomreyn> phobosoph: ^
14:15 <Ussat> if vmware, they have a P2V program
14:25 <rbasak> Usually you don't have to go that far.
14:25 <rbasak> If you take a filesystem copy of an Ubuntu system, it should generally work in a VM as soon as you've sorted out the bootloader.
14:25 <rbasak> (and /etc/fstab, etc)
14:26 <rbasak> And the initial ramdisk (man update-initramfs)
14:26 <phobosoph> can I also download the stuff directly over SSH?
14:26 <phobosoph> the existing free disk space is not enough to put the image onto
14:26 <rbasak> Of course doing it exactly will find edge-case problems with the upgrade that might be lurking in there.
14:27 <rbasak> You could just grab the real (as opposed to /proc, /sys, etc) filesystems via rsync or something.
14:27 <rbasak> That'd give you a reasonable approximation to find any issues that exist in userspace with your proposed upgrade.
14:27 <rbasak> It is a little involved making the result bootable though, and I don't really have the time to walk you through that.
14:27 <rbasak> Updating /etc/fstab, bootloader, initramfs, etc.
14:28 <phobosoph> rbasak: I could rsync/sftp the filesystem down to an empty mounted virtual disk and then boot from it afterwards in a VM?
14:28 <rbasak> That's what I'm saying, yes, but there are things you need to fix up before it'll work.
14:31 <phobosoph> hm, nice
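rbasak's rsync-plus-fixups suggestion could be sketched roughly like this; the destination mount point is a placeholder and the fixup steps are only outlined, not complete:

```shell
# Copy the real filesystems, skipping pseudo-filesystems
# (/mnt/vmdisk is a placeholder for the mounted virtual disk):
sudo rsync -aAXH --numeric-ids \
  --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/run \
  --exclude=/tmp --exclude=/mnt --exclude=/media --exclude=/lost+found \
  / /mnt/vmdisk/

# Fixups rbasak mentions, before the copy will boot (outline only):
#   1. edit /mnt/vmdisk/etc/fstab to match the new disk's UUIDs (see blkid)
#   2. chroot in and reinstall the bootloader (grub-install, update-grub)
#   3. regenerate the initramfs: update-initramfs -u
```

rsync can also pull over SSH (e.g. `sudo rsync -aAXH root@server:/ /mnt/vmdisk/` with the same excludes), which covers the "directly over SSH" question.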
15:16 <phobosoph> Ussat: can P2V also run over SSH/SFTP?
15:17 <Ussat> It's been years since I have used it, I am not sure
15:17 <Ussat> almost everything I build now is a VM
15:19 <Ussat> So, you're testing an ansible-based update?
15:19 <Ussat> I have been using ansible to update all my systems for some time... what's the question
15:24 <phobosoph> Ussat: I updated my ansible playbooks from upstream (roots.io trellis) and now I want to be sure they can update/apply on an existing system
15:24 <phobosoph> ideally I could test it in a VM first
15:24 <phobosoph> not that something goes wrong and the system is left in an undetermined/broken state
15:25 <Ussat> I would just build a quick VM then
15:25 <phobosoph> I did, but the VM is not the same as the production system
15:25 <Ussat> ahh fair enough
15:25 <phobosoph> yes :/
15:26 <phobosoph> hm, I'm thinking about creating a new virtual server as the production server, transferring everything onto it and killing the old one
15:27 <Ussat> That would be the best IMHO
15:32 <jayjo> what's the easiest way to run a very simple python script periodically on boot via an AMI on aws? Do I need systemd/upstart or can I just add a script to the crontab?
15:33 <jayjo> I have a running docker swarm cluster, and I'd like to add a cluster maintenance task to the manager nodes to run every 5 minutes
15:35 <tomreyn> either works. a systemd timer activating a systemd service would probably be the proper approach.
15:36 <tomreyn> I'd recommend against upstart (it's still possible to use, but it's not the right way on any supported release anymore)
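tomreyn's timer-plus-service pattern could look like the two units below; the unit names and script path are hypothetical, chosen to match the 5-minute task described above:

```ini
# /etc/systemd/system/cluster-maintenance.service (hypothetical name)
[Unit]
Description=Docker swarm cluster maintenance task

[Service]
Type=oneshot
ExecStart=/usr/bin/python3 /usr/local/bin/cluster_maintenance.py

# /etc/systemd/system/cluster-maintenance.timer (same stem as the service)
[Unit]
Description=Run cluster maintenance every 5 minutes

[Timer]
OnBootSec=5min
OnUnitActiveSec=5min

[Install]
WantedBy=timers.target
```

`sudo systemctl enable --now cluster-maintenance.timer` activates it; the service needs no [Install] section of its own, since the timer is what triggers it.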
16:35 <jayjo> one other detail.. if I install the systemd service, will I still need to 'start' it on boot, or will all installed ones start on boot?
16:36 <jayjo> trying to engineer it so I have one AMI, and can 'start' the management service only on managers
16:37 <teward> ayyy, dput via SFTP works. *laughs evilly*
16:38 <tomreyn> jayjo: you need to "systemctl enable" the service
16:39 <jayjo> so I can just do that in the userdata of the manager nodes?
16:39 <tomreyn> but this may clash with a timer, make sure that's not the case.
16:41 <tomreyn> only enabling the configured service / timer on the manager nodes should be possible, yes,
16:42 <tomreyn> (I've not had to do this, yet.)
16:56 <jayjo> so I install it in the AMI, and then calling "systemctl enable" should only be run on the manager nodes, because this is what starts the service?
16:56 <jayjo> I would only call 'start' if I called 'stop' or something else?
17:01 <nacc> jayjo: enable does not start unless you run it with --now
17:02 <jayjo> hmm, so would I run 'systemctl enable ClusterMaintenance.service' on all nodes (to install) and only 'systemctl start ClusterMaintenance' on the manager nodes?
17:10 <nacc> jayjo: sorry, I wasn't reading the other bits, just wanted to point that out
17:11 <tomreyn> jayjo: this sounds like it could work for your use case, yes. this doesn't cover the regular activation (timer/cron job) vector you had in mind initially, though.
17:13 <jayjo> tomreyn: I think I will just re-factor the python script to run forever and handle the timing within the script, as long as the system will monitor it and re-run it if it fails
17:14 <tomreyn> I assumed you would ;)
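The run-forever refactor jayjo settles on pairs naturally with systemd's Restart= option, which provides the "monitor and re-run if it fails" behaviour; the unit name and script path below are placeholders:

```ini
# /etc/systemd/system/cluster-maintenance.service (hypothetical; the
# script loops internally and sleeps between runs)
[Unit]
Description=Docker swarm cluster maintenance loop
After=docker.service

[Service]
ExecStart=/usr/bin/python3 /usr/local/bin/cluster_maintenance.py
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```

With Restart=always, systemd re-launches the script whenever it exits or crashes; running `systemctl enable` only on manager nodes (e.g. from userdata) keeps it off the workers, as discussed above.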

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!