[03:25] How can I get my wifi card to configure itself, and how can I switch between wifi networks at will?
[03:26] yes this is for a server... it's a mirror of what I have running at work for our vm host, but running on my laptop
[03:28] if you can tolerate installing network manager on the thing, nmcli makes swapping wireless networks pretty easy
[03:28] have you considered network-manager?
[03:59] yes, I'd probably prefer network manager... but ubuntu 18.04 decided to go its own way and use netplan
[04:00] for servers, yes
[04:00] desktops still use network-manager
[04:00] feel free to swap it in if it works for you
[04:01] sarnold: why the split within the same distro?
[04:01] Jgalt: because admins would kill us if we put network manager on servers by default
[04:01] Jgalt: and users would say ubuntu is hard to use if we had them use netplan for their wifi :)
[04:04] so what was it before all this? as an admin I tend to want to kill anyone that goes off on their own way with no one else following. this includes snap, netplan, and likely a few others I'm not thinking of right now
[04:04] on debian it was /etc/network/interfaces
[04:05] that was the way it worked on ubuntu server for ages
[04:05] I *think* ubuntu was born after network manager and likely always included it
[04:05] /etc/network/interfaces wasn't great fun with wifi
[04:05] I did it
[04:06] but I think I'm willing to begrudgingly admit that today I'd rather use nm than manage my wifi card via /etc/network/interfaces :)
[04:06] red hat had some system-configure-network python script or similar
[04:06] suse had yast
[04:06] that said, just before I came on to my current work assignment they chose ubuntu for a couple of core servers, so I get to manage those until our next upgrade cycle
[04:07] I liked /etc/network/interfaces
[04:09] yeah, it *was* simple
[04:09] ah well, try to learn netplan I guess
[04:09] a bit too simple..
people expected it to maintain some kind of state
[04:10] so they'd edit the file to make it look the way they'd want, run ifdown.. and it wouldn't tear down the old thing, because it just runs shell scripts.
[04:10] but it *looked* like it was more than shell scripts.
[08:44] sarnold: just a nitpicky correction, netplan is used both on servers and desktops, as it's just a configuration abstraction/wrapper. the difference is in the backend it uses; on desktop it's NM and on servers it's networkd.
[08:45] and it's just the default, nothing prevents users from flipping that around or not using it at all.
=== led_ir23 is now known as led_ir22
[14:09] I still have a production server on Ubuntu 14.04.5 LTS. Has anyone done the upgrade path to Ubuntu 18.04 LTS?
[14:10] Or rather, does that seem scary, and filled with several possible errors? Or is it relatively straightforward and seamless?
[14:10] If you're reading in-between the lines: this is a big to-do list item and I'm wanting to plan appropriately. :)
[14:10] It's a digital ocean droplet right now
[14:11] I guess I could easily snapshot stuff and just go for it... worst case, restore the old snapshot
[14:12] foo: it should be rather straightforward - make a backup or snapshot, upgrade to 16.04, test a bit, backup/snapshot again, upgrade to 18.04
[14:13] if you have a lot of custom packages and stuff like that, it probably won't be that easy
[14:14] it may be worth considering a fresh installation, though, especially if you use OS configuration deployment.
[14:14] much has changed
[14:14] RoyK: no debs I've built. Mostly websites, all python/postgres
[14:15] tomreyn: yes, that's the other option... spin up a new droplet on 18.04... and slowly migrate stuff over. I'm open to that too, considering this droplet is many years old. And, well, of course it just "feels better" - ha.
[14:15] Would require changing IPs and cleaning house on a lot of projects... which actually is a bit attractive.
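As an aside to the wifi-switching question that opened the discussion: the nmcli workflow suggested there can be sketched roughly as below. This assumes NetworkManager is installed and running, and the SSIDs and passphrase are placeholders.

```shell
# list visible access points
nmcli device wifi list

# connect to a network for the first time, storing the credentials
nmcli device wifi connect "HomeSSID" password "wpa-passphrase"

# later, switch between already-known networks at will
nmcli connection up "HomeSSID"
nmcli connection up "WorkSSID"
```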
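The two-hop upgrade RoyK describes (14.04 to 16.04, then 16.04 to 18.04, with a snapshot before each step) would look something like this; a sketch, assuming a standard Ubuntu LTS install with `update-manager-core` present:

```shell
# snapshot/back up first, then take the supported LTS-to-LTS path
sudo apt update && sudo apt upgrade   # bring 14.04 fully up to date
sudo do-release-upgrade               # 14.04 -> 16.04
# ...test a bit, snapshot again...
sudo do-release-upgrade               # 16.04 -> 18.04
```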
[14:15] should be fairly straightforward, but as tomreyn says - a fresh install and ansible or something to set it up might be a good idea
[14:16] foo: what sort of services do you have on this thing?
[14:17] I'd love some feedback on this: I have a good friend who opened my eyes to docker. I was thinking, instead of one nginx system and various python scripts running, of using docker for every project. Right now I use git locally and push up. The difference would be to use docker locally and push up docker containers (still have a lot to learn)... it's a fundamentally different approach, and it sounds a lot cleaner, with security benefits too, and possibly easier to upgrade OS-level stuff too. Do you suggest A) one nginx web server, one web database, various python scripts (eg. as I have now) or do you suggest B) one fairly vanilla system, various docker instances?
[14:18] RoyK: thank you for asking and digging, I've been wanting to think through this for a while. I have had many things... php + mysql for many many years. I recently shut down all that stuff and only use python + postgres + various python scripts (eg. one python script powers a chatbot)
[14:18] That's mostly it. /me scratches head
[14:18] nginx + gunicorn + django sites and a few static sites
[14:18] Various cron jobs, often calling python scripts
[14:19] sounds like a pretty normal webserver to me
[14:19] Yup
[14:20] * foo curious if anyone here is a fan of option A or B, too - if familiar with docker setup like this
[14:20] foo: I guess setting up a new one may streamline installation a bit - I mean - get a new vm with u1804, set up ssh keys, use ansible (or whatever you prefer) for everything else
[14:21] so that next time you need a reinstall, it's done in record time
[14:21] RoyK: yeah, I'm leaning towards that.
Not familiar with ansible; if I did that, I'd likely rsync some stuff over
[14:21] * foo googles ansible
[14:21] foo: just keep the old server running until the new one seems good - then switch - keep the old one for a week or so in case you need to go back
[14:22] Looks like this system is 4 years old (granted, I've been actively updating it)
[14:23] RoyK: thanks, I like it! I also had a very old PHP drupal site on here, with mysql, which was likely very vulnerable... so re-installing is also attractive for this reason
[14:23] sounds like it
[14:23] RoyK: happen to have any thoughts on the new system being A) one nginx instance, one postgres database, everything connects to it (like I currently have it) or B) a bunch of docker instances?
[14:23] drupal is rather well known for its bugs
[14:25] isolation is good, although I'm not really into docker - still using kvm
[14:25] RoyK: aha, thanks! yeah, I left drupal and php a few years ago... python has been fun. :)
[14:27] * RoyK also thinks it's a good idea to use postgres over mysql/mariadb, but it seems we already agree on that
[14:28] RoyK: :) Really appreciate you sharing your thoughts, thank you!
[14:29] np :)
[14:31] RoyK: also looks like EOL for 14.04.5 LTS is April 2019, so I technically still have some time. I thought it was last month for some reason
[14:32] 5 years for LTS
[14:32] and 14.04 was released - guess! - in 2014-04
[14:34] :)
[14:34] thanks!
[14:36] foo: there are several systems like ansible (chef, puppet, cfengine etc etc etc), but I somehow like ansible - it's not perfect, but it doesn't require a client/agent, it all runs over ssh, which is convenient
[14:37] RoyK: ohhh, ansible falls in the chef category. I haven't ever had a need for that level of automation, but I hear it's awesome when you're wanting to command an army of systems. Got it!
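A minimal ansible playbook for the kind of webserver described above might look like the sketch below. All the names (the `webservers` group, the packages, the site config path) are hypothetical examples, not anything from the conversation:

```yaml
# site.yml - a minimal, hypothetical playbook
- hosts: webservers
  become: yes
  tasks:
    - name: install the web stack
      apt:
        name: [nginx, postgresql, python3-pip]
        state: present
    - name: deploy nginx site config
      copy:
        src: files/mysite.conf
        dest: /etc/nginx/sites-enabled/mysite.conf
      notify: reload nginx
  handlers:
    - name: reload nginx
      service:
        name: nginx
        state: reloaded
```

Run over plain ssh with `ansible-playbook -i hosts site.yml` - no agent needed on the target, which is the convenience RoyK mentions.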
[14:38] then migrating a webserver to a new one would be a nice way to learn one of those tools :)
[14:45] RoyK: do those tools, even ansible, make sense for 1 server - though?
[14:53] foo: with ansible it really doesn't matter if it's one or a thousand - you just give it a playbook referencing a hostfile and there you go
[14:53] RoyK: aha, I see
[14:54] so when the server dies or you want to host it somewhere else, just set up a new one with the playbook, move the relevant data and you're good to go
[14:55] RoyK: I suspect a playbook is "run these scripts, install these packages, set these configs" - etc?
[14:55] yes
[14:55] there is fairly good documentation on https://www.ansible.com/
[14:59] RoyK: thank you! I might just give this a look. It would be nice to have a failover system one day and it sounds like this could help with that
[14:59] actually, with digital ocean, I could probably take a snapshot and clone the system and move it to another zone or something... maybe. :) And even if not, having a local dev environment set up the same as production... that could be another good use case
[15:00] yep
[15:00] just write a good playbook, then deploying the machine somewhere else is easy
[15:05] RoyK: thank you!
[15:05] RoyK: making a note of this and planning this out on the calendar
[17:46] hello guys, is it possible to use ufw or iptables to limit the number of requests per IP via a script file script.sh?
[18:41] Checkmate: what kind of request?
[18:42] blackflow: http requests
[18:43] Checkmate: no, iptables has no concept of a http request. you can only limit at the packet level, eg. new connections, by limiting SYN packets
[18:43] but the web server should be able to do that. nginx and apache can, at least
[18:43] blackflow: and fail2ban?
[18:45] Checkmate: no, the only thing that understands the concept of a "http request" in order to throttle it, is the web server
[18:45] Checkmate: perhaps avoid the XY problem and state what exactly you are trying to solve?
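The per-IP request throttling blackflow says belongs in the web server can be sketched in nginx with the `limit_req` module. A sketch only - the zone name, rate, and burst values here are made-up examples to tune for real traffic:

```nginx
http {
    # track clients by IP; allow roughly 10 requests/second each
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

    server {
        location / {
            # absorb short bursts, reject the excess
            limit_req zone=perip burst=20 nodelay;
            limit_req_status 429;
        }
    }
}
```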
[18:46] I want to block bots
[18:47] Checkmate: that's a game of whack-a-mole which you can never win.
[18:47] you can also throttle "good" bots with robots.txt
[18:54] you're right, I can never win - only with cloudflare's "I'm under attack" mode
[18:57] blackflow: can iptables or ufw be used from a specified script.sh?
[19:02] Checkmate: I have no idea what "specific script.sh" is
[19:02] *specified
[19:03] btw not sure even cloudflare can help with bots. what problem are you having? excessive traffic?
=== whislock_ is now known as whislock
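The packet-level limiting blackflow mentions - throttling new connections by SYN packets - could be scripted with iptables along these lines. The port and rate are hypothetical examples; this caps connection attempts per source IP, not HTTP requests:

```shell
# cap new connections to port 80 at 20/minute per source IP,
# tracking each address separately with the hashlimit match
sudo iptables -A INPUT -p tcp --dport 80 --syn \
    -m hashlimit --hashlimit-name http --hashlimit-mode srcip \
    --hashlimit-above 20/minute -j DROP
```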