[03:09] <JamesBenson> @gbkersey:  Thanks for the info.  Even a 7-8Gbps is better than a 1Gbps.  We are using the ga-16.04-lowlatency kernel for what it's worth
[06:01] <lordievader> Good morning
[07:29] <MrMojit0> I have a fresh Ubuntu machine running and I installed Nagios using the following website: https://kifarunix.com/how-to-install-and-configure-nagios-core-on-ubuntu-18-04/
[07:29] <MrMojit0> Installation seems to be successful because I can access the website. But now comes the problem: when I reboot the machine it won't work anymore and I need to follow the complete tutorial again to get it up. But then again after a reboot it's not working.
[07:30] <MrMojit0> My Linux experience is zero. So I was hoping someone has an idea what could be wrong in that tutorial or how I can get Nagios to work again.
[07:31] <lordievader> Is the Nagios service enabled?
[07:47] <MrMojit0> lordievader: How can I check this on Ubuntu?
[07:48] <lordievader> `systemctl status nagios` (assuming here the nagios service is called that way)
[07:49] <MrMojit0> Active: active (running)
[07:53] <MrMojit0> Just did a check on Apache2 if that is running and that is also good
[07:53] <lordievader> Is it the same after a reboot?
[07:54] <MrMojit0> Let me restart to be sure
[07:55] <MrMojit0> The IP address is also reachable. Maybe it has something to do with the firewall
[07:57] <MrMojit0> Both Nagios and Apache are running successfully after a reboot
[07:58] <MrMojit0> Found the issue! Thank you
[07:58] <MrMojit0> I need to re-add the "ufw allow apache; ufw reload"
[07:58] <MrMojit0> How can I make sure those settings will stay stored and not be gone after a reboot?
[08:01] <lordievader> I think ufw takes care of that. Haven't used ufw in ages. Dislike the way it does things.
[08:02] <MrMojit0> I am just rebooting the machine again to see if ufw is enabled or not
[08:14] <MrMojit0> It is enabled, but still need to add the rules. I will look how to make those rules mandatory
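For what it's worth, rules added with `ufw allow` are written to /etc/ufw/user.rules and reloaded at boot, but only when the firewall itself is marked enabled; running `sudo ufw enable` once flips that flag persistently. A sketch of the relevant config on a stock Ubuntu install:

```shell
# /etc/ufw/ufw.conf -- written by `sudo ufw enable`.
# If ENABLED=no, no rules are loaded at boot and every
# `ufw allow ...` appears to be "forgotten" after a reboot.
ENABLED=yes
LOGLEVEL=low
```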
[13:23] <gbkersey> JamesBenson: thanks for that....
[14:05]  * foo attempts to figure out what is causing oom to murder processes
[14:05]  * foo reads https://serverfault.com/questions/134669/how-to-diagnose-causes-of-oom-killer-killing-processes
[14:24] <foo> I wonder if oom-killer can be too aggressive? What's strange is I rarely see this system swapping
[14:39] <vahnx> Hi all, I'm looking to setup a ticketing system on 18.04 LTS and looking for recommendations. I was using Spiceworks on Windows but have since moved to Linux and do not have a Windows license.
[14:39] <Ussat> ticketing system, like help desk thing /
[14:40] <vahnx> Something that I will mainly use, maybe 1 or 2 more users. Yeah for tech support.
[14:40] <cryptodan> vahnx: look at osticket
[14:40] <vahnx> Ok thanks, will do!
[14:40] <cryptodan> vahnx: https://osticket.com/
[14:41] <codefriar> any Traefik experts here?
[15:09] <admin0> hi .. my server / is 100% full, but i am not able to see what is causing it ..    / is 80G ..  du -sh /* | grep G does not even come near 80G
[15:10] <admin0> is it possible it can be something that is in memory or an open file handler . and if such is there, how do I find it out ?
[15:11] <codefriar> admin0 I once ran out of inodes, and it showed as full
[15:11] <admin0> inodes is only 4% used
[15:12] <admin0> df -h =>  /dev/mapper/cloud-root   75G   75G     0 100% /   |  df -i => /dev/mapper/cloud-root  5005312 174582  4830730    4% /
[15:14] <leftyfb> admin0: cd / ; sudo du -hs .[^.]*
[15:14] <leftyfb> admin0: that'll run against any hidden directories
[15:15] <leftyfb> admin0: cd / ; sudo du -hs .[^.]* * |grep G # this will run on everything
[15:15] <leftyfb> admin0: once you get some space, I recommend using ncdu
[15:15] <whislock> admin0: If a process is holding a file open, that space will still be consumed for filesystem allocation purposes until the process releases the lock.
[15:15] <admin0> 9.0G .
[15:15] <admin0> that is what i get
[15:15] <admin0> but df is 100% full
[15:16] <admin0> whislock, how do I locate such a process or such a file
[15:16] <leftyfb> admin0: can you pastebin exactly what commands you are running and the output please?
[15:16] <admin0> sure
[15:17] <admin0> sure .. one moment
[15:19] <admin0> whislock, thanks for the pointer .. a cron job was rm -rf'ing a file while the process was not stopped
[15:19] <admin0> restarting that process (libvirtd) cleared up the space
[15:20] <admin0> instead of rm -rf'ing the file, I'll cp /dev/null over it instead
[15:35] <admin0> thanks guys for helping
[15:36]  * admin0 sends pizza (virtual) to leftyfb and whislock :D
[15:36] <leftyfb> admin0: future reference, install ncdu
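The deleted-but-still-open situation whislock describes can be spotted without extra tooling by walking /proc: each /proc/<pid>/fd entry is a symlink, and the kernel appends " (deleted)" to targets whose file has been unlinked. A minimal Linux-only sketch (the classifier helper is the part worth testing):

```python
import os

def is_deleted_target(target: str) -> bool:
    """True if a /proc/<pid>/fd symlink target refers to an unlinked file."""
    return target.endswith(" (deleted)")

def deleted_open_files():
    """Yield (pid, target) for files a process still holds open after deletion."""
    for pid in filter(str.isdigit, os.listdir("/proc")):
        fd_dir = f"/proc/{pid}/fd"
        try:
            fds = os.listdir(fd_dir)
        except OSError:
            continue  # process exited, or we lack privileges for its fds
        for fd in fds:
            try:
                target = os.readlink(os.path.join(fd_dir, fd))
            except OSError:
                continue  # fd closed between listdir and readlink
            if is_deleted_target(target):
                yield int(pid), target

if __name__ == "__main__" and os.path.isdir("/proc"):
    for pid, target in deleted_open_files():
        print(pid, target)
```

The same information comes from `sudo lsof +L1` (files with a link count below 1), which is usually the quicker way on a box you're debugging interactively.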
[17:30] <DammitJim> how can I figure out what blocked processes I have on a server?
[17:31] <DammitJim> my monitoring system is telling me I have on average 5 blocked processes, but I don't see a D in the S column in top
[17:34] <tomreyn> is uninterruptable sleep what your monitoring system means by "blocked process", though?
[17:38] <DammitJim> tomreyn, good question... not sure
[17:39] <DammitJim> "indicates the number of processes blocked for I/O, paging, etc."
[17:42] <tomreyn> hmm yes, sounds like it should be that
[17:43] <sarnold> DammitJim: procfs(5) /proc/pid/syscall sounds vaguely enough like a blocked vs not-blocked measure for such a tool
[17:43] <sarnold> it'd be a bit silly to open, read, and close, a few thousand files for this information every N seconds of course, but maybe that's what it's doing
[17:43] <DammitJim> hhmmmm
[17:44] <tomreyn> if its source code is available to you, you could inspect what it actually does.
[17:46] <DammitJim> so, how do I get the process that is blocked?
[17:47] <sarnold> well, the thing with these kinds of measurements, is that it's all very transitory and racy
[17:47] <DammitJim> so, hard to "catch?"
[17:48] <sarnold> after all it takes ~20ms to handle a read IO operation from a spinning metal hard drive; by the time top or a similar tool has crawled through all the processes on the system, the information it has on a process is likely already out of date
[17:48] <DammitJim> oh yeah, here I'm talking about an all-flash array
[17:49] <DammitJim> and the blocked process stats from the monitoring system are reported every 5 minutes and I had this "problem" for about an hour
[17:49] <tomreyn> according to google your quote's source is https://docs.eginnovations.com/Unix_and_Windows_Servers/System_Details_Test_1.htm
[17:49] <DammitJim> yes
[17:50] <DammitJim> I'm on hold with them asking them what they are actually polling
[17:50] <tomreyn> no source code there, i assume.
[17:51] <sarnold> you could perf trace or strace the thing. it'd be drinking from a firehose though
[17:56] <DammitJim> yikes
[17:59] <dlloyd> you can filter to specific syscalls with strace
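The counter such monitoring tools most likely read is `procs_blocked` in /proc/stat (the same number vmstat shows in its "b" column); per process, "blocked" means state `D` (uninterruptible sleep) in /proc/<pid>/stat. A hedged sketch of catching the momentary offenders, assuming a Linux /proc:

```python
import os

def proc_state(stat_line: str) -> str:
    """Extract the one-character state field from a /proc/<pid>/stat line.
    The comm field is parenthesised and may itself contain ')', so split
    on the *last* closing paren rather than on whitespace."""
    return stat_line.rsplit(")", 1)[1].split()[0]

def blocked_pids():
    """PIDs currently in uninterruptible sleep (state 'D'). Transitory:
    a process can enter and leave D between two samples."""
    out = []
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/stat") as f:
                if proc_state(f.read()) == "D":
                    out.append(int(pid))
        except OSError:
            continue  # process exited mid-scan
    return out

if __name__ == "__main__" and os.path.isdir("/proc"):
    print(blocked_pids())
```

Because the state is so transitory, sampling in a tight loop (and recording timestamps) is usually needed to correlate with what a 5-minute poller reports.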
[18:01] <foo> When oom starts killing stuff, per syslog, it's not always clear what that is, correct?
[18:02] <sarnold> hmm? I'm accustomed to seeing it saying which process it killed
[18:03] <tomreyn> both pid and process name should be listed
[18:03] <sarnold> of course if it kills X11 and then all your X clients *also* die because the other end of their socket went away, that might feel a lot like the oom killer not reporting what died .. when really, it was just responsible for one process going away
[18:04] <foo> tomreyn / sarnold - thanks, but that's not *always* the process that is consuming the memory right? eg. X can consume a ton of memory, Y will get killed off as a result, correct? Or am I misunderstanding?
[18:05] <sarnold> foo: yeah, there's also some per-process scoring involved; and depending upon how much memory is shared among processes, killing "huge" ones may not actually free up much memory
[18:06] <foo> sarnold: ok, so whatever gets killed is not always the culprit. eg. I've seen a ton of different things killed off now that I think about it
[18:06] <foo> System runs nginx, postgres and a few python scripts. Attempting to figure out what is causing this
[18:07] <sarnold> yeah, the kernel tries to balance (a) killing something quickly (b) killing as little as possible (c) while also getting back as much memory as possible for the pain
[18:07] <tomreyn> the journal will report which process was killed. processes which depend on this process may also fail as a result, and won't be listed individually as part of the OOM kill record..
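The kill record tomreyn mentions can be pulled out of syslog/the journal mechanically. The exact kernel wording varies across versions, but the "Killed process <pid> (<comm>)" core has been stable for a long time; a small sketch assuming that form:

```python
import re

# Matches the kernel's OOM kill record, e.g.
# "Killed process 4321 (postgres) total-vm:1843224kB, anon-rss:1200000kB"
KILL_RE = re.compile(r"Killed process (\d+) \(([^)]+)\)")

def parse_oom_kill(line: str):
    """Return (pid, comm) if the log line records an OOM kill, else None."""
    m = KILL_RE.search(line)
    return (int(m.group(1)), m.group(2)) if m else None

def oom_victims(lines):
    """Collect all (pid, comm) kill records from an iterable of log lines."""
    return [hit for hit in map(parse_oom_kill, lines) if hit]
```

Feeding it `journalctl -k` output gives a victim timeline; as noted above, the victim is the process the killer *scored* highest, not necessarily the one whose allocations caused the pressure.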
[18:08] <foo> sarnold: thanks
[18:08] <foo> nginx looks ok, checking postgres right now too.
[18:08] <tomreyn> you can actually influence the kernel's decision making a little. but, much more reasonably, you don't want the OOM killing to happen in the first place.
[18:09] <foo> Also going to enable query logs for slower queries
[18:09] <foo> ./postgresqltuner.pl says [URGENT] set vm.overcommit_memory=2 in /etc/sysctl.conf and run sysctl -p to reload it. This will disable memory overcommitment and avoid postgresql killed by OOM killer. - I've been tracking down a memory issue with something, not sure what it is. Are we in agreement this is suggested? I assume it is but thought I'd ask
[18:10] <tomreyn> first identify which of the processes allocated more memory than they should have according to your planning, then try to see how to tune them.
[18:10] <tomreyn> if you start increasing debugging / verbosity now you already change their resource allocation
[18:12] <foo> tomreyn: "first identify which of the processes allocated more memory" - I can only do this by checking conf files, right? Is there another way?
[18:12] <tomreyn> monitoring
[18:16] <foo> tomreyn: you have suggested tools? It's so sporadic, I haven't been able to narrow it down. Running a top and sysstat and what not now
[18:16] <tomreyn> you run some services on your server. ideally as few as possible, and move others to separate servers (or VMs). you think about how much memory you want each of them (as well as the OS itself) to consume, and calculate the total memory allocation. you configure services to allocate only the amount of memory you want them to allocate (which is not always possible, but it often is more or less possible, especially with DB servers).
[18:18] <tomreyn> and you do monitoring in short enough intervals to determine what may have consumed more memory than planned. and when this happens you review its logs (maybe increase verbosity), configuration, do the tuning.
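The short-interval sampling tomreyn describes can be as simple as periodically logging per-process resident memory out of /proc (cron/systemd-timer wiring and persistence left out of this sketch):

```python
import os

def vm_rss_kib(status_text: str) -> int:
    """Parse the VmRSS line (resident set size, in kB) from /proc/<pid>/status.
    Kernel threads have no VmRSS line, so default to 0."""
    for line in status_text.splitlines():
        if line.startswith("VmRSS:"):
            return int(line.split()[1])
    return 0

def top_rss(n=5):
    """(rss_kib, pid, comm) for the n fattest processes, sampled once."""
    rows = []
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/status") as f:
                text = f.read()
        except OSError:
            continue  # process exited mid-scan
        comm = text.splitlines()[0].split(":", 1)[1].strip()  # "Name:" line
        rows.append((vm_rss_kib(text), int(pid), comm))
    return sorted(rows, reverse=True)[:n]

if __name__ == "__main__" and os.path.isdir("/proc"):
    print(top_rss())
```

Logging that every few seconds around the times the OOM killer fires is usually enough to see which service's RSS ramps before the kill, without changing the services' own verbosity.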
[18:18] <foo> tomreyn: yeah, I thought about splitting things up a bit more... namely moving postgres onto its own system. Right now postgres + nginx + various python scripts all on one server... and thus fine-tuning isn't an exact science since each fluctuates
[18:18] <tomreyn> right, DB servers should always be run just by themselves IMO.
[18:20] <tomreyn> postgresql is actually quite configurable in terms of memory allocation, nginx also, but there I find it not to be so predictable.
[18:21] <sarnold> the downside to running databases on different servers is that can add milliseconds to latency. that's probably better than minutes of latency if the oom killer has decided your database is a hog :) but still, something to keep in mind
[18:22] <tomreyn> so can a lot of other factors, yes.
[18:22] <foo> tomreyn / sarnold - yeah, I'm not opposed to that. Would definitely help control resources better
[18:23] <sdeziel> tomreyn: do you recommend to always separate the DB backend from the web frontend for security? performance? upgradability? all those?
[18:29] <foo> I know amazon has RDS. I wonder if Digital Ocean has something.
[18:29] <foo> Does anyone have any commentary on this suggestion: [URGENT] set vm.overcommit_memory=2 in /etc/sysctl.conf and run sysctl -p to reload it. This will disable memory overcommitment and avoid postgresql killed by OOM killer.
[18:30] <sarnold> foo: in isolation, I don't like the suggestion. if, after doing the analysis tomreyn suggested, you may realize it makes sense or it may not make sense
[18:31] <sarnold> foo: yes, that should drastically reduce the chances of hitting OOM, but it might also make the machine nearly unusable.
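If, after the analysis, strict accounting does turn out to be the right call, the usual advice is to pair mode 2 with an explicit ratio rather than leaving the default. A sketch of the sysctl fragment (the ratio value here is illustrative, not a recommendation):

```shell
# /etc/sysctl.d/90-overcommit.conf -- apply with `sudo sysctl --system`
# Mode 2 refuses allocations beyond swap + overcommit_ratio% of RAM.
# With the default ratio of 50 on a swapless box, half of RAM becomes
# unallocatable -- that is the "nearly unusable" failure mode above.
vm.overcommit_memory = 2
vm.overcommit_ratio = 80
```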
[18:34] <foo> sarnold: thank you. Part of my challenge is little to nothing meaningful has changed in the past month that I can see. I'm almost wondering if some library had some API change and there's some obscure threading issue due to some change which is causing some resource issue... but meh, OOM killed stuff once in feb, once in march, and 4 times this month (already). Traffic all looks nearly the same
[18:35] <sarnold> foo: that sounds a lot like the machine just isn't sized correctly for the workload
[18:37] <foo> sarnold: thank you. it's been online for 3 months. It was a recent migration from ubuntu 14.04 to 18.04. Not much has changed in the past few months but nonetheless, I agree something isn't tuned properly. I don't think gunicorn can be tuned, leaving nginx + postgres, namely. Django also runs on here.
[18:37] <BrianBlaze> how do I get an older version of mysql? every time I try to install a deb it tells me dependency issues and install -f just gives me the latest version
[18:38] <BrianBlaze> wondering if anyone can point me in a direction :)
[18:38] <sarnold> BrianBlaze: can you pastebin the whole thing? (the pastebinit package has an easy pastebinit tool that can help with this)
[18:38]  * foo sets up pg_stat_statements
[18:39] <BrianBlaze> https://pastebin.com/gEH5Li2i
[18:40] <sarnold> why do you want to install that specific version?
[18:40] <sarnold> where did you get it?
[18:40] <BrianBlaze> because this app needs mysql version between 5.5 and 5.2.24
[18:40] <BrianBlaze> sorry 5.7.24
[18:41] <sarnold> does 5.7.25 break something? or does their documentation just not know about 5.7.25 yet?
[18:41] <BrianBlaze> when I go through the install it tells me it won't work with the newest version of MYSQL and won't let me go farther
[18:41] <BrianBlaze> so yeah the latter sarnold
[18:41] <sarnold> ew
[18:41] <sarnold> alright then
[18:41] <sarnold> do you have any data in the database that you care about?
[18:42] <BrianBlaze> nah this is a fresh install
[18:42] <BrianBlaze> basically we use orangeHRM at work
[18:42] <BrianBlaze> open source
[18:42] <BrianBlaze> and I am trying to go to the latest version
[18:42] <BrianBlaze> I will worry about getting the data there after
[18:43] <sarnold> alright, cool. I think you'd be best served by apt-get purge mysql-server  -- maybe you'll need to purge other mysql packages while you're at it -- and download the 5.7.24 packages from https://launchpad.net/ubuntu/+source/mysql-5.7/5.7.24-0ubuntu0.18.04.1
[18:45] <BrianBlaze> thanks so much I will give it a shot
[18:52] <tomreyn> sdeziel: not always, not necessarily for a small test / dev / hobby project. but for anything 'serious', yes.
[18:53] <sdeziel> tomreyn: OK. I myself usually put it on the same machine to remove the network from potential source of failure. I also think that since the web app has the DB password, security-wise it isn't much worse
[18:54] <sdeziel> tomreyn: but for a bigger deployment, I guess you are right it's best to separate them
[19:02] <tomreyn> sdeziel: sure, networking is always a possible hazard (still, but not necessarily as much, in a more controlled environment than the Internet), and there is latency, as sarnold mentioned. but if you run a webserver on the same system as a database server, it already rules out a serious HA setup. (definitely, but not necessarily only) if there's server-side scripting involved on the webserver, it also means you're adding additional
[19:02] <tomreyn> attack vectors against a local vs remote database server (vectors and attacks which involve the local (e.g. file) system, such as remote file include, privilege escalation, directory traversal).
[19:03] <sdeziel> tomreyn: right, good point. It's harder to secure when both are on the same machine
[19:03] <sarnold> BrianBlaze: don't forget to dpkg hold the mysql packages to prevent security updates from replacing the specific versions you're installing
[19:05] <sarnold> BrianBlaze: apt-mark(8) can do that
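Concretely, sarnold's hold can be done either with apt-mark or with a version pin; the package names below are assumptions for a typical 18.04 mysql-5.7 split (check `dpkg -l 'mysql*'` for the actual set), and the version string comes from the Launchpad page linked above:

```shell
# Option 1: hold the installed packages so upgrades won't replace them
#   sudo apt-mark hold mysql-server-5.7 mysql-client-5.7 \
#       mysql-server-core-5.7 mysql-client-core-5.7
#
# Option 2: pin by version in /etc/apt/preferences.d/mysql-5724
# (priority > 1000 also permits downgrading to this version)
Package: mysql-server-5.7 mysql-client-5.7 mysql-server-core-5.7 mysql-client-core-5.7
Pin: version 5.7.24-0ubuntu0.18.04.1
Pin-Priority: 1001
```

Either way, note the trade-off: a held/pinned mysql stops receiving security updates until the hold is lifted.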
[19:05] <sdeziel> tomreyn: that said, the only valuable thing on the DB server is usually the DB itself
[19:05] <BrianBlaze> how true
[19:05] <BrianBlaze> thanks
[19:07] <tomreyn> sdeziel: which is the big secret trove, the crown jewels, though, right? surely not always, but in many cases DB leaks are worse than, say, application code leaks (though those can be very bad, too, exposing malpractice, dodgy policies which carried into code)
[19:07] <sdeziel> tomreyn: agreed but since the web app already has access to the DB...
[19:08] <tomreyn> sdeziel: database user access, yes, not file system access
[19:08] <tomreyn> those are very different
[19:08] <sdeziel> tomreyn: that's probably what I fail to understand
[19:09] <sdeziel> mind elaborating a little on the security implications?
[19:09] <tomreyn> if you can "select into outfile" on a backend DB server but have no means to access the data it stored into a file that is now local to the DB server, such as through a remote file include attack against PHP, then this attack vector doesn't help you at all.
[19:11] <sdeziel> and in such a case, the source of the select would have to be something other than the DB itself, is that even possible?
[19:11] <BrianBlaze> I really appreciate the input sarnold I am on my way :)
[19:11] <sdeziel> (I know very little about DBs... just enough to drop a table/DB ;) )
[19:11] <sarnold> BrianBlaze: great! :) have fun
[19:12] <sarnold> little sdeziel tables :)
[19:12] <sdeziel> hehe
[19:13] <sdeziel> https://www.xkcd.com/327/
[19:13] <tomreyn> :)
[19:15] <tomreyn> sdeziel: so imagine this scenario: there is a php application running on the webserver which is both vulnerable to remote file includes and SQL injection, and you have a mysql server as the backend. and the SQL injection is limited in that the application prevents it almost entirely, except that you can still run INTO OUTFILE sql queries successfully, where mysql would write the result of a query into a file on the local file system.
[19:16] <sdeziel> tomreyn: so far I understand from the above that you could extract stuff the mysql user has access to.
[19:16] <tomreyn> sdeziel: in this scenario, if the DB server runs on the same system as the vulnerable web application, you can access this file via remote file include. not so if the database server runs on a different system and wrote the file on that system's file system but not that of the web application
[19:17] <sdeziel> tomreyn: I (think I) understand that part but what I fail to understand is how would that be a bigger threat than leaking the full DB the web app has access to anyways?
[19:19] <tomreyn> sdeziel: it is only marginally greater. but in the scenario discussed, you can't make the web application leak the full DB its DB user has access to by any other means.
[19:19] <tomreyn> normally web applications are not meant to just read the full DB and dump it to the internet ;-)
[19:20] <tomreyn> we're well beyond the scope of this channel by the way. if anyone thinks we should move elsewhere please say so.
[19:20] <sdeziel> I'm not worried about the normal case ;) but I'd assume someone with SQL injection and interested in the DB data would simply leak it without the intermediate file
[19:20] <sdeziel> yeah, that's OT, sorry
[19:22] <tomreyn> sdeziel: it's all a matter of what the attacker can control. if they can just run any SQL statement they like against the database within the scope of the web applications' database, then surely that means they can dump it.
[19:23] <tomreyn> the scenario i meant to describe only allows the attacker very limited control over how sql statements can be modified.
[19:25] <tomreyn> tv time now, but we can talk later in #ubuntu-offtopic or elsewhere, just ping me.
[19:25] <sdeziel> tomreyn: thanks
[19:25] <sarnold> thanks for the discussion, it's been fun reading
[19:27] <tomreyn> :) and fun for me learning to understand how i can express myself better, and not mixing up the proper terms so much. i bet sarnold would have explained it much better. ;-)
[19:27] <sarnold> I wouldn't be so sure of that -- actually *using* computers isn't my forte :)
[19:28] <sdeziel> once again, I get to the conclusion I should learn more stuff to better understand things..
[19:28] <sarnold> heh, yes :)
[20:44] <JamesBenson> @gbkersey: FYI: Linux 4.4.0-145-lowlatency #171-Ubuntu SMP PREEMPT Tue Mar 26 13:17:00 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
[21:26] <gbkersey> JamesBenson: any luck with the 10Gb ?
[21:29] <teward> ehehehehehehehehe i feel privileged... xD
[21:29] <teward> I have TWO cable hookups here xD
[21:33] <teward> sarnold: mind helping me test something?