[00:16] hi. i have a 16.04 computer that includes an nfs mount in fstab. sometimes the network sucks, and during boot the share fails to mount. there is a long, long timeout when this happens. how can i change this timeout?
[01:16] Hi, I just installed a new server and I am trying to figure out the best way to reach a site via the IP address. For example, I set up siteabc.com, but I need to access it via the IP address (I haven't set up DNS yet). I have several sites on the server. Any ideas?
[01:17] Thanks
[01:45] pmp6nl: you can temporarily(!) add the domain to your local /etc/hosts
[05:44] What command can be used to reconfigure the network after an install and bring eth0 up for a DHCP request?
[07:49] eagleeyes, not sure, but you can manually edit the config file
[07:49] https://help.ubuntu.com/lts/serverguide/network-configuration.html
[07:50] nano /etc/network/interfaces
[07:50] I did it manually but can't ping out.
[07:50] then add "auto eth0" and "iface eth0 inet dhcp"
[07:51] did you reset the interfaces?
[07:51] well, I guess, where are you in the "doing it manually" process?
[07:51] do you have an IP via DHCP? Or have you just configured the interface/edited the file?
[09:33] hello
[11:03] I just disabled ipv6 on my ubuntu xenial server via sysctl and restarted my networking service. I also reloaded sysctl from the default file (sysctl -p), but netstat shows that services are still listening on ipv6. How do I stop them?
[11:03] pterodactyl: why
[11:04] I'm not using ipv6 at this time, and open ports just scare me even if the service on them is not vulnerable. So I just want to stop them.
[11:06] pterodactyl: Those services netstat says are listening on IPv6, is that "::" showing up?
[11:06] yup
[11:07] pterodactyl: That being what most services use to combine all-IPv4 and all-IPv6 by default. Unless you actually have a (global) IPv6 address configured, that doesn't matter.
[11:07] weird to disable ipv6 too, it's the future of the internet
[11:08] pterodactyl: If that is still something you want to fix for some odd reason, then it has to be fixed on a per-application basis, by configuring them to explicitly bind to 0.0.0.0 or something.
[11:08] pterodactyl: Yet, if you want an extra safety belt, it might be easier just to use ip6tables to REJECT all.
[11:09] Aside from that I agree with Ben64.
[11:09] Why not simply also use IPv6? Why would you assume that your applications would be more vulnerable over IPv6 than over IPv4?
[11:11] andol: It's not that I assume so. Looking at open ports just gives me a creepy feeling. So I thought I'd just disable them. Anyway, I guess I'll just leave them running.
[11:12] but the same ports are open on ipv4...
[11:12] pterodactyl: Or you could find some other way to deal with that anxiety, and get on the IPv6 train? :)
[11:13] toot toot, here comes the cure for running out of IPs
[11:13] andol and Ben64: Guess I have to agree with you guys.
[11:14] Good :-)
[11:25] Thanks guys. :)
[11:25] Gotta go.
[21:28] Hi! Does anyone know how to set the hard "open file limit" system-wide? I have mongo using almost a million open files... stupid mongo. And it is crashing because the limit is too low, but whatever I try, raising it has no effect.
[21:34] m0ltar: yes.
[21:36] Walex: good
[21:37] m0ltar: but it is not necessary. 'ulimit' values are inherited like env vars.
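For the fstab timeout question at [00:16], a minimal sketch of the usual fix, assuming systemd-managed mounts on 16.04 (the server/export paths and the 10s values are illustrative, not from the session):

    # /etc/fstab -- 'nofail' stops boot from blocking on a failed mount;
    # the x-systemd timeouts cap how long systemd waits for the device and the mount
    server:/export  /mnt/share  nfs  nofail,_netdev,x-systemd.device-timeout=10s,x-systemd.mount-timeout=10s  0  0

An alternative with the same options is x-systemd.automount, which defers the mount until first access instead of attempting it at boot.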
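The temporary /etc/hosts workaround suggested at [01:45] would look like this on the client machine doing the browsing (203.0.113.10 stands in for the real server IP):

    # /etc/hosts -- remove once real DNS records exist
    203.0.113.10    siteabc.com www.siteabc.com

This makes the browser send the right Host header, so the name-based virtual host for siteabc.com gets served even though public DNS is not set up yet.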
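Putting the ifupdown advice from [07:50] together in one place, assuming the classic /etc/network/interfaces setup used on releases of that era:

    # /etc/network/interfaces
    auto eth0
    iface eth0 inet dhcp

Then bounce the interface to actually request a lease:

    sudo ifdown eth0; sudo ifup eth0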
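A sketch of the ip6tables "extra safety belt" mentioned at [11:08]; the rules are illustrative and will not survive a reboot without something like the iptables-persistent package:

    sudo ip6tables -A INPUT -i lo -j ACCEPT                                 # keep loopback working
    sudo ip6tables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT  # allow replies to outbound traffic
    sudo ip6tables -A INPUT -j REJECT                                       # reject everything else over IPv6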
[21:37] m0ltar: so just set it in the script that starts Mongo.
[21:37] Walex: the system uses systemd
[21:37] which had the upper limit set to 1 million
[21:37] m0ltar: BTW Mongo should not be opening a million files or anything like that. That probably needs fixing.
[21:38] yet mongod still didn't start
[21:38] Walex: yes, agreed. But I cannot fix it all at once.
[21:38] For now I need to get it running
[21:38] Then delete some collections
[21:38] m0ltar: there is also a system-wide limit I think, but I'm not sure.
[21:38] Walex: sure, and I set it high, yet still no cigar
[21:39] m0ltar: you can check the actual limit with 'cat /proc/$PID/limits'
[21:39] It's set to 65536
[21:39] m0ltar: or after you set 'ulimit -n ...' use 'ulimit -H -a'
[21:40] Walex: yeah I did all that
[21:40] man ulimit
[21:40] $PID/limits shows 65536, yet mongo actually has almost a million files open
[21:40] verified via "lsof | grep mongod | wc -l"
[21:40] but I'm running it as root now, so maybe that does not pay attention to limits
[21:40] m0ltar: that is unlikely
[21:41] it works as root, but dies as mongodb
[21:41] lsof | grep mongod | wc -l -- now reports 1188809
[21:41] craziness
[21:41] m0ltar: 'root' can *raise* limits, but not ignore them
[21:41] ok well, I have no other explanations :)
[21:42] cat /proc/2690/limits | grep "Max open files" --- Max open files 65536 65536 files
[21:42] ulimit -n -H ---- 65536
[21:42] lsof | grep mongod | wc -l --- 1188809
[21:43] to check again, 'ls /proc/$PID/fd/ | wc -l' gives you the number of file descriptors
[21:44] the /fd count is 2107
[21:45] m0ltar: note that 'lsof' also lists mapped segments
[21:45] ah ok
[21:45] so actual files open is 2107? then why the hell does mongo complain about not having enough
[21:45] m0ltar: probably MongoDB has got 1 million 'malloc' segments
[21:46] UnknownError: 24: Too many open files
[21:46] m0ltar: sometimes error codes are reused for "similar" situations
[21:46] m0ltar: it is likely that it has run out of memory mappings
[21:46] hm
[21:47] when running under root it does seem to use a lot of memory -- in fact all of it
[21:47] and half of the used memory is also in swap
[21:47] m0ltar: I haven't used Mongo for a while, but I remember there was an issue related to that, perhaps the log
[21:47] dammit, things with mongo are worse than I thought
[21:48] m0ltar: Mongo is not very "reliable", let's say.
[21:48] i agree, and we are moving away from it very fast
[21:48] just haven't really done it fully yet
[21:48] m0ltar: also it is subject to a rolling-release model, so bug fixes usually don't get backported
[21:49] I was installing from their own repo
[21:49] latest version
[21:49] m0ltar: I can look at my notes, if you will wait around, to see what kind of similar issue I was having.
[21:49] Walex: that would be great! Thanks
[21:49] m0ltar: in my case it was an old issue, but I'll have a look. Not sure I put down a note.
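A sketch of the "set it in the script that starts Mongo" approach from [21:37], for a non-systemd launcher; the paths are illustrative:

    #!/bin/sh
    # raise the per-process FD limit before exec'ing mongod;
    # the child inherits the ulimit just like an environment variable
    ulimit -n 64000
    exec /usr/bin/mongod --config /etc/mongod.conf

(Under systemd this kind of wrapper is bypassed, which is why the LimitNOFILE= directive discussed below matters instead.)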
[21:50] I'm surprised it works under the root user though
[21:50] if it doesn't have enough mem, then how come it still works
[21:50] must be some kind of limit too
[21:51] ulimit output is the same for the root and mongodb users
[21:53] m0ltar: in the meantime try 'lsof -p $PID | less' to see what is being mapped 1 million times
[21:53] I did do that, but there are a million records, so how do you even review that :D
[21:54] m0ltar: you just scroll a bit, and if there are a million entries probably they are all similar
[21:55] Actually that output only gives me 2129 lines
[21:55] but if I don't include the pid and grep for mongod in the output, I get millions of lines
[21:55] m0ltar: then it is something else...
[21:56] m0ltar: lsof | sed 's/ .*//' | sort | uniq -c | sort -n | tail -20
[21:58] it seems like it is spawning too many children
[21:59] m0ltar: then probably the children die and it loops
[21:59] i'm scrolling through the list manually and i see the same files over and over, and the PID in the third column goes up by 1
[21:59] m0ltar: try to 'strace' one of them.
[22:00] don't really know how to strace
[22:01] i attached to the process, but then don't know what to do
[22:01] m0ltar: ahhh I remember what my issue was: the DB files grew enormously because of many transactions.
[22:01] m0ltar: you can run something like 'strace -p $PID' and see what comes out
[22:01] Process 2691 attached
[22:01] rt_sigtimedwait([HUP INT USR1 TERM XCPU], NULL, NULL, 8
[22:02] that's all that came out
[22:02] but that process is not even live
[22:02] m0ltar: uhhhh that means they are hanging.
[22:02] if i do ps aux there is no process with that pid
[22:02] m0ltar: it is a thread most likely
[22:02] yeah
[22:02] so every thread then holds a file open
[22:03] m0ltar: try 'pmap -p 2691'
[22:03] or are file handles inherited?
[22:03] m0ltar: file handles are inherited and shared.
[22:03] ok that gave me a lot of output
[22:03] ok then counting lsof output is irrelevant
[22:04] our db is def huge though... it's 311 GB now :/
[22:04] m0ltar: have you looked at 'dmesg | tail -50'?
[22:04] m0ltar: also at 'tail -50 /var/log/syslog'
[22:04] dmesg tail gives me lots of info
[22:05] but no idea what it all means
[22:05] m0ltar: if there is a resource shortage it should affect other daemons
[22:05] there are no other daemons
[22:05] m0ltar: don't joke...
[22:06] :D
[22:06] well, no other meaningful ones
[22:06] m0ltar: also, just to be sure, look at /proc/meminfo and the top lines of 'slabtop'
[22:07] m0ltar: the non-meaningful ones are likely to be affected too if there is something badly broken
[22:08] you won't believe this...
[22:08] this is so fucking idiotic
[22:09] this boggles my mind
[22:09] the service file that came with mongo (/lib/systemd/system/mongod.service)
[22:09] had limits set
[22:09] It read: LimitNOFILE=64000 # number of open files
[22:10] syslog said
[22:10] limits ignored: "64000 # number of open files" is not a valid value...
[22:10] m0ltar: yes, but that should be plenty
[22:10] W.T.F.
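A couple of diagnostic one-liners for the thread-poking above; PID 2690 is the one from this session and purely illustrative:

    ps -o nlwp= -p 2690                   # number of threads in the process
    ls /proc/2690/task | wc -l            # same count, via /proc
    sudo strace -f -p 2690 -e trace=file  # follow all threads, watch only file-related syscalls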
[22:10] it was treating the whole thing, comment included, as the limit value
[22:10] ahhhh funny
[22:10] so questions arise
[22:10] 1) why is systemd not parsing out comments
[22:10] and 2) why does mongod ship with a broken systemd file
[22:11] in fact, i think systemd unit files are ini files, and you are supposed to use ";" for comments
[22:11] possibly # also works, don't know, but i've always been using ";" because of the Windows days & ini files ;)
[22:11] m0ltar: the problem currently is understanding why there are very many MongoDB threads.
[22:11] well, it is kind of expected, because it's heavily used
[22:11] multiple servers connect with multiple threads
[22:12] m0ltar: try 'top' and then "H" to show threads
[22:13] 1-5 mongod processes
[22:13] m0ltar: there must be a lot of threads, scroll down until you see them
[22:13] oh ya there are quite a few
[22:13] m0ltar: they are all mongod?
[22:14] yeah
[22:14] or something else?
[22:14] nah, mostly mongod
[22:15] m0ltar: I just did a web search for the string "UnknownError: 24: Too many open files"
[22:15] m0ltar: there are some entries
[22:15] i did search that too
[22:15] nothing good came out of it :)
[22:16] basically my understanding is that mongod in the latest version keeps 2 FHs for each collection
[22:16] we have 2000+ collections, so that's 4000 FHs
[22:16] but the threads... i am not sure.
[22:16] most likely it's a thread per connection
[22:17] it currently has 469 connections
[22:17] so that kind of adds up
[22:19] m0ltar: 469 connections is not that huge.
[22:19] sure, but I think it spawns a new thread per connection, that's why there are so many threads
[22:19] by scrolling thru top it does look like ~500 threads
[22:19] so it kinda makes sense
[22:19] m0ltar: yes, but 469 connections should not be causing resource issues like too many FDs open
[22:20] Walex: I think the issue was that because the systemd service file setting was simply ignored, it was falling back to some default setting
[22:20] m0ltar: using KDE etc. on my laptop I have 350 processes...
[22:20] which was low, I am guessing. Although ulimit was reporting 64k
[22:21] or worse, maybe: because the systemd setting was bad, it defaulted to "null" or 0
[22:21] and thereby even 1 FD would be too many
[22:22] surprisingly mongod has no "issues" tab https://github.com/mongodb/mongo
[22:22] weird
[22:22] oh, I remember, they were using Jira
[22:22] weirdos :/
[22:22] m0ltar: the default is IIRC 1024
[22:22] Walex, well, that would definitely not be enough
[22:22] because i read it uses 2 FDs per collection
[22:23] and we have 2000+ collections
[22:23] m0ltar: that's fairly brave...
[22:23] Walex, buddy, you were so helpful. Thanks! Give me your PayPal address, I will send you some beer money :D
[22:23] m0ltar: don't worry.
[22:23] Or bitcoin or whatever, although I think I don't have many coins left in the wallet
[22:23] I am not worried... just want to share the love the only way I can virtually
[22:24] m0ltar: so 500 threads, maybe each with 2,000 memory mappings, and you get 1,000,000 mappings listed by 'lsof'
[22:24] if you have an amazon wish list, let me know, i'll buy you something nice
[22:24] yeah, it about adds up, doesn't it?
[22:24] fuck, what was supposed to be a simple *minor* upgrade turned out to be some stressful shit hahaha
[22:26] m0ltar: been there many times... I had to maintain an old version of MongoDB used by Juju for OpenStack, and the whole thing was quite unreliable, lots of race conditions, had to hand-edit the collections a few times.
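The clean fix that falls out of all this, sketched as a systemd drop-in rather than editing the packaged unit (the drop-in path follows the stock systemd convention; 64000 is the value from the shipped unit). systemd only treats # and ; as comment starters at the beginning of a line, so the comment has to live on its own line:

    # /etc/systemd/system/mongod.service.d/limits.conf
    [Service]
    # number of open files
    LimitNOFILE=64000

followed by:

    sudo systemctl daemon-reload
    sudo systemctl restart mongod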
[22:27] m0ltar: BTW the manual page on ulimits and mongodb does not have the comment error: https://docs.mongodb.com/manual/reference/ulimit/#recommended-ulimit-settings
[22:27] m0ltar: it shows the comment on the same line for 'upstart', but for 'systemd' the comments are all on the previous line.
[22:28] m0ltar: the people at Mongo that packaged it did not read their own docs :-)
[22:28] Walex: this was actually straight out of their repo!
[22:28] https://github.com/mongodb/mongo/commit/906a6f057f87fb4e51c4a698d9d6fe490fb293a2#diff-53ff8b2b2fd0259e92e2de365e2c4e27
[22:28] I found the broken commit and commented on it
[22:28] Not gonna open a Jira account for this
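To confirm the limit actually took effect after a fix like the one above (standard systemctl and /proc checks, not from the session):

    systemctl show mongod -p LimitNOFILE            # what systemd will apply
    grep 'open files' /proc/$(pidof mongod)/limits  # what the running process actually got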