[00:44] JamesBenson: I'm using BCM57711 10-Gigabit in quite a few R710/R610 all running Ubuntu 16.04 with no problems.
[05:11] is there a way to control when the system does "apt-get update" automatically?
[06:36] Good morning
=== jelly-home is now known as jelly
[15:34] oskie: well, it generally doesn't
[15:35] I'm having memory issues that have occurred 3 times in the past month. It would seem something is going on... the system mainly has nginx + gunicorn + various python scripts + postgres. I suspect nginx and postgres tuning for system specs is a good place to start, agreed?
[15:36] foo: describe "memory issues"?
[15:44] foo: *usually* I'd be looking at gunicorn or the python scripts as your culprits, but describe what you mean by "memory issues"?
[15:46] nacc: I can see MemoryError getting thrown in python. Although, upon further inspection this time... I see various issues: 2019-04-10 02:02:39,956 connectionpool 13279 - WARNING - Retrying (Retry(total=2, connect=None, read=None, redirect=None)) after connection broken by 'NewConnectionError( ... socket.gaierror: [Errno -3] Temporary failure in name resolution ... OSError: [Errno 101] Network is
[15:46] unreachable ... MemoryError ... hmmm.
[15:46] I'm beginning to blame python scripts and something it uses. Thanks teward
[15:47] There's an optimizing change we can make that I've been meaning to make. Now might be a good time. Also, someone suggested installing sysstat and having it run every minute in cron
[15:47] foo: "unreachable" means a network problem, resolution failures are DNS, and MemoryError from *python* means your Python scripts / gunicorn backend are consuming memory
[15:47] nginx just hands stuff off to gunicorn, and PostgreSQL doesn't really take up *that* much memory depending on what DB commands you're running
[15:47] teward: two separate things, right? What's strange is this both happened at the same time
[15:47] teward: yeah, relatively small data set too.
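The [05:11] question about controlling when "apt-get update" runs automatically goes unanswered in the log. For reference, on Ubuntu this is driven by the APT::Periodic settings (and, on 16.04 and later, by the apt-daily systemd timer, whose schedule can be changed with a drop-in via `systemctl edit apt-daily.timer`). A minimal sketch of the config fragment, assuming the stock /etc/apt/apt.conf.d/ layout:

```
# /etc/apt/apt.conf.d/10periodic -- values are in days; "0" disables the job
APT::Periodic::Update-Package-Lists "1";              # how often "apt-get update" runs
APT::Periodic::Download-Upgradeable-Packages "0";     # pre-download upgrades
APT::Periodic::AutocleanInterval "0";                 # "apt-get autoclean" interval
```

`systemctl list-timers 'apt-daily*'` shows when the timer will next fire.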
[15:50] gunicorn is not 'light' by the way for running Python things, so it's entirely possible that it needs to be tuned better.
[15:51] but yeah, you'll be looking at your python stuff and your gunicorn backend for the memory errors
[15:51] such events can be related, you'll need to work out which one occurred first. maybe the network link was lost and a process which depends on it was spawned many times, consuming more memory than it would if the network link had been there.
[15:51] since it's likely using all your system resources
[15:51] also what tomreyn said
[15:51] tomreyn: thank you, I'm thinking something like that happened
[15:52] rad, appreciate it y'all!
[15:52] this is just a theory i pulled out of a magician's hat, so don't rely on that to be what happened. check + compare timestamps in logs, and, yes, do what teward said ;)
[15:57] tomreyn: nope, i see you as my god. /me bows
[15:58] tomreyn: thanks ;)
[16:00] that's a bad combo, since the only thing i believe in is the existence of aliens (and i don't mean rpm)
[16:02] tomreyn: the challenge with that is we'll never see aliens. I mean, would you visit our solar system if we only had 1 star? Maybe they have another rating system
[16:02] * tomreyn chuckles
[16:03] Hi - I am planning to install ubuntu-server on a restricted network - where outgoing traffic is restricted - in order to connect to apt (gb.archive + security), what IPs do I need to whitelist? Also, do I need to open a port for GPG?
[16:09] i.e. is it just a case of doing an nslookup on gb.archive.ubuntu.com + security.ubuntu.com and whitelisting those IPs?
[16:10] yossarianuk: i don't think these A records are guaranteed to be static
[16:11] yossarianuk: it's a good use case for an HTTP proxy
[16:11] i.e. you'll need a local proxy of sorts, which is allowed to connect to * firewall-wise.
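For the restricted-network case, the HTTP proxy tomreyn suggests is normally pointed at from each client with an apt.conf fragment. A minimal sketch, where the proxy hostname and port (`proxy.internal:3128`) are placeholders for whatever your internal proxy actually is:

```
# /etc/apt/apt.conf.d/95proxy  (hostname and port are placeholders)
Acquire::http::Proxy "http://proxy.internal:3128/";
Acquire::https::Proxy "http://proxy.internal:3128/";
```

With this in place, only the proxy host needs outbound access to the archive mirrors, and the A-record instability mentioned above stops mattering to the clients.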
or, if this is not acceptable, work based off point release ISOs
[16:11] tomreyn: thanks
[16:12] yossarianuk: i didn't understand the GPG port question
[16:13] GPG, as in GNU Privacy Guard, does not run as a daemon which listens on the Internet
[16:14] hmm I thought you needed access to a port
[16:14] it's the HKP port
[16:14] https://superuser.com/questions/64922/how-to-work-around-blocked-outbound-hkp-port-for-apt-keys
[16:15] yossarianuk: That's only if you are wanting to access a key server.
[16:15] if you'll use utilities such as apt-add-repository, which look up apt repository signing keys automatically when a PPA is added, yes. otherwise, i don't think so.
[16:15] I thought that is what apt did by default?
[16:15] ok thanks
[16:16] ubuntu's archive server apt gpg signing keys are packaged, so there should be no need to look those up. either way, you could deploy apt signing keys manually to your systems.
[16:16] see /etc/apt/trusted.gpg.d/
[16:17] also "apt-key list"
[17:45] I have ideas... but... what do you see? Looks like something is pegging CPU and RAM, agree?
[17:45] uptime load average: 1.77, 3.88, 2.42
[17:58] a load average doesn't mean too much in isolation
[17:58] I've seen nearly idle machines with a load average of ~32 and machines doing strenuous work with a load average of ~2
[17:59] instead, use top or htop or similar to see which processes are using cpu; vmstat 1 to see the bi and bo, si and so columns, to see how much disk io and swap io there is
[17:59] that'll give you a much better indicator of what the machine is doing
[17:59] sarnold: oh, whoops, I forgot to link it, heh... https://paste.ofcode.org/p8dTGJX5AKZYzAtJsgAD9t
[18:00] hey, there we go! :)
[18:01] sadly I don't know this tool.. can you scrape the si and so columns?
[18:02] sarnold: sure.
[18:02] * foo checks vmstat
[18:03] sarnold: https://paste.ofcode.org/39eLF9awfa9qpXJx4RagY2Q
[18:04] foo: cool, thanks; minimal disk IO, no swapping, cpu spending a lot of time idle.
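To make the vmstat advice concrete: in the default procps `vmstat` layout, si (swap-in) and so (swap-out) are the 7th and 8th columns. A minimal sketch of a filter that flags any sample with non-zero swap activity; the column positions are an assumption about your vmstat build, so double-check against its header line:

```shell
# Flag vmstat samples where swap-in ($7, "si") or swap-out ($8, "so") is non-zero.
# NR > 2 skips the two header lines that `vmstat 1` prints first.
check_swap() {
  awk 'NR > 2 { if ($7 + $8 > 0) print "swapping: " $0; else print "ok" }'
}
# usage: vmstat 1 5 | check_swap
```

A steady stream of "ok" lines matches the "no swapping" reading sarnold gives of the paste above.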
it feels like a lightly-loaded network server to me. how'd I do? :)
[18:07] sarnold: ha! thank you :) In that case, I'll pay less attention to https://paste.ofcode.org/Vtqr9abPzHsWbELp5zJfLS - e.g. %memused and %commit seemed high (they're red when viewed in terminal)
[18:07] sarnold: I also just rolled in a fix, so that may have helped. Will keep an eye on it, thanks for the two cents! I've made notes of this, too. Been a bit rusty with troubleshooting... glad to keep notes with this now
[18:08] foo: a friend once said "unused memory is wasted memory" :)
[18:08] foo: hold on a sec..
[18:09] foo: there's weeks of excellent reading on http://www.brendangregg.com/linuxperf.html and linked pages
[18:09] I wish I had friends :(
[18:09] sarnold: thanks!
[18:09] compdoc: aww :(
[18:09] lol
[18:13] when I have a nic on 2 hosts cross-connected, make a bridge for it, attach a VM to that bridge, and create a nic in both VMs within the same subnet, shouldn't they be able to ping?
[18:14] gislaved: in theory yes. Make sure you don't have br_netfilter loaded on any of the hosts though
[18:14] sdeziel: as far as I see that is not the issue, they are two vyos VMs on a proxmox box and they cannot ping each other
[18:15] 2 proxmox boxes
[18:15] gislaved: I don't know proxmox, but surely tcpdump should give you some visibility inside those bridges
[18:16] that is possible indeed :)
[21:10] JamesBenson: I'm using BCM57711 10-Gigabit in quite a few R710/R610 all running Ubuntu 16.04 with no problems.
[21:11] gbkersey: thanks, any special drivers?
[21:11] or just install ubuntu 16 and go?
[21:11] stock.....
[21:11] what kernel do you use?
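On the br_netfilter point: when that module is loaded, bridged frames are passed through iptables, which can silently drop traffic between bridge ports. Whether to turn that off is workload-dependent (some container/firewall setups rely on it), but these are the sysctl knobs involved; the values shown are a sketch for the "plain bridge, no filtering" case, not a universal recommendation:

```
# /etc/sysctl.d/99-bridge-nf.conf
# These keys only exist while the br_netfilter module is loaded.
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-arptables = 0
```

`lsmod | grep br_netfilter` tells you whether the module is loaded at all.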
[21:12] currently Linux palm 4.4.0-142-generic #168-Ubuntu SMP Wed Jan 16 21:00:45 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
[21:13] using default params with the bnx2x module
[21:14] this is a filesystem network for a vm hosting cluster - we're primarily doing drbd over these connections using jumbo packets
[21:15] you aren't going to see 10 Gbps out of these on 11th gen servers - the pci bus isn't quite fast enough - but we do get 7-8 Gbps
[21:16] we used the BCM5709 for the pxe installs though, so I'm not sure if the installer supports bnx2x
[23:20] what bond mode is best supported with 2 direct links between 2 servers? so to say, 2 crossconnects? active/backup is no option, I think
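The [23:20] bonding question gets no answer in the log. With two back-to-back links between the same pair of servers, balance-rr (mode 0) is the usual choice, since there is no switch involved and 802.3ad would need an LACP partner on the other end. A minimal ifupdown sketch for Ubuntu 16.04 with the ifenslave package installed; the interface names and address are placeholders:

```
# /etc/network/interfaces fragment (eth2/eth3 and 10.0.0.1 are placeholders)
auto bond0
iface bond0 inet static
    address 10.0.0.1
    netmask 255.255.255.0
    bond-slaves eth2 eth3
    bond-mode balance-rr
    bond-miimon 100
```

The mirror config goes on the peer with its own address; note that balance-rr can reorder packets, which matters for some TCP tuning.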