[01:34] any nginx users in the room?
[01:41] !ask
[01:41] Please don't ask to ask a question, simply ask the question (all on ONE line and in the channel, so that others can read and follow it easily). If anyone knows the answer they will most likely reply. :-) See also !patience
[01:41] the question you just asked is the most annoying one ever to use on irc
[01:42] because if someone does answer, it annoys them waiting for you to respond
[01:45] is there any downside to setting the limit_req_zone rate to 1r/s to ensure that the limit_reqs are always fielding requests?
[01:46] I get screamed at more for asking questions incorrectly in a channel than for asking if people know about the subject
[01:46] 10:1
[01:47] downside would be it wouldn't serve someone more than once per second
[01:47] it will make your site look *slow* to me
[01:47] and I won't use it
[01:48] technically it does as of right now, which is what I don't understand
[01:48] well, this is why people say you ask incorrect questions
[01:48] limit_req_zone $uri zone=dsp_per_ex:10m rate=1r/s;
[01:48] limit_req zone=dsp_per_ex burst=10 nodelay;
[01:48] allows 10 requests per second
[01:48] however, if I do rate=10r/s and burst=1, it ignores the burst
[01:48] see how you just changed that from "I have a question" to "I have a problem I didn't want to tell you about, but let me hint at it"
[01:49] hence the 1r/s is the only way to do it
[01:49] but I'm not sure about the potential performance implications
[01:49] and don't understand why it would have been designed that way
[01:50] how did you test?
[01:50] and what does "ignores the burst" mean?
[01:50] you're skipping steps
[01:50] what you did, how you tested, what the test results were, what you expected the results to be
[01:51] limit_req_zone req 10,000r/s, server{ location /a {limit_req burst 5,000}, location /b {limit_req burst 1}}
[01:52] when I hit domain/a 5000 per sec, I get 5000 204 responses
[01:52] when I hit domain/b 5000 per sec, I get 5000 204 responses
[01:52] yes, that all looks right
[01:53] if I drop the limit_req_zone rate to 1r/s, I get 5000 204 responses for /a, 1 204 response for /b, which is what I want
[01:54] that sounds odd
[01:54] but I'm not sure if there are underlying latency or RAM issues associated with that approach, as it seems like that shouldn't work
[01:54] why would you set a burst so high?
[01:54] bursts should be kept low
[01:54] the traffic I have hitting my cluster is around 10mil req/s
[01:54] from 10 companies
[01:55] yes, so you want /a to handle 5000 for one second
[01:55] then go on at a rate of 1 per second after that first 5000?
[01:55] I want, say, 10 of those kinds of endpoints, say /1, /2, /3, etc...
[01:55] and then I want to have a /debug which I set to something low
[01:56] or.. I want to, say, choose one of those companies, say /7
[01:56] and force it to burst=1, which would basically be like a block
[01:56] you do know what burst means, right?
[01:56] how many req/s?
[01:56] no
[01:56] that's how I read it from the docs
[01:56] heh?
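For reference, the test setup just described, written out as a minimal nginx sketch — the zone name, size, and listen port are hypothetical, the rest follows the conversation:

    # shared zone keyed on the request URI: 10 MB of state, steady rate 10000 req/s
    limit_req_zone $uri zone=test:10m rate=10000r/s;

    server {
        listen 80;

        location /a {
            # up to 5000 requests above the steady rate are admitted at once
            limit_req zone=test burst=5000 nodelay;
            return 204;
        }

        location /b {
            # almost no burst headroom
            limit_req zone=test burst=1 nodelay;
            return 204;
        }
    }

At rate=10000r/s the steady rate alone admits 5000 req/s to /b, which is why burst=1 looks like it is being "ignored"; dropping the rate to 1r/s makes the burst value the visible limit instead.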
[01:56] how many requests BEFORE it uses the limit_req_zone r/s value
[01:57] as its name says, it's the burst setting
[01:57] a normal webpage has, say, 50 pictures
[01:57] the burst allows you to load all the pictures
[01:57] interesting
[01:57] before you hit the limit and it slows you down
[01:58] oh, so that makes more sense then
[01:58] so something like 10r/s with a 50 burst
[01:58] load page and images
[01:58] ok, so if I set the burst to 10,000
[01:58] then as users go page to page using the same images, they won't go over that 10r/s again
[01:58] that means it will handle 10k a sec until they exceed that
[01:58] no
[01:58] it means it will handle 10k
[01:58] not per second
[01:59] oh... so there's no way to set a rate then
[01:59] using limit_req
[01:59] that 10k should be refilled at the rate of the r/s setting, 1r/s
[01:59] so if I need one endpoint to fire at 10k/s and the other at max 1/s
[01:59] hmm, that is what the zone setting does
[01:59] you didn't set a zone
[01:59] limit_req_zone $uri zone=dsp_per_ex:10m rate=1r/s;
[02:00] so if I were to set it to, say
[02:00] limit_req_zone $uri zone=dsp_per_ex:10m rate=10000r/s;
[02:00] I don't see a limit_req zone=
[02:00] I don't see a limit_req zone=dsp_per_ex
[02:00] well, then I have this:
[02:00] limit_req zone=dsp_per_ex burst=1 nodelay;
[02:00] which does absolutely nothing
[02:01] when rate=10000r/s
[02:01] well, that would say 10k per second is allowed
[02:01] no bursting
[02:01] so just straight 10k/sec
[02:01] so then maybe the question should be... is there a way to set separate zone rates per endpoint
[02:01] as I tried wrapping limit_req_zone in if clauses, and that didn't work
[02:02] why?
[02:02] why would you do that?
[02:02] because I need to limit the number of requests per endpoint
[02:02] yes, did you look at the manual?
[02:02] http://nginx.org/en/docs/http/ngx_http_limit_req_module.html
[02:02] look at the example right at the top
[02:02] define each limit_req_zone
[02:02] then below, assign that zone to a location
[02:03] since each zone has its own tracking and rates
[02:03] each location gets whatever you set
[02:03] so you mean something like
[02:03] if you need to limit 50 locations, define 50 limit_req_zones
[02:04] limit_req_zone "endpoint1" ....; limit_req_zone "endpoint2" ....; etc?
[02:04] but I don't see that limit_req_zone wrapped at all, do you?
[02:04] no
[02:04] limit_req_zone zone=endpoint1 ....
[02:04] ohhhh
[02:05] * patdk-lap has never used or touched nginx before
[02:05] another reason your first question was pointless
[02:05] limit_req_zone $uri zone=e1 rate=10000r/s; limit_req_zone $uri zone=e2 rate=1r/s
[02:05] no
[02:05] maybe
[02:06] what $uri?
[02:06] what are you attempting to limit?
[02:06] number of requests per $uri
[02:07] I guess
[02:07] seems strange to me :)
[02:07] I have: domain.com/(16charhash)
[02:07] company1 has hash1, and they have, say, a pool of 10 ips making 1000 qps each
[02:08] my issue was that company3 had a machine at some ip that started wrecking me at 40k qps
[02:08] the limit_req is sitting in front of a proxy_pass
[02:08] and it was choking the proxy_pass
[02:09] we told them to cut it out.. it took them 8 hours to turn it down
[02:09] if you want to limit the 16charhash to 10k per second, use the e1 limit_req_zone
[02:09] then use e2 for the one you want to be lower
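A sketch of the layout being suggested: one limit_req_zone per rate class, each location assigned its zone. The e1/e2 names come from the conversation; the sizes, hash paths, and upstream are hypothetical:

    limit_req_zone $uri zone=e1:10m rate=10000r/s;
    limit_req_zone $uri zone=e2:10m rate=1r/s;

    upstream backend {
        server 127.0.0.1:8080;
    }

    server {
        # a normal company endpoint
        location /0123456789abcdef {
            limit_req zone=e1 burst=10000 nodelay;
            proxy_pass http://backend;
        }

        # a misbehaving company, throttled to nearly nothing
        location /fedcba9876543210 {
            limit_req zone=e2 burst=1 nodelay;
            proxy_pass http://backend;
        }
    }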
[02:09] yup.. that's what I was thinking.. basically e2 I would prob name the hash
[02:09] like zone=hash_0123456789abcdef
[02:09] so I can keep track of it fairly easily
[02:11] see where it says, limitation is done using the leaky bucket method
[02:11] I was trying to set the $uri thing.. totally didn't realize you could just make different zones by zone name and query them directly
[02:11] the burst is how many it can handle, the r/s is how fast the burst is refilled
[02:11] yup, see that
[02:11] so you probably will want to set burst=r/s
[02:12] well, >=
[02:12] got it... so if the rate is at 10,000r/s
[02:12] and we let them, say, burst at slightly higher than that
[02:12] for your 1r/s, probably set it to something sane, maybe 10, or 1 if you really really want
[02:12] if they go over, then they're at the mercy of the r/s catching up
[02:12] yep
[02:12] so if they sustain higher than 10k, it takes longer and longer to adjust
[02:13] that makes perfect sense
[02:13] thanks, appreciate it
[02:13] we've been doing consistent hash upstream stuff and all sorts of other stuff.. been literally at it for 2 weeks, so I'm a bit burnt out
[02:14] we've got about 200 4-core machines in the router/upstream config by now
[02:14] so it's a bit brutal to keep track of
[02:17] and I'm assuming I could make something like
[02:17] zone=production rate=10000r/s; zone=choked rate=1r/s
[02:18] and just assign like 10 endpoints to production and 2 to choked, and update configs as necessary if people misbehave?
[02:18] sure
[02:19] you might need to increase that 10M to something larger when using $uri though
[02:20] you will know when you start serving up lots of 503's
[02:22] got it... the 10M I'm guessing is for sessions?
[02:22] for the hash table
[02:23] it will add $uri and how many r/s it can have
[02:23] so the longer the $uri and the more unique $uri's, the more space it will use
[02:23] on our entire system, we only have 10 $uri
[02:23] since it's just a list of 10 companies, 1 hash per
[02:23] so not likely to be an issue now
[02:24] we do have a similar setup, where it's IP address based
[02:24] ip's are more predictable, 128 bytes per entry
[02:24] since those requests come in by uri, and they have POST json which we decode, get an IP address out of, proxy_pass and use the consistent hash upstream
[02:24] so 10M will allow you like 82k ip's
[02:24] and the machines catching those are going to have limits on them as well per ip.. so that'll happen then
[02:25] perfect... we allocated about 4G per machine RAM-wise for scaling out
[02:26] so if you're rejecting non-hash uri's before they hit that system, then you don't have to worry about some vulnerability scanner filling that $uri limit
[02:27] that's a great point.. we haven't seen anything like that as of yet, at least at scale
[02:27] it shouldn't matter
[02:28] but we are 204'ing those at the moment and don't have a lock on them
[02:28] unless you have a *not used much* client
[02:28] and they attempt to use it after the scan was going on and filled it
[02:28] for the ones active before the scan, they would be fine
[02:29] yeah, we manually load all the hash directives first before we let anything else hit
[02:29] no, you can't control this
[02:29] it will load and remove them as they are used
[02:29] how so?
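As an aside, the zone-sizing numbers above work out directly (using the 128-bytes-per-entry figure from the conversation):

    10m zone = 10 * 1024 * 1024 = 10,485,760 bytes
    10,485,760 / 128 bytes per state ≈ 81,920 tracked keys  (the "82k ip's")

So with only 10 known hashes a 10m zone is vastly oversized, and it only becomes a concern if arbitrary $uri values are allowed to create entries.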
[02:29] the only way to control it is some other limit or server blocking it
[02:30] limit_req_zone $uri zone=e1:10M rate=10000r/s
[02:30] it will load the $uri into e1 when something hits the webpage
[02:30] if the rate is completely refilled, it will likely remove it, to make room for others
[02:31] since it's back at the full rate, there's no need to track what the rate is anymore
[02:31] think of it more as if you were limiting by ip address
[02:31] if it didn't remove them
[02:31] only the first 82k ip's to use your server would EVER be able to view your website
[02:31] till you restarted nginx
[02:32] ahh, got it
[02:34] well, the good news about this entire setup is that we purposefully hash so the endpoints are not public
[02:34] it's not a website per se
[02:35] so most of the rogue fires will end up in a sort of blackhole
[02:39] thanks again, really appreciate it
[03:09] sarnold: Hi! Re the nfs problem we discussed the other day, I filed bug #1614261. Got a workaround; it did not help much. But thanks for the tip.
[03:09] bug 1614261 in nfs-utils (Ubuntu) "RPCSVCGSSDOPT is ignored by boot script" [Undecided,New] https://launchpad.net/bugs/1614261
[03:58] bosco, join #yourhttpserver
=== JanC is now known as Guest65624
=== JanC_ is now known as JanC
[05:56] Hi, can anyone help with constant intermittent email/dovecot imap authentication problems? I keep getting errors: pam_unix(dovecot:auth): authentication failure; logname= uid=0 euid=0 tty=dovecot ruser=info-website.co.uk rhost=::1 user=info-website.co.uk
[05:56] also this error: auth: PAM unable to dlopen(pam_systemd.so): /lib/security/pam_systemd.so: cannot open shared object file: No such file or directory
[05:57] I don't know if that has anything to do with the auth problem but I get this too: auth: PAM adding faulty module: pam_systemd.so
[05:57] I'm running ubuntu 14.04 LTS
=== markthomas_ is now known as markthomas
=== andyjones2001_ is now known as andyjones2001
=== froike- is now known as froike
=== beisner- is now known as beisner
=== fidothe_ is now known as fidothe
=== NetworkingPro_ is now known as NetworkingPro
=== masACC is now known as maswan
=== Sling_ is now known as Sling
=== Pici` is now known as Pici
[13:57] question, if you were setting up a HA/load-balancing web server (two hosts), would you set up the MySQL server on the master and configure the slave for replication, or host the MySQL server on a third host?
=== JanC_ is now known as JanC
[14:33] sikun: I'm working on a HA setup of MariaDB (the MySQL fork) and my plan is a MariaDB Galera cluster with a couple of haproxy servers in front, with a master/slave setup with Corosync/Pacemaker
[14:36] sikun: this will be in an all-virtualised environment (we have 250ish VMs currently, physical machines are down to a small fraction of that)
[14:42] RoyK, ah nice
[14:43] RoyK, for what I'm building I am utilizing physical boxes until the funds are available to build a proper virtualization environment that is capable of HA
[14:44] or I should say, until I can prove the need for the equipment to obtain a loan or whatnot
[14:45] two machines should be sufficient for HA with KVM - I set up a test system with that a few years back
[14:45] separate, shared storage is recommended
[14:46] sadly, not the hypervisor I use, but still, two machines is good enough for what I do use.
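For the curious, a rough haproxy fragment for the kind of setup RoyK describes (haproxy in front of a Galera cluster, with Corosync/Pacemaker floating a virtual IP between the haproxy nodes) — the addresses and check user are hypothetical, and mysql-check needs that user created in MariaDB first:

    # /etc/haproxy/haproxy.cfg (fragment)
    listen galera
        bind *:3306
        mode tcp
        balance leastconn
        # health checks log in as this (unprivileged) MariaDB user
        option mysql-check user haproxy_check
        server db1 192.168.1.11:3306 check
        server db2 192.168.1.12:3306 check
        server db3 192.168.1.13:3306 check backup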
[14:46] which hypervisor is that?
[14:46] Hyper-V
[14:46] *blargh*
[14:46] lol
[14:48] I used to work with Hyper-V, and although that was four years ago and I guess a lot has happened since then, I really didn't like it
[14:48] I have one host right now, it is a bit old.. but it still performs amazingly, and with the hardware upgrades I ordered that should be here next week it'll at least last me a good 6 months to a year
[14:49] I hear that a lot, and I have worked with it since 2008 when it was total garbage.. but it really has become a very good hypervisor
[14:49] I even set up a KVM system alongside it to run Linux VMs, since any ubuntu VM we tried to put on Hyper-V lost its network connection under high load, no error messages, nothing in the logs, neither on the ubuntu machines nor on the hyper-v hosts
[14:50] this was on win2k8, yes
[14:50] the data center I work at, they used Hyper-V way back when and would have VMs just disappear
[14:50] HAHA
[14:50] and that was also on 2k8
[14:51] 2k12r2 is fantastic, I'm very excited for 2k16 to release
[14:51] I try to use the Hyper-V 2012R2 core install for a vm host whenever I can
[14:51] I started working with vmware some 3-4 years back and I'm rather excited about what it can do and how things just work
[14:52] how many hosts?
[14:52] VMware is good, I'm not saying it isn't by any means.. but I extremely dislike the licensing.. how much you have to pay to be able to do certain things
[14:52] I didn't say it was cheap
[14:52] lol, true
[14:53] well, I downsized the hosts to one at the moment, I had 4
[14:53] the one remaining host was purchased outright, so that's why I still have it
[14:54] what sort of hw?
[14:55] I'm working on getting an environment ready for a possible client, I have a meeting with him next week, and if he decides he wants to move all of his services to my infrastructure, I'll be extremely happy but also stressed as hell, lol
[14:55] HP Proliant
[14:55] blade things?
[14:55] it's dated... it does need to be decommissioned, but I'm going to offset the load on it by using other servers
[14:56] no, it is a DL160 G6
[14:56] we only use blade servers these days
[14:56] dual xeon quad core, 96GB of RAM, with more RAM that should be coming next week along with all new hard drives.
[14:57] that's what I'm looking into for a replacement for everything
[14:57] that is, we actually bought a 4U server a few months back, since some of our scientists insist on using Stata on Windows when they really should have been using R on a supercomputer
[14:57] throw an Intel Xeon Phi or a Kepler in a server and let 'em go to town
[14:58] quad socket dell thing with four 8-core CPUs clocked the highest we could get and half a terabyte of RAM
[14:58] very nice
[14:58] $50k or so :P
[14:58] I was checking out the specs of a Lenovo rack server, I think it was Lenovo, but it had dual 24-core Xeons
[14:59] the data center I work at, their VMware cluster is total garbage..
[15:00] we have three clusters
[15:00] one for test/lab/etc
[15:00] one for the important stuff with new hosts
[15:01] and one with older machines (and thus older instruction sets, say, 4-5 yo) for medium importance machines
[15:01] resources are so low on this cluster... I can't even get my requests for a test VM to be spun up, oh wow.. it can't afford 512MB, 4 vCPUs and 20GB for 3 days?
[15:01] pathetic
[15:02] I end up having to utilize my personal equipment to spin up test VMs
[15:03] all my personal hardware is old, don't get me wrong.. but even when it was at 90% utilization it would outperform that VMware cluster
[15:05] hehe - perhaps they should get a few new hosts ;)
[15:05] ha... we have 9.2% free space on the SAN, we're screwed because getting a $50k SAN isn't going to happen anytime soon
[15:05] beefed up with a ton of memory, since that's where the bottleneck usually is
[15:06] they are actually loaded with RAM
[15:06] what sort of SAN?
[15:06] the SAN is garbage
[15:06] Dell EqualLogic
[15:06] haha - we have EQL as well
[15:06] and I know exactly what you mean
[15:08] the whole load balancing question is actually for a potential client of mine, not the company I work for. ha
[15:10] even colo'ing in the data center where I work isn't cheap... I have occasionally got some bigger discounts by allowing them to temporarily utilize my hardware
[15:13] apropos load balancing... we have two shelves with 100TiB net storage each in an equallogic storage group, and they're supposed to stripe across the two. Curiously, lately one of them has been running at 100% utilisation while the other is at 60%
[15:14] slightly odd
[15:14] not throwing any errors?
[15:14] no, and the only debug reports you can get out of an equallogic system are encrypted with Dell's public key, so they can only be read by Dell
[15:16] omg I hate that bullshit.. I want to rip the two Barracuda spam filters out of the rack and go all Office Space on them for that same reason... oh hey, the twins are pegged at 100% CPU utilization and the queue is now up to 3k messages, but can I diagnose what's wrong? nope
[15:17] and of course barracuda support, when they remote in, are always like, let's reboot these quick... reboot it and I will hunt you down and beat you to death with a keyboard
[15:19] hmm... three EMC Isilon IQ36000X 36TB units for $2,500
[15:21] I'd rather use something homegrown
[15:22] like some boxes with ZFS and iSCSI
[15:22] ah
[15:22] I have yet to play around with ZFS in detail
[15:22] I've worked with it for 6-7 years
[15:23] nice
[15:25] on my Proliant DL160 G6, the damn RAID controller failed.. hopefully the one I ordered shows up Monday
[15:26] sikun: I guess we should take this to #ubuntu-offtopic before someone complains ;)
[15:26] good idea
[15:58] I've noticed a weird thing with an lxc container on 16.04. inside the container 'free -m' reports 350mb used, while lxc-info reports "memory use" as 20gb. for other containers the numbers match.
[16:50] hi, hope someone can help me. I'm trying to install ejabberd on Ubuntu 16.04 from the repos, but installation always fails because of a missing pid file:
[16:50] ejabberd.service: PID file /run/ejabberd/ejabberd.pid not readable (yet?) after start: No such file or directory
[16:50] does anybody have a clue what to do?
[17:07] anyone know of a reason why my system could be completely ignoring my sysctl time_wait setting?
[17:07] it's just not interested in using that value
[17:08] ?
=== Daviey_ is now known as Daviey
[20:46] Hi
[20:46] I'm observing a strange ssh issue
[20:46] When I try to login as root it works like a charm
[20:46] But when I'm trying to login using ldap + active directory + pam - it takes up to 30-60 seconds
=== RoyK^_ is now known as RoyK
[20:51] Did you check DNS connectivity?
[21:04] Yeah bekks, I think DNS is working fine
[21:05] Do you think, or did you check? :)
[21:09] Hi, has anyone upgraded from Ubuntu server 14.04.3 to 16.04? Going to do it shortly and wanted to check for any problems
[21:33] also, I'm running an older kernel on my 14.04 server. does anyone know if linux-image-generic or linux-image-server is installed by default, and whether I could reinstall it to get the latest stable kernel again? Or do server ubuntu installs not have this package installed by default?
[22:05] bekks: I don't know how to verify this :(
[22:06] you can do that using nslookup and/or dig
=== med_ is now known as Guest25180
[22:23] bekks: But what should I check? :)
[22:23] I mean, what could be causing the problem?
[22:23] Failed DNS responses, timeouts while waiting, switching to the next DNS server available, etc.
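A quick way to actually check the failure modes bekks lists, assuming the slow logins happen on the server itself (the hostnames and addresses here are placeholders):

    # which resolvers is the box configured to use?
    cat /etc/resolv.conf

    # time a forward lookup against each configured resolver; a multi-second
    # wait or a timeout here usually matches the 30-60 second login delay
    time dig @192.0.2.53 dc1.example.local

    # sshd (with UseDNS enabled) and many pam/ldap stacks also do reverse
    # lookups of the client address
    time dig -x 192.0.2.10

A dead first nameserver in resolv.conf is a classic cause: every lookup waits for it to time out before trying the next server.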