Open up /etc/nginx/nginx.conf and replace the events{} block as follows (note that worker_rlimit_nofile goes outside the events{} block, at the main level of the file):
events {
# The key to high performance - have a lot of connections available
worker_connections 19000;
}
# Each connection needs a filehandle (or 2 if you are proxying)
worker_rlimit_nofile 20000;
Now, still in /etc/nginx/nginx.conf but this time in the http{} block, make sure keepalives are turned off:
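The snippet for that step isn't reproduced here, but (assuming the goal is simply to disable HTTP keepalives entirely) zeroing out the keepalive timeout inside http{} is one straightforward way to do it:
# Disable HTTP keepalives -- every request opens and tears down its own TCP connection
keepalive_timeout 0;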
Finally, we'll need to make sure nginx has access to the ridiculous number of file handles it's going to need to service all those concurrent requests. We do that by editing /etc/default/nginx:
# Note: You may want to look at the following page before setting the ULIMIT.
# http://wiki.nginx.org/CoreModule#worker_rlimit_nofile
# Set the ulimit variable if you need defaults to change.
# Example: ULIMIT="-n 4096"
ULIMIT="-n 65535"
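Keep in mind that /etc/default/nginx is only read when nginx's init script starts the service, so restart nginx after saving your changes. On the Ubuntu-style setup assumed here, that's:
sudo service nginx restart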
Almost done, but we're going to tune /etc/sysctl.conf, too, due to the ridiculous amount of traffic we're planning to handle. Tuning the Linux kernel for high network throughput could be a series of articles all on its own, so I'm not going to go into what each of these settings does individually. The CliffsNotes version: 1) these settings make the kernel less likely to interpret the ridiculously high volumes of traffic we're going to be dealing with as some kind of DoS attack, and 2) please don't blindly apply them to production equipment "because it's awesome." They did help produce better and more consistent results in the testing we're doing here, but that doesn't necessarily mean they're the best settings everywhere.
TL;DR: here are the flood-related tweaks I made in sysctl.conf on the testing servers (not on the router itself):
kernel.sem = 250 256000 100 1024
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 4194304
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 262144
net.ipv4.tcp_wmem = 262144 262144 262144
net.ipv4.tcp_rmem = 4194304 4194304 4194304
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.tcp_mem = 1440715 2027622 3041430
Once again, a reboot or a sudo sysctl -p applies this stuff, and you're ready to start abusing nginx.
The testing script
This part was pretty simple. Just run a bunch of HTTP tests as described in last month's article: 10K, 100K, and 1M files fetched with concurrency (the number of simultaneous clients running) of 10, 100, 1,000, and 10,000 apiece.
#!/bin/bash
mkdir -p ~/tests/$1
ulimit -n 100000
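# ab flags used below: -r = don't abort on socket errors, -t 180 = run each test
# for up to 180 seconds, -c = number of concurrent clients, -s 240 = wait up to
# 240 seconds for any single response before timing out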
ab -rt180 -c10 -s 240 http://192.168.99.100/10K.jpg 2>&1 | tee ~/tests/$1/$1-10K-ab-t180-c10-client-on-LAN.txt
ab -rt180 -c100 -s 240 http://192.168.99.100/10K.jpg 2>&1 | tee ~/tests/$1/$1-10K-ab-t180-c100-client-on-LAN.txt
ab -rt180 -c1000 -s 240 http://192.168.99.100/10K.jpg 2>&1 | tee ~/tests/$1/$1-10K-ab-t180-c1000-client-on-LAN.txt
ab -rt180 -c10000 -s 240 http://192.168.99.100/10K.jpg 2>&1 | tee ~/tests/$1/$1-10K-ab-t180-c10000-client-on-LAN.txt
ab -rt180 -c10 -s 240 http://192.168.99.100/100K.jpg 2>&1 | tee ~/tests/$1/$1-100K-ab-t180-c10-client-on-LAN.txt
ab -rt180 -c100 -s 240 http://192.168.99.100/100K.jpg 2>&1 | tee ~/tests/$1/$1-100K-ab-t180-c100-client-on-LAN.txt
ab -rt180 -c1000 -s 240 http://192.168.99.100/100K.jpg 2>&1 | tee ~/tests/$1/$1-100K-ab-t180-c1000-client-on-LAN.txt
ab -rt180 -c10000 -s 240 http://192.168.99.100/100K.jpg 2>&1 | tee ~/tests/$1/$1-100K-ab-t180-c10000-client-on-LAN.txt
ab -rt180 -c10 -s 240 http://192.168.99.100/1M.jpg 2>&1 | tee ~/tests/$1/$1-1M-ab-t180-c10-client-on-LAN.txt
ab -rt180 -c100 -s 240 http://192.168.99.100/1M.jpg 2>&1 | tee ~/tests/$1/$1-1M-ab-t180-c100-client-on-LAN.txt
ab -rt180 -c1000 -s 240 http://192.168.99.100/1M.jpg 2>&1 | tee ~/tests/$1/$1-1M-ab-t180-c1000-client-on-LAN.txt
ab -rt180 -c10000 -s 240 http://192.168.99.100/1M.jpg 2>&1 | tee ~/tests/$1/$1-1M-ab-t180-c10000-client-on-LAN.txt
Make sure you have appropriately sized JPEGs at /var/www/10K.jpg, /var/www/100K.jpg, and /var/www/1M.jpg (see the note below if you need to generate them), and you're ready to go. If you saved the script as test.sh, you invoke it as ./test.sh routername and give it a while. Some 48 minutes later, you'll have a bunch of raw ApacheBench results in appropriately named files under ~/tests/routername, which will include the throughput numbers we looked at in last month's article along with a fair amount of additional information that tells you more about the web server than it does about the network the transactions took place on. (Obviously, the same script with a different IP address and with filenames ending in "-client-on-WAN" was used for the download side of the testing.)
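If you don't already have files of those sizes sitting on the web server, one quick way to generate stand-ins is dd. For raw throughput testing the contents don't matter, so these don't need to be valid JPEGs; the paths below just match what the script expects:
# Create dummy payloads on the web server -- size is all that matters here,
# so random bytes with .jpg names are good enough for ab throughput runs
sudo dd if=/dev/urandom of=/var/www/10K.jpg bs=1K count=10
sudo dd if=/dev/urandom of=/var/www/100K.jpg bs=1K count=100
sudo dd if=/dev/urandom of=/var/www/1M.jpg bs=1M count=1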