I have a setup in which two web servers act both as an nginx load balancer and as the backends themselves. The distribution is Debian Wheezy, and the configuration is identical on both servers (quad-core, 32 GB RAM).
TCP
#/etc/sysctl.conf
vm.swappiness=0
net.ipv4.tcp_window_scaling=1
net.ipv4.tcp_timestamps=1
net.ipv4.tcp_sack=1
net.ipv4.ip_local_port_range=2000 65535
net.ipv4.tcp_max_syn_backlog=65535
net.core.somaxconn=65535
net.ipv4.tcp_max_tw_buckets=2000000
net.core.netdev_max_backlog=65535
net.ipv4.tcp_rfc1337=1
net.ipv4.tcp_fin_timeout=5
net.ipv4.tcp_keepalive_intvl=15
net.ipv4.tcp_keepalive_probes=5
net.core.rmem_default=8388608
net.core.rmem_max=16777216
net.core.wmem_max=16777216
net.ipv4.tcp_rmem=4096 87380 16777216
net.ipv4.tcp_wmem=4096 16384 16777216
net.ipv4.tcp_congestion_control=cubic
net.ipv4.tcp_tw_reuse=1
fs.file-max=3000000
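These settings only take effect once loaded into the kernel. A quick sanity check (assuming a standard Linux layout; `sysctl -p` requires root):

```shell
# Settings in /etc/sysctl.conf are loaded with `sysctl -p` (as root).
# A currently active value can be read without root via /proc:
cat /proc/sys/net/core/somaxconn
```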
Nginx
#/etc/nginx/nginx.conf
user www-data www-data;
worker_processes 8;
worker_rlimit_nofile 300000;
pid /run/nginx.pid;
events {
worker_connections 8192;
use epoll;
#multi_accept on;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 10;
types_hash_max_size 2048;
server_tokens off;
open_file_cache max=200000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 5;
open_file_cache_errors on;
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
gzip_min_length 10240;
gzip_disable "MSIE [1-6].";
}
server {
listen
When simulating connections from 3 clients with
ab -c 200 -n 40000 -q https://www.example.com/static/file.html
why do I get
upstream timed out (110: Connection timed out) while connecting to upstream
in the nginx log? Upstream timeouts at 600 concurrent connections to a static file!
While the ab test is running, I can see the following on the first backend node:
# netstat -tan | grep ':8080 ' | awk '{print $6}' | sort | uniq -c
2 LISTEN
55 SYN_SENT
37346 TIME_WAIT
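The pile-up is plausible as a back-of-the-envelope calculation: each proxied request opened a fresh upstream connection, which then sat in TIME_WAIT (typically 60 seconds on Linux), so the ephemeral port range configured above caps the sustainable rate of new upstream connections:

```shell
# Ports available per net.ipv4.ip_local_port_range (2000-65535),
# divided by the ~60 s a closed connection spends in TIME_WAIT:
echo $(( (65535 - 2000 + 1) / 60 ))   # prints 1058
```

Above roughly a thousand new upstream connections per second, the proxy runs out of local ports, and new connection attempts stall, which matches the `SYN_SENT` sockets and upstream timeouts seen here.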
OK, I don't like reading manuals, but to answer my own question:
nginx close upstream connection after request
solved it. So what was the problem: I had configured the upstream to use keepalive, but the nginx docs recommend also setting the following options in the proxy location:
proxy_http_version 1.1;
proxy_set_header Connection "";
That's it. The thousands of TIME_WAIT connections on the backends are gone; now there are only about 150 instead of 30-40k.
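For reference, a minimal sketch of the resulting proxy configuration (the upstream name `backend`, the server addresses, the `keepalive` count, and the location path are assumptions; port 8080 is taken from the netstat output above):

```nginx
upstream backend {
    server 127.0.0.1:8080;
    server 10.0.0.2:8080;
    # Keep up to 64 idle connections per worker open to the backends
    keepalive 64;
}

server {
    location /static/ {
        proxy_pass http://backend;
        # Both directives are required for upstream keepalive to work:
        # HTTP/1.1 and an empty Connection header, per the nginx docs
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
```

Without these two directives, nginx speaks HTTP/1.0 to the upstream and sends `Connection: close`, so every request opens and tears down its own backend connection, which is exactly what filled the port range with TIME_WAIT sockets.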