Nginx-ru mailing list archive (nginx-ru@sysoev.ru)
Re[2]: nginx for serving large files
> When the speed problems occur, what do netstat -m and vmstat -z show?
netstat -m shows the following:
2633/1267/3900 mbufs in use (current/cache/total)
285/223/508/25600 mbuf clusters in use (current/cache/total/max)
285/193 mbuf+clusters out of packet secondary zone in use (current/cache)
4/190/194/12800 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/6400 9k jumbo clusters in use (current/cache/total/max)
0/0/0/3200 16k jumbo clusters in use (current/cache/total/max)
1244K/1522K/2767K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
2297/4409/6656 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
14407662 requests for I/O initiated by sendfile
0 calls to protocol drain routines
vmstat -z shows this:
ITEM SIZE LIMIT USED FREE REQUESTS FAILURES
UMA Kegs: 128, 0, 81, 9, 81, 0
UMA Zones: 480, 0, 81, 7, 81, 0
UMA Slabs: 64, 0, 1559, 742, 2610815, 0
UMA RCntSlabs: 104, 0, 448, 181, 54966074, 0
UMA Hash: 128, 0, 2, 28, 6, 0
16 Bucket: 76, 0, 12, 38, 54, 0
32 Bucket: 140, 0, 13, 43, 64, 0
64 Bucket: 268, 0, 16, 26, 118, 11
128 Bucket: 524, 0, 1091, 547, 4301205, 3198
VM OBJECT: 124, 0, 13669, 47897, 283760495, 0
MAP: 140, 0, 7, 21, 7, 0
KMAP ENTRY: 68, 57344, 4095, 1169, 252230719, 0
MAP ENTRY: 68, 0, 17055, 4897, 1877824034, 0
DP fakepg: 72, 0, 0, 53, 10, 0
mt_zone: 1024, 0, 238, 126, 238, 0
16: 16, 0, 3266, 997, 265381849, 0
32: 32, 0, 2485, 905, 182445466, 0
64: 64, 0, 8405, 3926, 4996948402, 0
128: 128, 0, 2522, 718, 322537685, 0
256: 256, 0, 3611, 199, 276136230, 0
512: 512, 0, 148, 132, 5514109, 0
1024: 1024, 0, 117, 335, 153608373, 0
2048: 2048, 0, 315, 195, 129613, 0
4096: 4096, 0, 412, 147, 111997880, 0
Files: 72, 0, 963, 1475, 983193942, 0
TURNSTILE: 76, 0, 1065, 1431, 42995, 0
umtx pi: 52, 0, 0, 0, 0, 0
PROC: 684, 0, 141, 744, 4732007, 0
THREAD: 516, 0, 898, 166, 39036148, 0
UPCALL: 44, 0, 0, 0, 0, 0
SLEEPQUEUE: 32, 0, 1065, 1308, 42995, 0
VMSPACE: 232, 0, 91, 742, 4731957, 0
audit_record: 856, 0, 0, 0, 0, 0
mbuf_packet: 256, 0, 285, 193, 8474417812, 0
mbuf: 256, 0, 2809, 478, 39414218470, 0
mbuf_cluster: 2048, 25600, 478, 30, 3968083430, 0
mbuf_jumbo_pagesize: 4096, 12800, 5, 189, 4233891254, 0
mbuf_jumbo_9k: 9216, 6400, 0, 0, 0, 0
mbuf_jumbo_16k: 16384, 3200, 0, 0, 0, 0
mbuf_ext_refcnt: 4, 0, 2293, 752, 2423677261, 0
ACL UMA zone: 388, 0, 0, 0, 0, 0
g_bio: 132, 0, 61, 432, 931727006, 0
ata_request: 192, 0, 22, 258, 332330252, 0
ata_composite: 184, 0, 0, 0, 0, 0
VNODE: 272, 0, 10517, 55003, 4539288165, 0
VNODEPOLL: 64, 0, 0, 0, 0, 0
NAMEI: 1024, 0, 0, 284, 10168836930, 0
S VFS Cache: 68, 0, 2686, 6498, 60599106, 0
L VFS Cache: 291, 0, 8204, 194, 4517020750, 0
DIRHASH: 1024, 0, 1512, 96, 122432, 0
NFSMOUNT: 472, 0, 0, 0, 0, 0
NFSNODE: 456, 0, 0, 0, 0, 0
pipe: 396, 0, 55, 225, 11941886, 0
ksiginfo: 80, 0, 841, 71, 842, 0
itimer: 220, 0, 0, 0, 0, 0
KNOTE: 68, 0, 656, 856, 229931990, 0
socket: 396, 12330, 3226, 3594, 138922304, 0
ipq: 32, 904, 0, 226, 161, 0
udpcb: 180, 12342, 14, 74, 10391180, 0
inpcb: 180, 12342, 3303, 3627, 68089475, 0
tcpcb: 464, 12328, 2791, 1769, 68089475, 0
tcptw: 52, 2520, 512, 2008, 22035374, 13453728
syncache: 100, 15366, 2, 193, 54172448, 0
hostcache: 76, 15400, 761, 39, 294588, 0
tcpreass: 20, 1690, 0, 507, 8678195, 0
sackhole: 20, 0, 10, 328, 26122974, 0
sctp_ep: 804, 12330, 0, 0, 0, 0
sctp_asoc: 1400, 40000, 0, 0, 0, 0
sctp_laddr: 24, 80040, 0, 145, 2, 0
sctp_raddr: 396, 80000, 0, 0, 0, 0
sctp_chunk: 92, 400008, 0, 0, 0, 0
sctp_readq: 76, 400000, 0, 0, 0, 0
sctp_stream_msg_out: 60, 400050, 0, 0, 0, 0
sctp_asconf_ack: 24, 400055, 0, 0, 0, 0
ripcb: 180, 12342, 0, 44, 64, 0
unpcb: 168, 12328, 56, 404, 60441584, 0
rtentry: 120, 0, 17, 47, 783, 0
Mountpoints: 668, 0, 8, 10, 9, 0
FFS inode: 132, 0, 10479, 12634, 4539212325, 0
FFS1 dinode: 128, 0, 0, 0, 0, 0
FFS2 dinode: 256, 0, 10479, 5781, 4539212325, 0
SWAPMETA: 276, 121576, 555, 10239, 749341, 0
IPFW dynamic rule: 108, 0, 2888, 1360, 56839876, 0
> Try sendfile off;
> What is the average disk queue length (check with gstat)?
gstat reports:
dT: 1.001s w: 1.000s
L(q) ops/s r/s kBps ms/r w/s kBps ms/w %busy Name
15 83 83 7674 194.7 0 0 0.0 99.7| ad7
15 83 83 7674 194.7 0 0 0.0 99.7| ad7s1
0 22 0 0 0.0 22 576 1.6 3.3| ad12
0 0 0 0 0.0 0 0 0.0 0.0| ad7s1c
15 83 83 7674 194.8 0 0 0.0 99.7| ad7s1d
0 22 0 0 0.0 22 576 1.6 3.3| ad12s1
0 0 0 0 0.0 0 0 0.0 0.0| ad12s1a
0 0 0 0 0.0 0 0 0.0 0.0| ad12s1b
0 0 0 0 0.0 0 0 0.0 0.0| ad12s1c
0 0 0 0 0.0 0 0 0.0 0.0| ad12s1d
0 0 0 0 0.0 0 0 0.0 0.0| ad12s1e
0 0 0 0 0.0 0 0 0.0 0.0| ad12s1f
0 0 0 0 0.0 0 0 0.0 0.0| ad12s1g
0 22 0 0 0.0 22 576 1.6 3.3| ad12s1h
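For reference, the "sendfile off;" suggestion above is a one-line change in nginx.conf. A minimal sketch, assuming a typical static-file setup (the paths and buffer sizes are placeholders, not taken from this thread):

    http {
        sendfile        off;     # read files through userland buffers instead of sendfile(2)
        output_buffers  2 512k;  # with sendfile off, these buffers carry the file data

        server {
            listen  80;
            location /files/ {
                root  /storage;  # hypothetical path to the large files
            }
        }
    }

After reloading the configuration, the sfbufs counters in netstat -m should stop growing for this traffic, since sendfile(2) is no longer being called (assuming nothing else on the box uses it).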
> Set the number of workers to roughly twice the average queue length.
As I understand it, the queue length on the disk holding the files is around 15, so I should set about 30 workers.
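If that reading is right, the change is just the worker_processes directive in nginx.conf. A sketch (the events block is shown only for context; keep whatever is configured now):

    # roughly 2x the observed disk queue length of ~15, so that workers
    # blocked in disk I/O do not stall all the other connections
    worker_processes  30;

    events {
        worker_connections  1024;  # assumed value
    }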
> Make sure nginx is not actively writing proxied content to disk
> (/var/tmp/nginx, if I'm not mistaken) - better raise the buffers.
>
Could you go into more detail on this point? How do I determine how actively it writes there, and which buffers need to be raised?
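For context, a sketch of the relevant pieces, under the assumption that this is about nginx's proxy buffering (all values below are placeholders): how actively nginx spills responses to disk can be seen by watching the size of the proxy temp directory (/var/tmp/nginx is the path suggested above; the actual location depends on proxy_temp_path and compile-time defaults) and by looking for "buffered to a temporary file" warnings in the error log. The buffers to raise are the proxy ones:

    location / {
        proxy_pass  http://backend;       # hypothetical upstream

        proxy_buffer_size        16k;     # buffer for the response header
        proxy_buffers            8 64k;   # in-memory buffers for the body
        proxy_busy_buffers_size  128k;
        # responses that do not fit in the buffers above are written to the
        # temp directory; setting this to 0 disables disk buffering entirely
        proxy_max_temp_file_size 0;
    }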