Nginx-ru mailing list archive (nginx-ru@sysoev.ru)
Re: connect -1 errno 36, sendfile -1 errno 35, LA and server stalls
On Mon, Sep 06, 2010 at 05:10:22PM +0400, cronfy wrote:
> >> In normal operation not even a thousandth of a second passes between
> >> the stat and its return, but here it's hundredths. Oddly, gstat stays
> >> all green the whole time — it shows ordinary or even below-normal disk
> >> load. Sometimes it even looks like this:
> >> 26333 nginx 14.761612 CALL gettimeofday(0x7fffffffe390,0)
> >> 26333 nginx 14.818053 RET gettimeofday 0
> >> I'd like to understand what this could be, whether it's related to
> >> nginx, and where to dig further. What else could I run for diagnostics?
> > What does "top -S" show?
>
> Here is top -PS at the moment of a stall (4 nginx workers total, all of them in the top):
>
> last pid: 79281; load averages: 98.37, 46.06, 20.99
>
> up 1+14:20:14 16:46:47
> 1102 processes: 116 running, 959 sleeping, 6 zombie, 20 waiting, 1 lock
> CPU 0: 8.6% user, 0.0% nice, 91.4% system, 0.0% interrupt, 0.0% idle
> CPU 1: 11.2% user, 0.0% nice, 88.8% system, 0.0% interrupt, 0.0% idle
> CPU 2: 12.3% user, 0.0% nice, 87.7% system, 0.0% interrupt, 0.0% idle
> CPU 3: 7.5% user, 0.0% nice, 92.2% system, 0.0% interrupt, 0.4% idle
> CPU 4: 10.8% user, 0.0% nice, 89.2% system, 0.0% interrupt, 0.0% idle
> CPU 5: 8.2% user, 0.0% nice, 91.4% system, 0.0% interrupt, 0.4% idle
> CPU 6: 8.2% user, 0.0% nice, 91.8% system, 0.0% interrupt, 0.0% idle
> CPU 7: 7.5% user, 0.0% nice, 92.5% system, 0.0% interrupt, 0.0% idle
> CPU 8: 15.7% user, 0.0% nice, 82.5% system, 1.9% interrupt, 0.0% idle
> CPU 9: 17.5% user, 0.0% nice, 82.5% system, 0.0% interrupt, 0.0% idle
> CPU 10: 14.1% user, 0.4% nice, 85.5% system, 0.0% interrupt, 0.0% idle
> CPU 11: 18.3% user, 0.0% nice, 81.7% system, 0.0% interrupt, 0.0% idle
> CPU 12: 11.2% user, 0.4% nice, 88.4% system, 0.0% interrupt, 0.0% idle
> CPU 13: 17.9% user, 0.0% nice, 82.1% system, 0.0% interrupt, 0.0% idle
> CPU 14: 14.9% user, 0.0% nice, 84.7% system, 0.0% interrupt, 0.4% idle
> CPU 15: 10.1% user, 0.0% nice, 89.6% system, 0.0% interrupt, 0.4% idle
> Mem: 3381M Active, 11G Inact, 2846M Wired, 369M Cache, 1851M Buf, 246M Free
> Swap: 14G Total, 1376K Used, 14G Free
>
> PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
> 1138 mysql 147 4 0 1513M 1271M sbwait f 5:53 31.93% mysqld
> 74973 ********** 1 -4 0 159M 69324K RUN 4 0:12 14.89% httpd
> 76346 www 1 -4 0 54276K 36940K RUN a 0:13 14.70% nginx
> 77028 ******* 1 -4 0 140M 52796K CPU11 f 0:09 14.36% httpd
> 76348 www 1 99 0 54276K 36932K RUN 7 0:12 13.87% nginx
> 76349 www 1 98 0 54276K 36940K RUN f 0:12 13.87% nginx
> 76347 www 1 -4 0 54276K 36868K ufs e 0:12 13.48% nginx
> 78733 ********** 1 98 0 136M 49648K RUN 6 0:03 10.89% httpd
> 77954 ********* 1 -4 0 147M 58316K ufs 1 0:04 10.60% httpd
> 78145 ********* 1 -4 0 146M 57656K RUN 7 0:04 9.77% httpd
> 78430 ******* 1 98 0 134M 47484K RUN c 0:03 9.67% httpd
> 78768 ********** 1 -4 0 131M 45084K RUN 5 0:02 9.57% httpd
> 78851 diradmin 1 98 0 87648K 20072K RUN e 0:02 9.47% php
> 78552 diradmin 1 -4 0 90720K 21788K RUN 1 0:02 9.28% php
> 78850 diradmin 1 -4 0 87648K 19468K RUN f 0:02 8.50% php
> 78619 ********* 1 97 0 135M 49088K RUN b 0:02 8.06% httpd
> 78971 diradmin 1 -4 0 85600K 18292K RUN e 0:01 7.96% php
> 78254 ****** 1 -4 0 149M 59796K RUN 6 0:03 7.86% httpd
> 95798 ***** 1 54 0 120M 41320K CPU0 0 2:59 7.47% prefork
> 2010 ********* 1 51 0 115M 34644K select 6 6:47 7.37% prefork
> 78769 ********* 1 -4 0 133M 47036K RUN b 0:02 7.37% httpd
> 78897 ******* 1 -4 0 132M 44824K CPU6 6 0:01 7.18% httpd
> 79140 ****** 1 97 0 133M 46932K RUN f 0:01 7.08% httpd
> 78731 ********** 1 -4 0 134M 47640K RUN 4 0:02 6.79% httpd
> 78617 ******* 1 -4 0 142M 53644K RUN 8 0:02 6.69% httpd
> 78866 ******* 1 97 0 130M 44432K RUN 1 0:01 6.49% httpd
> 78849 ****** 1 -4 0 136M 49308K CPU1 1 0:01 6.40% httpd
> 3487 ***** 1 55 0 120M 42348K select b 2:47 6.15% prefork
> 79051 ********** 1 97 0 132M 46244K RUN c 0:01 6.15% httpd
> 79089 ********** 1 -4 0 127M 41788K RUN 8 0:01 6.15% httpd
>
>
> For comparison, when everything is fine:
>
> last pid: 33540; load averages: 3.26, 8.50, 14.50
>
> up 1+14:35:59 17:02:32
> 784 processes: 18 running, 744 sleeping, 2 zombie, 20 waiting
> CPU 0: 37.8% user, 0.7% nice, 10.1% system, 0.4% interrupt, 50.9% idle
> CPU 1: 1.1% user, 0.0% nice, 1.5% system, 0.0% interrupt, 97.4% idle
> CPU 2: 23.6% user, 0.0% nice, 9.0% system, 0.0% interrupt, 67.4% idle
> CPU 3: 0.0% user, 0.0% nice, 2.3% system, 0.0% interrupt, 97.7% idle
> CPU 4: 15.7% user, 0.0% nice, 5.6% system, 0.4% interrupt, 78.3% idle
> CPU 5: 5.2% user, 0.0% nice, 3.0% system, 0.0% interrupt, 91.8% idle
> CPU 6: 13.5% user, 0.0% nice, 7.1% system, 0.0% interrupt, 79.4% idle
> CPU 7: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
> CPU 8: 1.9% user, 0.0% nice, 0.7% system, 2.2% interrupt, 95.1% idle
> CPU 9: 10.9% user, 0.0% nice, 2.2% system, 0.0% interrupt, 86.9% idle
> CPU 10: 1.1% user, 0.0% nice, 0.0% system, 0.8% interrupt, 98.1% idle
> CPU 11: 3.0% user, 0.0% nice, 1.1% system, 0.0% interrupt, 95.9% idle
> CPU 12: 0.4% user, 0.0% nice, 1.9% system, 0.0% interrupt, 97.8% idle
> CPU 13: 0.0% user, 0.0% nice, 0.4% system, 0.0% interrupt, 99.6% idle
> CPU 14: 1.9% user, 0.0% nice, 2.2% system, 0.0% interrupt, 95.9% idle
> CPU 15: 3.0% user, 0.0% nice, 0.4% system, 0.0% interrupt, 96.6% idle
> Mem: 2640M Active, 11G Inact, 2696M Wired, 547M Cache, 1851M Buf, 716M Free
> Swap: 14G Total, 1376K Used, 14G Free
>
> PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
> 10 root 1 171 ki31 0K 16K CPU15 f 35.6H 96.97% idle: cpu15
> 15 root 1 171 ki31 0K 16K CPU10 a 35.0H 94.87% idle: cpu10
> 12 root 1 171 ki31 0K 16K CPU13 d 35.6H 94.68% idle: cpu13
> 22 root 1 171 ki31 0K 16K CPU3 3 32.9H 94.09% idle: cpu3
> 18 root 1 171 ki31 0K 16K CPU7 7 34.0H 93.90% idle: cpu7
> 13 root 1 171 ki31 0K 16K CPU12 c 34.5H 93.65% idle: cpu12
> 11 root 1 171 ki31 0K 16K CPU14 e 35.2H 93.46% idle: cpu14
> 17 root 1 171 ki31 0K 16K CPU8 8 35.2H 91.70% idle: cpu8
> 14 root 1 171 ki31 0K 16K CPU11 b 33.8H 90.48% idle: cpu11
> 20 root 1 171 ki31 0K 16K CPU5 5 31.9H 89.89% idle: cpu5
> 24 root 1 171 ki31 0K 16K CPU1 1 32.5H 88.28% idle: cpu1
> 16 root 1 171 ki31 0K 16K CPU9 9 31.2H 83.79% idle: cpu9
> 19 root 1 171 ki31 0K 16K CPU6 6 29.2H 82.67% idle: cpu6
> 21 root 1 171 ki31 0K 16K RUN 4 28.2H 70.07% idle: cpu4
> 23 root 1 171 ki31 0K 16K RUN 2 24.8H 69.19% idle: cpu2
> 25 root 1 171 ki31 0K 16K CPU0 0 23.7H 59.96% idle: cpu0
> 1138 mysql 147 44 0 1484M 1268M ucond a 5:55 12.16% mysqld
> 71631 ***** 1 4 0 119M 41332K accept 9 3:35 2.98% prefork
> 1616 ****** 1 4 0 109M 31492K accept 4 16:35 1.86% prefork
> 95798 ***** 1 4 0 119M 41340K accept 9 3:32 1.56% prefork
> 3487 ***** 1 4 0 120M 42348K accept 8 3:20 1.46% prefork
> 85641 ***** 1 4 0 119M 41440K accept 4 3:29 1.37% prefork
> 18216 ***** 1 4 0 121M 43612K accept f 3:17 1.37% prefork
> 89671 ***** 1 4 0 119M 41224K accept 2 3:28 1.27% prefork
> 37 root 1 -68 - 0K 16K WAIT 8 27:14 1.17% irq256: em0
> 2167 ******** 1 4 0 105M 27748K accept 8 30:12 0.88% prefork
> 2164 ******** 1 4 0 104M 26792K accept e 30:34 0.78% prefork
> 1614 ******** 1 4 0 111M 33128K accept 5 16:31 0.68% prefork
> 1932 ******** 1 4 0 103M 26244K accept 2 9:23 0.68% prefork
> 1852 ******** 1 4 0 102M 24700K accept 6 16:39 0.49% prefork
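[Editorial aside, not part of the original thread: the slow-stat symptom quoted above can also be measured from user space with a plain timing loop, without attaching ktrace. A minimal sketch, assuming any Python 3 interpreter on the host; the helper name is made up for illustration:]

```python
import os
import time

def max_stat_latency(path, iterations=1000):
    """Return the worst-case wall-clock time (seconds) of a single os.stat() call."""
    worst = 0.0
    for _ in range(iterations):
        t0 = time.monotonic()
        os.stat(path)
        worst = max(worst, time.monotonic() - t0)
    return worst

if __name__ == "__main__":
    # During a stall like the one quoted above, this would jump to
    # hundredths of a second instead of microseconds.
    print(f"worst stat() latency: {max_stat_latency('.') * 1e6:.1f} us")
```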
Looks like lock contention somewhere in the kernel on 16 processors.
Judging by the fact that stat() calls are slowing down, it may be
something filesystem-related (I recall reading that it has scalability
problems on many processors). You could try
http://sysoev.ru/nginx/docs/http/ngx_http_core_module.html#open_file_cache
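[Editorial note: the open_file_cache directive caches open file descriptors and stat() results per worker, so nginx stops re-stat()ing files on every request — which is exactly the contended path here. A minimal configuration sketch; the values are illustrative, not from this thread:]

```nginx
http {
    # Cache up to 10000 open descriptors / stat() results;
    # drop entries not touched for 60s.
    open_file_cache          max=10000 inactive=60s;
    # Re-validate a cached entry after 30s.
    open_file_cache_valid    30s;
    # Keep an entry across the inactive window only if used at least twice.
    open_file_cache_min_uses 2;
    # Also cache stat() failures (e.g. file-not-found) to avoid repeat lookups.
    open_file_cache_errors   on;
}
```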
--
Игорь Сысоев
http://sysoev.ru
_______________________________________________
nginx-ru mailing list
nginx-ru@xxxxxxxxx
http://nginx.org/mailman/listinfo/nginx-ru