HBase >> mail # user >> Too Many CLOSE_WAIT make performance down


Too many CLOSE_WAIT connections drag performance down
Version:
HBase: 0.94.3
HDFS: 0.20.*

There are too many CLOSE_WAIT connections from the RegionServer (RS) to the
DataNode (DN); I found the count exceeds 30,000.
After changing the log level of 'org.apache.hadoop.ipc.HBaseServer.trace' to
DEBUG, I checked the performance:

> Call #2649932; Served: HRegionInterface#get queueTime=0 processingTime=284
> contents=1 Get, 86 bytes
>
> So the conclusion is that when the DataNode's server ports are occupied by
normal or stale connections, read/write performance goes down.
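For anyone who wants to reproduce the count, one way is to filter netstat-style output for CLOSE_WAIT sockets toward the DataNode's data-transfer port (50010 is the default in HDFS 0.20.x). A minimal sketch; the here-doc just stands in for real `netstat -tan` output so the snippet runs anywhere:

```shell
# Count CLOSE_WAIT sockets whose remote end is port 50010 (default DN
# data-transfer port). In practice, pipe `netstat -tan` (or `ss -tan`)
# into the awk filter instead of the sample here-doc below.
awk '$6 == "CLOSE_WAIT" && $5 ~ /:50010$/ {n++} END {print n+0}' <<'EOF'
tcp        0      0 10.0.0.1:34567     10.0.0.2:50010     CLOSE_WAIT
tcp        0      0 10.0.0.1:34568     10.0.0.2:50010     CLOSE_WAIT
tcp        0      0 10.0.0.1:34569     10.0.0.2:50010     ESTABLISHED
EOF
```

On the sample input this prints 2; against live netstat output it gives the per-host count of stuck connections.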

According to the TCP/IP protocol, CLOSE_WAIT means the peer has closed its end
of the connection but the RS has not yet closed its own file descriptor. After
I restarted the RS gracefully, the problem went away. So my question is:

Can someone tell me under which conditions the RS will fail to close these file handles?
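For context, the CLOSE_WAIT semantics described above can be demonstrated with plain sockets in a few lines of Python (nothing HBase-specific, just a sketch): after the peer closes, recv() sees EOF immediately, but the local fd, and with it the CLOSE_WAIT state, lingers until close() is called explicitly.

```python
import socket

# Set up a loopback connection: srv accepts, cli connects.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
cli = socket.socket()
cli.connect(srv.getsockname())
conn, _ = srv.accept()

cli.close()                      # peer sends FIN; conn enters CLOSE_WAIT
assert conn.recv(1024) == b""    # EOF is visible immediately...
assert conn.fileno() >= 0        # ...but the fd stays open, counting against
                                 # the process fd limit, until we close it
conn.close()                     # only now does the socket leave CLOSE_WAIT
srv.close()
```

This is why the fix is always in the application: the kernel cannot leave CLOSE_WAIT on its own, so a leaked connection stays stuck until the owning process closes the fd or exits.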

Any ideas would be appreciated.

Thanks!

--
Bing Jiang
Tel:(86)134-2619-1361
weibo: http://weibo.com/jiangbinglover
BLOG: www.binospace.com
BLOG: http://blog.sina.com.cn/jiangbinglover
Focus on distributed computing, HDFS/HBase