MapReduce >> mail # user >> Understand dfs.datanode.max.xcievers


Dhanasekaran Anbalagan 2013-03-17, 13:12
Re: Understand dfs.datanode.max.xcievers
The dfs.datanode.max.xcievers value should be set across the cluster rather than
on a particular DataNode.
It is the upper bound on the number of files that the DataNode will
serve at any one time.
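
A minimal sketch of how this would look in hdfs-site.xml, deployed to every DataNode so the limit is uniform across the cluster (the property name dfs.datanode.max.transfer.threads is the newer name for dfs.datanode.max.xcievers, as mentioned below; the value 4096 is just the one from this thread, not a recommendation):

```xml
<!-- hdfs-site.xml (same file pushed to all DataNodes in the cluster) -->
<property>
  <!-- newer name for the deprecated dfs.datanode.max.xcievers -->
  <name>dfs.datanode.max.transfer.threads</name>
  <!-- upper bound on concurrent data-transfer threads per DataNode -->
  <value>4096</value>
</property>
```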

2013/3/17 Dhanasekaran Anbalagan <[EMAIL PROTECTED]>

> Hi Guys,
>
>  We have a few DataNodes in an inconsistent state; they frequently go
> into a dead state because of a DataXceiver error.
>
> In the console we see this error:
> *INFO hdfs.DFSClient: Could not obtain block*
> blk_XXXXXXXXXXXXXXXXXXXXXX_YYYYYYYY from any node: java.io.IOException:
>
> Currently we have this value:
> dfs.datanode.max.xcievers, dfs.datanode.max.transfer.threads = 4096
>
> We are currently monitoring our cluster's JVMs using OpenNMS:
> http://i.imgur.com/E1u0fev.png
>
> We have not hit the 4096 limit at all this week, but that particular
> node frequently goes into a dead state. Why? The log says DataXceiver Error.
>
>
>  Should the dfs.datanode.max.xcievers value be set on a particular node or
> across the cluster? Please guide me.
>
> -Dhanasekaran.
>
> Did I learn something today? If not, I wasted it.
>