HDFS >> mail # dev >> Re: Clarifications on excludedNodeList in DFSClient


Re: Clarifications on excludedNodeList in DFSClient
The excluded-node list is initialized per output stream created under a
DFSClient instance. That is, it starts empty for every new DFSOutputStream
returned by FS.create() and is maintained separately for each file created
under a common DFSClient.
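As a rough sketch of that behavior (class and field names here are illustrative, not the actual HDFS classes): a shared client hands out streams, and each stream owns its own, initially empty, excluded-node list.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: each output stream carries its own excluded-node
// list, so a fresh create() always starts with an empty list even when
// the client instance is shared across many files.
class SketchOutputStream {
    // Per-stream list: starts empty for every newly created file.
    final List<String> excludedNodes = new ArrayList<>();
}

class SketchClient {
    // Shared client; it does NOT hold the excluded-node list itself.
    SketchOutputStream create() {
        return new SketchOutputStream();
    }
}
```

Under this sketch, excluding a datanode on one stream has no effect on a second stream created by the same client.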

However, this could indeed be a problem for a long-running single-file
client, by which I assume you mean one that stays alive continuously and
keeps calling hflush().

Could you search for an existing JIRA, or file a new one, to address this,
so any discussion can be taken there? Please put up your thoughts there as
well.
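One way such a fix could work, sketched minimally and purely as an assumption (this is not actual DFSClient code, and the class name and expiry window are hypothetical): give each excluded node an expiry timestamp, so a node aged out of the list becomes eligible for new block allocations again.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a time-bounded exclude list: a node excluded at
// time t is ignored for block allocation only until t + windowMillis,
// after which it may be retried.
class ExpiringExcludeList {
    private final Map<String, Long> expiryByNode = new HashMap<>();
    private final long windowMillis;

    ExpiringExcludeList(long windowMillis) {
        this.windowMillis = windowMillis;
    }

    // Record the node with an expiry deadline instead of forever.
    void exclude(String node, long nowMillis) {
        expiryByNode.put(node, nowMillis + windowMillis);
    }

    // A node counts as excluded only while its deadline is in the future.
    boolean isExcluded(String node, long nowMillis) {
        Long expiry = expiryByNode.get(node);
        if (expiry == null) {
            return false;
        }
        if (nowMillis >= expiry) {
            expiryByNode.remove(node); // entry aged out; node may be retried
            return false;
        }
        return true;
    }
}
```

The time parameter is passed in explicitly here only to keep the sketch deterministic; real code would read the clock internally.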

On Mon, Nov 19, 2012 at 3:25 PM, Inder Pall <[EMAIL PROTECTED]> wrote:
> Folks,
>
> I was wondering whether there is any mechanism/logic to move a node back from
> the excludedNodeList to the live nodes so it can be tried for new block
> creation.
>
> In the current DFSClient code I do not see this. The use case: if the write
> timeout is reduced and certain nodes get aggressively added to the
> excludedNodeList, and the client application caches the DFSClient, then the
> excluded nodes never get tried again for the lifetime of the application
> caching that DFSClient.
>
>
> --
> - Inder
> "You are the average of the 5 people you spend the most time with"
>

--
Harsh J
Harsh J 2012-11-30, 11:50