Hadoop >> mail # user >> decommissioning datanodes


Chris Grier 2012-06-08, 18:46
Re: decommissioning datanodes
Your nodes need to be in the include file and the exclude file at the same time.
Do you use both files?
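To illustrate the advice above, here is a minimal sketch of an hdfs-site.xml carrying both lists. The include-file path is an assumption that mirrors the exclude path from the original post; only dfs.hosts.exclude appears in the original configuration.

```xml
<!-- Hypothetical sketch: dfs.hosts lists the nodes allowed to connect to
     the NameNode; dfs.hosts.exclude lists the nodes to decommission.
     Per the advice in this thread, a node should appear in BOTH files so
     it is decommissioned gracefully instead of being marked dead. -->
<property>
    <name>dfs.hosts</name>
    <value>/opt/hadoop/hadoop-1.0.0/conf/include</value>
</property>
<property>
    <name>dfs.hosts.exclude</name>
    <value>/opt/hadoop/hadoop-1.0.0/conf/exclude</value>
</property>
```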

On 6/8/12 11:46 AM, "Chris Grier" <[EMAIL PROTECTED]> wrote:

>Hello,
>
>I'm trying to figure out how to decommission datanodes. Here's what I do:
>
>In hdfs-site.xml I have:
>
><property>
>    <name>dfs.hosts.exclude</name>
>    <value>/opt/hadoop/hadoop-1.0.0/conf/exclude</value>
></property>
>
>Add to exclude file:
>
>host1
>host2
>
>Then I run 'hadoop dfsadmin -refreshNodes'. On the web interface the two
>nodes now appear in both the 'Live Nodes' and 'Dead Nodes' lists (but
>there's nothing in the 'Decommissioning Nodes' list). If I look at the
>datanode logs on host1 or host2, I still see blocks being copied in, and
>it does not appear that any additional replication is happening.
>
>What am I missing during the decommission process?
>
>-Chris
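For reference, a hedged sketch of what the decommission flow should look like on Hadoop 1.0.x from the command line. The commands come from the original post; the grep pattern is an assumption about the `-report` output format.

```shell
# Tell the NameNode to re-read the include/exclude files.
hadoop dfsadmin -refreshNodes

# A node that is decommissioning correctly should show a line like
# "Decommission Status : Decommission in progress" rather than
# dropping straight into the Dead Nodes list.
hadoop dfsadmin -report | grep -i "decommission"
```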
Chris Grier 2012-06-08, 19:15
Serge Blazhiyevskyy 2012-06-08, 19:19
Chris Grier 2012-06-08, 19:56