Chris Grier 2012-06-08, 18:46
Serge Blazhiyevskyy 2012-06-08, 18:56
Chris Grier 2012-06-08, 19:15

Re: decommissioning datanodes
Serge Blazhiyevskyy wrote:

Your config should be something like this (the include list is the 'dfs.hosts' parameter you mentioned):

<property>
    <name>dfs.hosts.exclude</name>
    <value>/opt/hadoop/hadoop-1.0.0/conf/exclude</value>
</property>

<property>
    <name>dfs.hosts</name>
    <value>/opt/hadoop/hadoop-1.0.0/conf/include</value>
</property>

Add to the exclude file:

host1
host2

Add to the include file:

host1
host2
(plus the rest of the nodes in the cluster)
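For reference, the whole sequence on the namenode would look something like the sketch below (a minimal sketch, assuming the paths above; host3 and host4 are hypothetical stand-ins for the rest of the cluster):

    # list the nodes to retire
    printf '%s\n' host1 host2 > /opt/hadoop/hadoop-1.0.0/conf/exclude

    # list every datanode, including the ones being retired
    printf '%s\n' host1 host2 host3 host4 > /opt/hadoop/hadoop-1.0.0/conf/include

    # tell the namenode to re-read both files
    hadoop dfsadmin -refreshNodes

Note that -refreshNodes only re-reads the two host files; if dfs.hosts or dfs.hosts.exclude is being added to hdfs-site.xml for the first time, the namenode needs a restart to pick the settings up.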
On 6/8/12 12:15 PM, "Chris Grier" <[EMAIL PROTECTED]> wrote:

>Do you mean the file specified by the 'dfs.hosts' parameter? That is not
>currently set in my configuration (the hosts are only specified in the
>slaves file).
>
>-Chris
>
>On Fri, Jun 8, 2012 at 11:56 AM, Serge Blazhiyevskyy <
>[EMAIL PROTECTED]> wrote:
>
>> Your nodes need to be in both the include and exclude files at the same time.
>>
>>
>> Do you use both files?
>>
>> On 6/8/12 11:46 AM, "Chris Grier" <[EMAIL PROTECTED]> wrote:
>>
>> >Hello,
>> >
>> >I'm trying to figure out how to decommission datanodes. Here's what
>> >I do:
>> >
>> >In hdfs-site.xml I have:
>> >
>> ><property>
>> >    <name>dfs.hosts.exclude</name>
>> >    <value>/opt/hadoop/hadoop-1.0.0/conf/exclude</value>
>> ></property>
>> >
>> >Add to exclude file:
>> >
>> >host1
>> >host2
>> >
>> >Then I run 'hadoop dfsadmin -refreshNodes'. On the web interface the
>> >two nodes now appear in both the 'Live Nodes' and 'Dead Nodes' lists
>> >(but there's nothing in the 'Decommissioning Nodes' list). If I look
>> >at the datanode logs running on host1 or host2, I still see blocks
>> >being copied in, and it does not appear that any additional
>> >replication is happening.
>> >
>> >What am I missing during the decommission process?
>> >
>> >-Chris
>>
>>
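A quick way to verify that the namenode actually picked up the change (a minimal sketch using the stock dfsadmin from Hadoop 1.0; not from the original thread):

    # print each datanode's name and decommission status
    # ("Normal", "Decommission in progress", or "Decommissioned")
    hadoop dfsadmin -report | grep -E 'Name:|Decommission Status'

If host1 and host2 still report "Normal" after -refreshNodes, the namenode never read the exclude file, which usually points at a wrong file path or a missing dfs.hosts/dfs.hosts.exclude setting.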
Chris Grier 2012-06-08, 19:56