How to start Data Replicated Blocks in HDFS manually.


Dhanasekaran Anbalagan 2013-02-25, 14:15
Nitin Pawar 2013-02-25, 14:20
Re: How to start Data Replicated Blocks in HDFS manually.
The problem may be the default replication factor, which is 3, so first
check in hdfs-site.xml whether a replication factor is specified or not. If it
is not, add that parameter and restart the cluster   ---> first option
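
For reference, the property in question is dfs.replication; setting it to 2
(to match your two datanodes) would look roughly like this in hdfs-site.xml:

  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>

Note this only sets the default for files written after the change; files
that already exist keep the replication factor they were created with.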

2nd option: change the replication factor of the root directory of HDFS to
2 using the following command

bin/hadoop dfs -setrep -R -w 2 /

this will change the replication factor to 2.
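
Afterwards, something like hadoop fsck should show the under-replicated
count going down (the exact output wording can vary between versions):

bin/hadoop fsck / | grep -i 'under-replicated'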

This problem may also be because you have two datanodes while the replication
factor is 3. Think of the scenario where you have two buckets but need to keep
3 copies of an object, one per bucket: the third copy has nowhere to go.
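
To confirm how many datanodes are actually live (and therefore how many
replicas HDFS can possibly place), a report like the following should help
(output format varies by version):

bin/hadoop dfsadmin -report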

Shashwat Shriparv

On Mon, Feb 25, 2013 at 7:50 PM, Nitin Pawar <[EMAIL PROTECTED]> wrote:

> did you start the cluster with replication factor 3 and later change it
> to 2?
> also, did you enable rack awareness in your configs, and are both nodes
> on the same rack?
>
>
>
>
> On Mon, Feb 25, 2013 at 7:45 PM, Dhanasekaran Anbalagan <
> [EMAIL PROTECTED]> wrote:
>
>> Hi Guys,
>>
>> We have a cluster with two data nodes. We configured a data replication
>> factor of two.
>> When I copy data to HDFS, the data is not fully replicated. It says "Number
>> of Under-Replicated Blocks: 15115".
>> How do I manually invoke data replication in HDFS?
>>
>> I restarted the cluster too, but it does not help.
>>
>> Please guide me guys.
>>
>> -Dhanasekaran.
>>
>> Did I learn something today? If not, I wasted it.
>>
>
>
>
> --
> Nitin Pawar
>