

Re: a question about dfs.replication

Actually, my client side is already set to "2".

 

From: Azuryy Yu [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, July 02, 2013 12:40
To: [EMAIL PROTECTED]
Subject: Re: a question about dfs.replication

 

It's not an HDFS issue.

dfs.replication is a client-side configuration, not server-side, so you need to set it to '2' on your client side (where your application runs) before executing a command such as hdfs dfs -put, or before calling the HDFS API from a Java application.
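A minimal sketch of that client-side override in Java (assuming the standard Hadoop client API on the classpath; the local and HDFS file names here are hypothetical):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PutWithReplication {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Client-side override: dfs.replication is read by the writing
        // client, so this value wins over the namenode's default of 3.
        conf.set("dfs.replication", "2");

        FileSystem fs = FileSystem.get(conf);
        // Files written through this FileSystem get replication 2.
        fs.copyFromLocalFile(new Path("hello005.txt"),
                             new Path("/test3/hello005.txt"));
        fs.close();
    }
}
```

The same override works on the command line without touching any config file: hdfs dfs -D dfs.replication=2 -put hello005.txt /test3/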

 

On Tue, Jul 2, 2013 at 12:25 PM, Francis.Hu <[EMAIL PROTECTED]> wrote:

Thanks all of you, I just get the problem fixed through the command:

hdfs dfs -setrep -R -w 2 /

 

Is that an issue of HDFS? Why do I need to execute a command manually to tell Hadoop the replication factor even though it is set in hdfs-site.xml?

 

Thanks,

Francis.Hu

 

From: Francis.Hu [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, July 02, 2013 11:30
To: [EMAIL PROTECTED]
Subject: Re: Re: a question about dfs.replication

 

Yes, it correctly returns 2 after running: hdfs getconf -confkey dfs.replication

 

 

But in the web page it shows 3, as below:

 

From: yypvsxf19870706 [mailto:[EMAIL PROTECTED]]
Sent: Monday, July 01, 2013 23:24
To: [EMAIL PROTECTED]
Subject: Re: Re: a question about dfs.replication

 

Hi

 

    Could you please get the property value by running: hdfs getconf -confkey dfs.replication

Sent from my iPhone

On 2013-7-1, at 15:51, Francis.Hu <[EMAIL PROTECTED]> wrote:

 

Actually, my Java client is running with the same configuration as Hadoop's, and dfs.replication is already set to 2 in my Hadoop configuration.

So I think the default dfs.replication is already overridden by my setting in hdfs-site.xml, but it seems it doesn't take effect even though I overrode the parameter explicitly.

 

 

From: Емельянов Борис [mailto:[EMAIL PROTECTED]]
Sent: Monday, July 01, 2013 15:18
To: [EMAIL PROTECTED]
Subject: Re: a question about dfs.replication

 

On 01.07.2013 10:19, Francis.Hu wrote:

Hi, All

 

I am installing a cluster with Hadoop 2.0.5-alpha. I have one namenode and two datanodes, and dfs.replication is set to 2 in hdfs-site.xml. After all configuration work was done, I started all nodes and saved a file into HDFS through a Java client. Now I can access the HDFS web page at x.x.x.x:50070 and see the file listed there.

My question is: the Replication column in the HDFS web page shows 3, not 2. Does anyone know what the problem is?

 

---Actual setting in hdfs-site.xml:

<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>

 

After that, I ran an fsck command to check the file:

hdfs fsck /test3/

The result of above command:

/test3/hello005.txt:  Under replicated BP-609310498-192.168.219.129-1372323727200:blk_-1069303317294683372_1006. Target Replicas is 3 but found 2 replica(s).

Status: HEALTHY

 Total size:    35 B

 Total dirs:    1

 Total files:   1

 Total blocks (validated):      1 (avg. block size 35 B)

 Minimally replicated blocks:   1 (100.0 %)

 Over-replicated blocks:        0 (0.0 %)

 Under-replicated blocks:       1 (100.0 %)

 Mis-replicated blocks:         0 (0.0 %)

 Default replication factor:    2

 Average block replication:     2.0

 Corrupt blocks:                0

 Missing replicas:              1 (33.333332 %)

 Number of data-nodes:          3

 Number of racks:               1

FSCK ended at Sat Jun 29 16:51:37 CST 2013 in 6 milliseconds

 

 

Thanks,

Francis Hu

 

If I'm not mistaken, the "dfs.replication" parameter in the config sets only the default replication factor, which can be overridden when putting a file into HDFS.
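That per-file override can be illustrated with the Hadoop Java client API (a sketch assuming a reachable HDFS; the path and file content are hypothetical):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CreateWithReplication {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path p = new Path("/test3/hello006.txt");

        // Explicit per-file replication (the short argument), regardless
        // of what dfs.replication says on either client or server.
        FSDataOutputStream out =
                fs.create(p, true, 4096, (short) 2, fs.getDefaultBlockSize(p));
        out.writeBytes("hello");
        out.close();

        // Files that already exist can be changed afterwards; this is
        // what the hdfs dfs -setrep shell command does.
        fs.setReplication(p, (short) 2);
        fs.close();
    }
}
```

So hdfs-site.xml only supplies the default used at write time by whichever client did the write, which is why files created earlier by a client carrying the default of 3 needed -setrep to fix.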

 
