Re: Replication
Ajay,

Some of your client programs are creating files with replication
factor 3 explicitly (or are using a default configuration, which sets
it to 3, instead of your configuration). As I mentioned before, the
dfs.replication setting isn't cluster-wide; a client may override it
at will, either in its own configuration or by passing a replication
value directly through the Java API. You need to fix this on the
client side.
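
[Editor's note: a minimal sketch, not part of the original thread, of the two
client-side override paths described above. The path and values are made up
for illustration; the API calls are the standard org.apache.hadoop.fs ones
from the Hadoop releases of that era.]

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ReplicationOverrideSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Path 1: the client's own configuration. This only takes effect if
            // the client actually loads your hdfs-site.xml; a client running
            // with default configs silently falls back to dfs.replication = 3.
            conf.set("dfs.replication", "1");

            FileSystem fs = FileSystem.get(conf);

            // Path 2: an explicit per-file value passed through the Java API.
            // This wins over any dfs.replication setting, client- or cluster-side.
            short replication = 3; // hard-coded by the client program
            FSDataOutputStream out = fs.create(
                    new Path("/tmp/example.txt"),            // hypothetical path
                    true,                                     // overwrite
                    conf.getInt("io.file.buffer.size", 4096), // buffer size
                    replication,
                    fs.getDefaultBlockSize());
            out.writeBytes("written with replication 3, whatever the cluster default\n");
            out.close();
            fs.close();
        }
    }

Whichever value the cluster's hdfs-site.xml carries, the short passed to
create() is what the NameNode records as the target replication for that file.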

On Fri, Feb 17, 2012 at 11:42 AM, ajay.bhosle <[EMAIL PROTECTED]> wrote:
> Hi Denny/Harsh,
>
>
>
> Apart from the configuration I mentioned below, I also ran the setrep
> command to set the replication for existing files. But after some days, when
> I get the block location report, it again has a long list of errors saying
> “Target Replicas is 3 but found 2 replica(s).”
>
>
>
> Thanks
>
> Ajay
>
>
>
> From: Denny Ye [mailto:[EMAIL PROTECTED]]
> Sent: Friday, February 17, 2012 8:10 AM
> To: [EMAIL PROTECTED]
> Subject: Re: Replication
>
>
>
> Hi Ajay,
>
>     Does the file associated with that block ID
> (blk_4884628009930930282_210741) still exist in HDFS?
>
>     Your setting is correct; it applies to new files written to HDFS after
> the configuration takes effect.
>
>
>
> -Regards
>
> Denny Ye
>
>
>
> 2012/2/15 Harsh J <[EMAIL PROTECTED]>
>
> Ajay,
>
> Replication is a per-file property.
>
> To lower the replication factor of existing files, run (as an HDFS
> superuser, for good measure):
> $ hadoop fs -setrep -R 1 /
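
[Editor's note: a small sketch, not from the original thread, of the
programmatic counterpart to the setrep command above, for cases where a
client application needs to adjust an existing file itself. The path is
hypothetical.]

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SetReplicationSketch {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            // Roughly what `hadoop fs -setrep 1 <path>` does for one existing
            // file; the shell's -R flag walks the tree and repeats this per file.
            boolean accepted = fs.setReplication(new Path("/tmp/example.txt"), (short) 1);
            System.out.println("replication change accepted: " + accepted);
            fs.close();
        }
    }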
>
>
> On Wed, Feb 15, 2012 at 5:39 PM, ajay.bhosle <[EMAIL PROTECTED]>
> wrote:
>> Hi,
>>
>>
>>
>> I have set the replication factor to 1 in hdfs-site.xml, as given below:
>>
>>
>>
>> <property>
>>   <name>dfs.replication</name>
>>   <value>1</value>
>>   <description>Default block replication.
>>     The actual number of replications can be specified when the file is
>>     created. The default is used if replication is not specified at create
>>     time.
>>   </description>
>> </property>
>>
>>
>>
>> But I get the errors below when I retrieve the block report from HDFS. Can
>> someone please tell me if I am missing anything?
>>
>>
>>
>> “Under replicated blk_4884628009930930282_210741. Target Replicas is 3 but
>> found 2 replica(s).”
>>
>>
>>
>> Thanks
>>
>> Ajay
>>
>>
>
>
> --
> Harsh J
> Customer Ops. Engineer
> Cloudera | http://tiny.cloudera.com/about
>
>

--
Harsh J
Customer Ops. Engineer
Cloudera | http://tiny.cloudera.com/about