HDFS >> mail # user >> Retry question


Re: Retry question
HDFS was built precisely with these concerns in mind.
If you are reading a 60 GB file and a rack goes down, the client
will transparently switch to another copy of the affected block,
based on your replication factor.
A block can also become unavailable due to corruption; in that case
it can be re-replicated to other live machines, and the fsck utility
can be used to find and report such errors.
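
To make the failover behavior concrete: the HDFS client (including libhdfs, which wraps the Java DFSClient over JNI) already falls back to another replica when a read fails, so an application-level retry is usually unnecessary. If you still want a belt-and-braces retry around hdfsRead(), a minimal sketch might look like the following. Note that flaky_read() is a hypothetical stand-in for the real hdfsRead(fs, file, buf, len) call, used here only so the example is self-contained:

```c
#include <stdio.h>
#include <string.h>

/* Simulated failure budget: flaky_read() fails this many times,
 * then succeeds. Purely illustrative; with real libhdfs you would
 * call hdfsRead() and failures would come from the cluster. */
int fail_budget = 2;

static int flaky_read(char *buf, int len) {
    if (fail_budget-- > 0)
        return -1;              /* simulate a dead replica/rack */
    memset(buf, 'x', len);      /* pretend we read `len` bytes */
    return len;
}

/* Application-level retry wrapper: try up to max_attempts, logging
 * each failure, before giving up. Replace flaky_read() with
 * hdfsRead(fs, file, buf, len) to use this against a real cluster. */
int read_with_retry(char *buf, int len, int max_attempts) {
    for (int attempt = 1; attempt <= max_attempts; attempt++) {
        int n = flaky_read(buf, len);
        if (n >= 0)
            return n;           /* success: bytes read */
        fprintf(stderr, "read failed (attempt %d/%d), retrying...\n",
                attempt, max_attempts);
        /* a real wrapper might sleep/back off here */
    }
    return -1;                  /* all attempts exhausted */
}
```

Calling read_with_retry(buf, sizeof buf, 5) then returns the byte count on success or -1 once every attempt has failed.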

Regards

On 3/18/2012 9:46 AM, Rita wrote:
> My replication factor is 3. If I were reading data through libhdfs
> using C, is there a retry method? I am reading a 60 GB file; what would
> happen if a rack goes down and the next block isn't available? Will the
> API retry? Is there a way to configure this option?
>
>
> --
> --- Get your facts first, then you can distort them as you please.--

--
Marcos Luis Ortíz Valmaseda (@marcosluis2186)
  Data Engineer at UCI
  http://marcosluis2186.posterous.com

10th ANNIVERSARY OF THE FOUNDING OF THE UNIVERSITY OF INFORMATICS SCIENCES...
CONNECTED TO THE FUTURE, CONNECTED TO THE REVOLUTION

http://www.uci.cu
http://www.facebook.com/universidad.uci
http://www.flickr.com/photos/universidad_uci