Re: Retry question
Harsh J 2012-03-18, 21:17
libhdfs picks up its configuration from the same source as other
components (i.e., from hdfs-site.xml on the launch classpath).
Losing a complete rack, on a rack-aware and healthy HDFS cluster, will
not cause failures in reads or writes.
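As a sketch of what "the same configuration source" means in practice: client-side retry behavior is controlled by properties in the hdfs-site.xml on the client's classpath. The property names below come from the DFSClient of that era and are version-dependent; treat the values as illustrative defaults, not a recommendation.

```xml
<!-- hdfs-site.xml on the libhdfs client's classpath; names/defaults may
     vary by Hadoop version -->
<property>
  <name>dfs.client.max.block.acquire.failures</name>
  <value>3</value> <!-- read side: block-location lookup attempts before failing -->
</property>
<property>
  <name>dfs.client.block.write.retries</name>
  <value>3</value> <!-- write side: retries per block before aborting -->
</property>
```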
On Mon, Mar 19, 2012 at 2:14 AM, Rita <[EMAIL PROTECTED]> wrote:
> In the libhdfs how can I throttle the number of retries?
> On Sun, Mar 18, 2012 at 1:12 PM, Marcos Ortiz <[EMAIL PROTECTED]> wrote:
>> HDFS is precisely built with these concerns in mind.
>> If you are reading a 60 GB file and a rack goes down, the client
>> will transparently fall back to another replica, based on your
>> replication factor.
>> A block can also become unavailable due to corruption; in that case,
>> HDFS re-replicates it from a healthy copy to other live DataNodes,
>> and the fsck utility can report the error.
>> On 3/18/2012 9:46 AM, Rita wrote:
>>> My replication factor is 3. If I were reading data through libhdfs using
>>> C, is there a retry method? I am reading a 60 GB file; what will
>>> happen if a rack goes down and the next block isn't available? Will the
>>> API retry? Is there a way to configure this option?
>>> --- Get your facts first, then you can distort them as you please.--
>> Marcos Luis Ortíz Valmaseda (@marcosluis2186)
>> Data Engineer at UCI