

Mohit Anchlia 2012-08-01, 18:41
Mohammad Tariq 2012-08-01, 19:52
Mohit Anchlia 2012-08-02, 04:01

Re: Region server failure question
That is correct; the client blocks and retries for a configurable number of times until the regions are available again.

Lars
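The retry behavior Lars describes is driven by client-side configuration. A minimal sketch of the relevant hbase-site.xml properties, with illustrative values (the defaults differ across HBase versions, so treat these numbers as examples, not recommendations):

```xml
<!-- Client-side retry settings (illustrative values) -->
<property>
  <!-- Maximum number of retries before the client gives up -->
  <name>hbase.client.retries.number</name>
  <value>10</value>
</property>
<property>
  <!-- Base pause between retries, in milliseconds; the client
       backs off by multiples of this value on successive attempts -->
  <name>hbase.client.pause</name>
  <value>1000</value>
</property>
```

Together these bound how long a client call can block while regions are being reassigned after a server failure.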

On Aug 2, 2012, at 7:01 AM, Mohit Anchlia <[EMAIL PROTECTED]> wrote:

> On Wed, Aug 1, 2012 at 12:52 PM, Mohammad Tariq <[EMAIL PROTECTED]> wrote:
>
>> Hello Mohit,
>>
>>     If the replication factor is set to some value > 1, then the data is
>> still present on some other node (perhaps within the same rack, or a
>> different one). And as far as this post is concerned, it tells us
>> about Write Ahead Logs, i.e. data that has not yet been written to
>> disk. This is different from the data written in HFiles, i.e. the
>> persistent data. If the region server fails while the data is still
>> being written, the data can be recovered by replaying the edits from
>> the WAL file. Please let me know if you disagree.
>>
>>
> I understand that there is no data loss. However, it looks like all
> the regions on a specific region server are unavailable until it comes back
> up. It looks like all client read and write calls for those key ranges
> would fail until a new region server splits the logs and brings the regions up.
>
>
>> Regards,
>>    Mohammad Tariq
>>
>>
>> On Thu, Aug 2, 2012 at 12:11 AM, Mohit Anchlia <[EMAIL PROTECTED]>
>> wrote:
>>> I was reading the blog
>>> http://www.cloudera.com/blog/2012/07/hbase-log-splitting/ and
>>> it looks like if a region server fails, then all the regions on that
>>> region server are unavailable until the regions are assigned to a
>>> different region server. Does it mean all the key ranges for the failed
>>> region server are unavailable for reads and writes until the regions are
>>> available on some other region server? If so, then how does one deal
>>> with failures while real-time data might be flowing into HBase?
>>
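The blocking-and-retrying behavior discussed in this thread can be sketched as a generic bounded-retry loop with exponential backoff. This is an illustrative stand-alone analogy, not the real HBase client API: the `put` method, the simulated `failuresLeft` counter, and the constants are all hypothetical, standing in for a region that is temporarily offline while it is reassigned.

```java
// Hedged sketch: a bounded retry loop with exponential backoff,
// analogous in spirit to what the HBase client does while a region
// is offline. All names here are illustrative, not HBase API.
public class RetrySketch {
    static final int MAX_RETRIES = 5;      // cf. hbase.client.retries.number
    static final long BASE_PAUSE_MS = 10;  // cf. hbase.client.pause

    // Simulated store that fails the first few calls, like a region
    // that is unavailable while log splitting and reassignment run.
    static int failuresLeft = 3;

    static void put(String row, String value) {
        if (failuresLeft > 0) {
            failuresLeft--;
            throw new RuntimeException("region temporarily offline");
        }
        // write succeeds once the "region" is back online
    }

    // Blocks the caller, retrying with exponential backoff until the
    // write succeeds or the retry budget is exhausted.
    static void putWithRetry(String row, String value)
            throws InterruptedException {
        for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
            try {
                put(row, value);
                return;
            } catch (RuntimeException e) {
                if (attempt == MAX_RETRIES - 1) {
                    throw e; // budget exhausted: surface the failure
                }
                Thread.sleep(BASE_PAUSE_MS * (1L << attempt));
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        putWithRetry("row1", "v1");
        System.out.println("write succeeded after retries");
    }
}
```

For real-time ingest, the practical upshot of the thread is that writers should budget for this blocking window (retry count times backoff) or buffer writes upstream until the regions are reassigned.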