HBase >> mail # user >> Region has been OPENING for too long


Re: Region has been OPENING for too long
What did you set your max region size to?
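(Editor's note: one quick way to answer Mike's question on a CDH3-era node is to read the region size setting straight out of hbase-site.xml. A minimal sketch, assuming a Linux shell; the sample file, path, and the 256 MB value below are illustrative, not taken from this thread.)

```shell
# Illustrative only: extract hbase.hregion.max.filesize from an hbase-site.xml.
# A sample file is generated here so the snippet is self-contained; in practice
# point CONF at the real config (e.g. /etc/hbase/conf/hbase-site.xml).
CONF=/tmp/hbase-site-sample.xml
cat > "$CONF" <<'EOF'
<configuration>
  <property>
    <name>hbase.hregion.max.filesize</name>
    <value>268435456</value>
  </property>
</configuration>
EOF
# Print the <value> that follows the matching <name> element.
grep -A1 'hbase.hregion.max.filesize' "$CONF" | sed -n 's:.*<value>\(.*\)</value>.*:\1:p'
```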

Sent from a remote device. Please excuse any typos...

Mike Segel

On Oct 31, 2011, at 5:07 AM, Matthew Tovbin <[EMAIL PROTECTED]> wrote:

> Ted,  thanks for such a rapid response.
>
> You're right, we use hbase 0.90.3 from cdh3u1.
>
> So, I suppose I need to do the bulk loading in smaller batches then. Any
> other suggestions?
>
>
> Best regards,
>    Matthew Tovbin =)
>
>>
>>
>> I assume you're using HBase 0.90.x where HBASE-4015 isn't available.
>>
>>>> 5. And so on, till some of Slaves fail with "java.net.SocketException:
>> Too many open files".
>> Do you have some monitoring setup so that you can know the number of open
>> file handles ?
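(Editor's note: a minimal sketch of the kind of check Ted is suggesting, on Linux, using /proc. The `pgrep -f HRegionServer` pattern in the comment is a guess at the process name, not something stated in the thread.)

```shell
# Count open file descriptors for a process by listing /proc/<pid>/fd (Linux).
# We inspect the current shell ($$) so the snippet runs anywhere; on a region
# server you would use its PID instead, e.g. PID=$(pgrep -f HRegionServer).
PID=$$
echo "open fds for $PID: $(ls /proc/$PID/fd | wc -l)"
# The per-process ceiling that 'ulimit -n' controls:
grep 'Max open files' /proc/$PID/limits
```

Polling this number while the regions open would show whether the handle count climbs toward the limit before the SocketException fires.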
>>
>> Cheers
>>
>> On Sun, Oct 30, 2011 at 7:21 AM, Matthew Tovbin <[EMAIL PROTECTED]> wrote:
>>
>>> Hi guys,
>>>
>>>  I've bulk-loaded a solid amount of data (650GB, ~14,000 files) into HBase
>>> (1 master + 3 region servers), and now enabling the table results in the
>>> following behavior on the cluster:
>>>
>>>  1. Master says that opening started  -
>>>   "org.apache.hadoop.hbase.master.AssignmentManager: Handling
>>>  transition=RS_ZK_REGION_OPENING, server=slave..."
>>>  2. Slaves report about opening files in progress -
>>>  "org.apache.hadoop.hbase.regionserver.Store: loaded hdfs://...."
>>>  3. Then after ~10 mins the following error occurs on hmaster -
>>>   "org.apache.hadoop.hbase.master.AssignmentManager: Regions in transition
>>>  timed out / Region has been OPENING for too long, reassigning region=..."
>>>  4. More slaves report about opening files in progress -
>>>  "org.apache.hadoop.hbase.regionserver.Store: loaded hdfs://...."
>>>  5. And so on, till some of Slaves fail with "java.net.SocketException:
>>>  Too many open files".
>>>
>>>
>>> What I've already done to try to solve the issue (which did NOT help):
>>>
>>>  1. Set 'ulimit -n 65536' for hbase user
>>>  2. Set hbase.hbasemaster.maxregionopen=3600000 (1 hour) in hbase-site.xml
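(Editor's note: concretely, the second workaround is a hbase-site.xml fragment like the one below. The value follows the thread and the property name is as Matthew wrote it; verify it against your HBase version's defaults before relying on it.)

```xml
<!-- hbase-site.xml: region-open timeout raised to 1 hour, per the thread -->
<property>
  <name>hbase.hbasemaster.maxregionopen</name>
  <value>3600000</value>
</property>
```

For the first workaround, note that a plain `ulimit -n 65536` only affects the current shell; on common Linux setups the limit is made persistent for the hbase user with `hbase soft nofile 65536` and `hbase hard nofile 65536` lines in /etc/security/limits.conf (path illustrative), and the daemon must be restarted to pick it up.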
>>>
>>>
>>> What else can I try?!
>>>
>>>
>>> Best regards,
>>>   Matthew Tovbin =)
>>>