HBase user mailing list - Issues running a large MapReduce job over a complete HBase table


Re: Issues running a large MapReduce job over a complete HBase table
Gabriel Reid 2010-12-07, 07:15
Hi St.Ack,

The cluster is a set of 5 machines, each with 3GB of RAM and 1TB of
storage. One machine is doing duty as Namenode, HBase Master, HBase
Regionserver, Datanode, Job Tracker, and Task Tracker, while the other
four are all Datanodes, Regionservers, and Task Trackers.

I have a similar setup on another set of five machines with a similar
kind of data, but just a much lower volume of data (500GB), and the
same MapReduce jobs run with no problems there.

The OOME that I was getting on the datanodes when I set the
dfs.datanode.max.xcievers to 8192 was as follows:

2010-12-03 01:33:45,748 ERROR
org.apache.hadoop.hdfs.server.datanode.DataNode:
DatanodeRegistration(10.197.102.14:50010,
storageID=DS-89185505-10.197.102.14-50010-1289322537356,
infoPort=50075, ipcPort=50020):DataXceiveServer: Exiting due
to:java.lang.OutOfMemoryError: unable to create new native thread
        at java.lang.Thread.start0(Native Method)
        at java.lang.Thread.start(Thread.java:640)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:132)
        at java.lang.Thread.run(Thread.java:662)
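
As a side note on the error above: "unable to create new native thread"
means the JVM could not allocate another OS thread, not that the Java heap
was exhausted. Each DataNode xceiver is its own thread, and every thread
reserves stack space (typically 512KB-1MB by default, depending on JVM and
platform), so a ceiling of 8192 xceivers can ask for several GB of thread
stacks on a 3GB machine. A minimal sketch of the kind of tuning suggested
further down in this thread; the values are illustrative, not
recommendations:

    # hadoop-env.sh: shrink the DataNode's per-thread stack so more
    # xceiver threads fit in the memory these machines have
    export HADOOP_DATANODE_OPTS="-Xss256k $HADOOP_DATANODE_OPTS"

    <!-- hdfs-site.xml: keep the xceiver ceiling at a less extreme value -->
    <property>
      <name>dfs.datanode.max.xcievers</name>
      <value>4096</value>
    </property>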

While I'm certainly willing to try shifting memory allotment around
and/or changing the thread stack size, I'd also like to get a better
idea of whether this is due to a huge number of regions (just over ten
thousand) and, if so, whether it can be remedied (or could have been
remedied) by reducing the number of regions. Or is this just
something that can happen when you have a large quantity of data and
underpowered machines?

Also, any insight into how 10240 xceivers in total (2048 per machine
* five machines) are being used up while scanning over two column
families of a single table and writing two column families to
another table with 5 task trackers would be appreciated. It makes
me feel like I might be doing something wrong (i.e. leaking open
scanners, etc.), as I don't understand how so many xceivers can be
in use at the same time. On the other hand, my understanding of the
underlying workings is certainly limited, so that could also explain
my confusion about the situation.
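
A back-of-the-envelope tally, assuming (as I understand the HDFS behaviour
of that era) that every store file a region server holds open for reading
keeps a DataXceiver thread busy on some datanode, and assuming only one
store file per store:

    10,000 regions x 2 column families x 1 store file   ~ 20,000 open files
    20,000 open files spread over 5 datanodes           ~ 4,000 xceivers per node

That is already roughly double the 2,048 limit before counting the output
table's stores, the write-ahead logs, or the scan and write traffic itself,
so no scanner leak is required to reach these numbers.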

Thanks,

Gabriel
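
For reference, here is a minimal sketch of the standard TableMapReduceUtil
setup for this kind of job (scan two families of one table, write into
another). It is not the actual job from this thread; the class, table and
family names are made up, and it assumes the 0.89/0.90-era
org.apache.hadoop.hbase.mapreduce API. The point is that the framework
creates and closes the region scanners itself, so user code written this
way has no scanner to leak:

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.KeyValue;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.IdentityTableReducer;
    import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
    import org.apache.hadoop.hbase.mapreduce.TableMapper;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.hadoop.mapreduce.Job;

    public class FullTableCopyJob {

      // The framework feeds one Result per row; there is no user-managed
      // ResultScanner here, so nothing can be left open by the mapper.
      static class CopyMapper extends TableMapper<ImmutableBytesWritable, Put> {
        @Override
        protected void map(ImmutableBytesWritable row, Result columns, Context context)
            throws IOException, InterruptedException {
          Put put = new Put(columns.getRow());
          for (KeyValue kv : columns.raw()) {
            // Timestamps are not preserved in this sketch.
            put.add(kv.getFamily(), kv.getQualifier(), kv.getValue());
          }
          context.write(row, put);
        }
      }

      public static void main(String[] args) throws Exception {
        // On 0.20.x this would be "new HBaseConfiguration()" instead.
        Configuration conf = HBaseConfiguration.create();
        Job job = new Job(conf, "full-table-copy");
        job.setJarByClass(FullTableCopyJob.class);

        // Restrict the scan to the two source families; modest caching and
        // no block caching are the usual settings for full-table MR scans.
        Scan scan = new Scan();
        scan.addFamily(Bytes.toBytes("cf1"));
        scan.addFamily(Bytes.toBytes("cf2"));
        scan.setCaching(100);
        scan.setCacheBlocks(false);

        TableMapReduceUtil.initTableMapperJob("source_table", scan,
            CopyMapper.class, ImmutableBytesWritable.class, Put.class, job);
        TableMapReduceUtil.initTableReducerJob("dest_table",
            IdentityTableReducer.class, job);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }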

On Mon, Dec 6, 2010 at 7:21 PM, Stack <[EMAIL PROTECTED]> wrote:
> Tell us more about your cluster, Gabriel.  Can you take 1M from hbase
> and give it to HDFS?  Does that make a difference?  What kinda OOME is
> it?  What's the message?  You might tune the thread stack size and that
> might give you the headroom you need.  How many nodes are in your cluster
> and how much RAM do they have?
>
> Thanks,
> St.Ack
> P.S. Yes, bigger files could help but OOME in DN is a little unusual.
>
> On Mon, Dec 6, 2010 at 4:30 AM, Gabriel Reid <[EMAIL PROTECTED]> wrote:
>> Hi Lars,
>>
>> All of the max heap sizes are left at their default values (i.e. 1000MB).
>>
>> The OOMEs that I encountered in the data nodes occurred only when I put
>> dfs.datanode.max.xcievers unrealistically high (8192) in an effort to
>> escape the "xceiverCount X exceeds the limit of concurrent xcievers"
>> errors. The datanodes weren't really having hard crashes, but they
>> were getting OOMEs and becoming unusable until a restart.
>>
>>
>> - Gabriel
>>
>> On Mon, Dec 6, 2010 at 12:33 PM, Lars George <[EMAIL PROTECTED]> wrote:
>>> Hi Gabriel,
>>>
>>> What max heap do you give the various daemons? It is really odd that
>>> you see OOMEs; I would like to know what it has consumed. Are you
>>> saying the Hadoop DataNodes actually crash with the OOME?
>>>
>>> Lars
>>>
>>> On Mon, Dec 6, 2010 at 9:02 AM, Gabriel Reid <[EMAIL PROTECTED]> wrote:
>>>> Hi,
>>>>
>>>> We're currently running into issues with a MapReduce job over
>>>> a complete HBase table - we can't seem to find a balance between
>>>> having dfs.datanode.max.xcievers set too low (and getting
>>>> "xceiverCount X exceeds the limit of concurrent xcievers") and getting
>>>> OutOfMemoryErrors on datanodes.
>>>>
>>>> When trying to run a MapReduce job on the complete table we inevitably