HBase, mail # user - large machine configuration


Re: large machine configuration
Michael Segel 2012-05-18, 12:29
It's an upcoming presentation, if accepted. ;-)

The purpose of the talk is to discuss the design considerations and the ramifications of these decisions.

With respect to your cluster...

When you have 12-core boxes, you want 4GB per core at a minimum; that is 48GB. If you're running HBase, you will want to go to 64GB.

We recommend this for the following reason:

When tuning the system, you have 12 cores. Subtract a core for each major process (DN, TT, and RS) and you have 9 cores. Since they are hyper-threaded, it's like having 18 virtual cores.
That is 18 slots which you can use to set the number of mappers and reducers. If you plan on 2GB per JVM, that means you need to reserve 36GB of memory out of your 48.

Add in the memory for the DN, TT, and RS, and 48GB is cutting it close. You also have to consider the potential for oversubscribing the number of slots, based on monitoring your cluster.
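The slot and memory arithmetic above can be sketched as follows (a rough budget using the thread's rules of thumb, not measured values):

```python
# Rough memory budget for the 12-core node described above.
# All figures are the thread's rules of thumb, not measurements.
cores = 12
daemon_cores = 3                      # one core each for DataNode, TaskTracker, RegionServer
slots = (cores - daemon_cores) * 2    # hyper-threading: 9 physical -> 18 virtual cores
gb_per_task_jvm = 2
task_gb = slots * gb_per_task_jvm     # 18 slots * 2GB = 36GB for map/reduce JVMs

total_gb = 48
left_for_daemons_and_os = total_gb - task_gb  # only 12GB left for DN, TT, RS and the OS
print(slots, task_gb, left_for_daemons_and_os)
```

With only 12GB left for three daemons plus the OS, it is easy to see why 48GB is "cutting it close" once slots get oversubscribed.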

Without enough memory you swap, you run into trouble, and you start to see cascading failures.

If you look at memory prices... there really isn't a large delta between 48GB and 64GB.

You can run with 48GB, but it doesn't allow for much overhead. 64GB, because it's a nice round number....
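In MRv1 terms, the 18 slots and 2GB task heaps above would land in mapred-site.xml roughly like this (the 12/6 map/reduce split is an illustrative assumption; only the 18-slot total and the 2GB heap come from the thread):

```xml
<!-- Illustrative fragment only: the map/reduce split is an assumption. -->
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>12</value>
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>6</value>
</property>
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx2048m</value>
</property>
```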

HTH

-Mike

On May 18, 2012, at 5:42 AM, Rita wrote:

> Mike,
>
> Where can I find your talk?
>
> On Fri, May 11, 2012 at 7:51 AM, Rita <[EMAIL PROTECTED]> wrote:
>
>> Most of the operations I do with MR are exporting and importing
>> tables. Does that still require a lot of memory, and does it help to
>> allocate more memory for jobs like that?
>>
>> Yes, I have 12 cores also. Are there any HDFS/MR/HBase tuning tips for
>> this many processors?
>>
>> btw, 64GB is a lot for us :-)
>>
>>
>>
>> On Fri, May 11, 2012 at 7:29 AM, Michael Segel <[EMAIL PROTECTED]> wrote:
>>
>>> Funny, but this is part of a talk that I submitted to Strata....
>>>
>>> 64GB and HBase isn't necessarily a 'large machine'.
>>>
>>> If you're running with 12 cores, you're talking about a minimum of 48GB just
>>> for M/R.
>>> (4GB a core is a good rule of thumb.)
>>>
>>> Depending on what you want to do, you could set aside 8GB of heap and
>>> tune that, but even that might not be enough...
>>>
>>>
>>> On May 11, 2012, at 5:42 AM, Rita wrote:
>>>
>>>> Hello,
>>>>
>>>> While looking at,
>>> http://hbase.apache.org/book.html#important_configurations,
>>>> I noticed the large machine configuration section still isn't completed.
>>>> "Unfortunately", I am running on a large machine which has 64GB of memory
>>>> therefore I would need some help tuning my hbase/hadoop instance for
>>>> maximum performance. Can someone please shed light on what I should look
>>>> into?
>>>>
>>>>
>>>>
>>>> --
>>>> --- Get your facts first, then you can distort them as you please.--
>>>
>>>
>>
>>
>>
>
>
>