Re: namenode crash in centos. can anybody recommend jdk ?
On Mon, Aug 16, 2010 at 10:51 AM, Michael Thomas <[EMAIL PROTECTED]> wrote:
> We recently discovered the same thing happening to our SNN as well.  Heap too
> small == 100% cpu utilization and no checkpoints.
>
> --Mike
>
> On 08/16/2010 06:35 AM, Brian Bockelman wrote:
>>
>> By the way,
>>
>> Our experience is that if you allocate too small a heap (we had a
>> cluster of about 600TB running with a 1GB heap), you do get some really
>> strange effects.  I can't recall any random crashes, but I do recall that
>> performing a "fsck" would effectively lock up the namenode JVM for minutes
>> while it spent an increasing amount of time in GC routines.
>>
>> Brian
>>
>> On Aug 16, 2010, at 4:49 AM, Steve Loughran wrote:
>>
>>> On 13/08/10 22:24, Allen Wittenauer wrote:
>>>>
>>>> On Aug 13, 2010, at 11:41 AM, Jinsong Hu wrote:
>>>>
>>>>>
>>>>> and run the namenode with the following jvm config
>>>>> -Xmx1000m  -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode
>>>>> -XX:+DisableExplicitGC -XX:+HeapDumpOnOutOfMemoryError
>>>>> -XX:+UseCompressedOops -XX:+DoEscapeAnalysis -XX:+AggressiveOpts  -Xmx2G
>>>>
>>>> Only 2g heap for NN?
>>>>
>>>>> This crashing problem doesn't happen with a small cluster of 4
>>>>> datanodes, but it happens with a cluster of 17 datanodes.
>>>>
>>>> Bump your heap up and try again.
>>>>
>>>>
>>>
>>> 1. I'd worry about some of the -XX options too; try turning off the more
>>> bleeding-edge features:
>>> -XX:+DoEscapeAnalysis -XX:+AggressiveOpts
>>>
>>> 2. There's an -Xmx1000m and an -Xmx2G, so you may only get a 1GB heap,
>>> which is less than I'd use for an IDE.
>>
>
>
>
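
As Steve points out, that option line sets -Xmx twice, so the namenode may
only be getting the 1000m heap. If a 2GB heap is what was intended, a
de-duplicated set of options (dropping the two experimental flags he names)
would look something like this; treat it as a starting point to test, not a
tested recommendation:

-Xmx2g -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode \
-XX:+DisableExplicitGC -XX:+HeapDumpOnOutOfMemoryError -XX:+UseCompressedOops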

NameNode needs more memory as the number of files grows. Even longer file
names use more memory. CompressedOops helps. The SNN needs more memory than
the NN. Remember that since Java garbage collects lazily on background
threads, load on the application usually drives memory usage up significantly.
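
For a rough sense of scale: every file, directory, and block is held as an
object on the NN heap, and the commonly quoted rule of thumb is on the order
of 150 bytes per object, more with long names. By that estimate, 10 million
files averaging two blocks each is about 30 million objects, or roughly 4.5GB
of heap before anything else is counted.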

You can do a Hadoop Proof of Concept with 2GB, but in production 8GB
is the minimum.
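
Concretely, something along these lines in conf/hadoop-env.sh is where I'd
start for production. The heap numbers are placeholders; size them to your
own namespace:

# Example only: adjust -Xmx to your file/block counts.
# NameNode: CMS, compressed oops, and a heap dump if it does blow up.
export HADOOP_NAMENODE_OPTS="-Xmx8g -XX:+UseConcMarkSweepGC \
  -XX:+UseCompressedOops -XX:+HeapDumpOnOutOfMemoryError $HADOOP_NAMENODE_OPTS"
# Give the secondary at least as much heap as the NN; it has to hold the
# whole image while it merges in the edits.
export HADOOP_SECONDARYNAMENODE_OPTS="-Xmx8g -XX:+UseConcMarkSweepGC \
  -XX:+UseCompressedOops $HADOOP_SECONDARYNAMENODE_OPTS"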