Re: The minimum memory requirements to datanode and namenode?
Nitin Pawar 2013-05-13, 06:00
Just one node running low on free memory does not mean your cluster is down.
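Also, note that in your 'free' output the '-/+ buffers/cache' line already shows what is really available: free + buffers + cached = 167 + 187 + 1136 ~= 1490 MB (the 1491 on that line), because buffers and page cache are reclaimable. The bare 167 MB 'free' figure understates what node3 actually has.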

Can you see your HDFS health on the NN UI?
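If the UI (by default at http://<namenode-host>:50070) is not reachable, the same information is available from the command line (a quick sketch, assuming the stock Hadoop 1.x CLI):

  hadoop dfsadmin -report          # live/dead datanodes and capacity
  hadoop dfsadmin -safemode get    # whether the NN is stuck in safe mode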

How much memory do you have on the NN? If there are no jobs running on the
cluster, then you can safely restart the datanode and tasktracker.
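For example, on a Hadoop 1.x node the worker daemons can be restarted one at a time with hadoop-daemon.sh (a sketch, assuming a standard tarball install with HADOOP_HOME set):

  $HADOOP_HOME/bin/hadoop-daemon.sh stop tasktracker
  $HADOOP_HOME/bin/hadoop-daemon.sh stop datanode
  $HADOOP_HOME/bin/hadoop-daemon.sh start datanode
  $HADOOP_HOME/bin/hadoop-daemon.sh start tasktracker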

Also, run the top command and figure out which processes are taking up the
memory, and for what purpose.
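For instance (a sketch; the ps flags assume GNU procps):

  ps aux --sort=-%mem | head -n 10   # ten biggest memory consumers
  # or run 'top' and press Shift+M to sort by resident memory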
On Mon, May 13, 2013 at 11:28 AM, sam liu <[EMAIL PROTECTED]> wrote:

> Nitin,
>
> In my cluster, the tasktracker and datanode have already been launched
> and are still running. But the free/available memory on node3 is now just
> 167 MB. Do you think that is the reason my Hadoop is unhealthy now (it
> does not return a result for the command 'hadoop dfs -ls /')?
>
>
> 2013/5/13 Nitin Pawar <[EMAIL PROTECTED]>
>
>> Sam,
>>
>> There is no formula for determining how much memory to give the
>> datanode and tasktracker. The formula that does exist is for how many
>> task slots you want to have on a machine.
>>
>> In my prior experience, we gave 512 MB of memory each to the datanode
>> and tasktracker.
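>> As a rough back-of-the-envelope sketch (the numbers are illustrative
>> assumptions, not a recommendation): with node3's 3834 MB of RAM,
>> reserving ~1 GB for the OS and 512 MB each for the datanode and
>> tasktracker leaves about 1.8 GB. At the Hadoop 1.x default task heap of
>> 200 MB (mapred.child.java.opts = -Xmx200m), that supports roughly
>>
>>   slots ~= (3834 - 1024 - 512 - 512) / 200 ~= 8
>>
>> slots in total, to be split between
>> mapred.tasktracker.map.tasks.maximum and
>> mapred.tasktracker.reduce.tasks.maximum.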
>>
>>
>> On Mon, May 13, 2013 at 11:18 AM, sam liu <[EMAIL PROTECTED]> wrote:
>>
>>> For node3, the memory is:
>>>              total       used       free     shared    buffers     cached
>>> Mem:          3834       3666        167          0        187       1136
>>> -/+ buffers/cache:       2342       1491
>>> Swap:         8196          0       8196
>>>
>>> For a 3-node cluster like mine, what is the minimum free/available
>>> memory required for the datanode and tasktracker processes, without
>>> running any map/reduce tasks?
>>> Is there a formula to determine it?
>>>
>>>
>>> 2013/5/13 Rishi Yadav <[EMAIL PROTECTED]>
>>>
>>>> Can you share the specs of node3? In my experience, even on a
>>>> test/demo cluster, anything below 4 GB of RAM makes the node almost
>>>> inaccessible.
>>>>
>>>>
>>>>
>>>> On Sun, May 12, 2013 at 8:25 PM, sam liu <[EMAIL PROTECTED]>wrote:
>>>>
>>>>> Got some exceptions on node3:
>>>>> 1. datanode log:
>>>>> 2013-04-17 11:13:44,719 INFO
>>>>> org.apache.hadoop.hdfs.server.datanode.DataNode: writeBlock
>>>>> blk_2478755809192724446_1477 received exception
>>>>> java.net.SocketTimeoutException: 63000 millis timeout while waiting for
>>>>> channel to be ready for read. ch :
>>>>> java.nio.channels.SocketChannel[connected local=/9.50.102.80:58371
>>>>> remote=/9.50.102.79:50010]
>>>>> 2013-04-17 11:13:44,721 ERROR
>>>>> org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(
>>>>> 9.50.102.80:50010,
>>>>> storageID=DS-2038715921-9.50.102.80-50010-1366091297051, infoPort=50075,
>>>>> ipcPort=50020):DataXceiver
>>>>> java.net.SocketTimeoutException: 63000 millis timeout while waiting
>>>>> for channel to be ready for read. ch :
>>>>> java.nio.channels.SocketChannel[connected local=/9.50.102.80:58371
>>>>> remote=/9.50.102.79:50010]
>>>>>         at
>>>>> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
>>>>>         at
>>>>> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>>>         at
>>>>> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>>>         at
>>>>> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:116)
>>>>>         at java.io.DataInputStream.readShort(DataInputStream.java:306)
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:359)
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:112)
>>>>>         at java.lang.Thread.run(Thread.java:738)
>>>>> 2013-04-17 11:13:44,818 INFO
>>>>> org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block
>>>>> blk_8413378381769505032_1477 src: /9.50.102.81:35279 dest:
>>>>> /9.50.102.80:50010
>>>>>
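>>>>> The 63000 ms figure above is most likely dfs.socket.timeout (default
>>>>> 60000 ms) plus the small per-downstream-node extension the datanode
>>>>> adds for write pipelines. If the network is merely slow, it can be
>>>>> raised in hdfs-site.xml (a sketch; 120000 is an assumed value to tune
>>>>> for your environment):
>>>>>
>>>>>   <property>
>>>>>     <name>dfs.socket.timeout</name>
>>>>>     <value>120000</value>
>>>>>   </property>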
>>>>>
>>>>> 2. tasktracker log:
>>>>> 2013-04-23 11:48:26,783 INFO org.apache.hadoop.mapred.UserLogCleaner:
>>>>> Deleting user log path job_201304152248_0011
>>>>> 2013-04-30 14:48:15,506 ERROR org.apache.hadoop.mapred.TaskTracker: