MapReduce user mailing list - Re: The minimum memory requirements for datanode and namenode?


Re: The minimum memory requirements for datanode and namenode?
Nitin Pawar 2013-05-13, 05:51
Sam,

There is no formula for determining how much memory to give to the
datanode and tasktracker. The formula that exists is for how many task
slots you want to have on a machine.

In my prior experience, we gave 512 MB of memory each to the datanode and
tasktracker.
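For reference, here is a minimal sketch of where those values are typically
set, assuming a Hadoop 1.x layout; the 512 MB heaps and the 2/2 slot counts
below are illustrative numbers for a small cluster, not recommendations:

    # conf/hadoop-env.sh -- cap the DataNode and TaskTracker daemon heaps
    export HADOOP_DATANODE_OPTS="-Xmx512m $HADOOP_DATANODE_OPTS"
    export HADOOP_TASKTRACKER_OPTS="-Xmx512m $HADOOP_TASKTRACKER_OPTS"

    <!-- conf/mapred-site.xml -- task slots offered by each tasktracker -->
    <property>
      <name>mapred.tasktracker.map.tasks.maximum</name>
      <value>2</value>
    </property>
    <property>
      <name>mapred.tasktracker.reduce.tasks.maximum</name>
      <value>2</value>
    </property>

Each occupied slot runs its own child JVM whose heap comes from
mapred.child.java.opts (-Xmx200m by default), so it is the slot counts,
not the daemon heaps, that you size against the machine's memory.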
On Mon, May 13, 2013 at 11:18 AM, sam liu <[EMAIL PROTECTED]> wrote:

> For node3, the memory is:
>              total       used       free     shared    buffers     cached
> Mem:          3834       3666        167          0        187       1136
> -/+ buffers/cache:       2342       1491
> Swap:         8196          0       8196
>
> For a 3-node cluster like mine, what is the minimum free/available memory
> required for the datanode process and the tasktracker process, without
> running any map/reduce tasks?
> Is there any formula to determine it?
>
>
> 2013/5/13 Rishi Yadav <[EMAIL PROTECTED]>
>
>> Can you tell us the specs of node3? In my experience, even on a test/demo
>> cluster, anything below 4 GB of RAM makes the node almost inaccessible.
>>
>>
>>
>> On Sun, May 12, 2013 at 8:25 PM, sam liu <[EMAIL PROTECTED]> wrote:
>>
>>> Got some exceptions on node3:
>>> 1. datanode log:
>>> 2013-04-17 11:13:44,719 INFO
>>> org.apache.hadoop.hdfs.server.datanode.DataNode: writeBlock
>>> blk_2478755809192724446_1477 received exception
>>> java.net.SocketTimeoutException: 63000 millis timeout while waiting for
>>> channel to be ready for read. ch :
>>> java.nio.channels.SocketChannel[connected local=/9.50.102.80:58371 remote=/9.50.102.79:50010]
>>> 2013-04-17 11:13:44,721 ERROR
>>> org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(
>>> 9.50.102.80:50010,
>>> storageID=DS-2038715921-9.50.102.80-50010-1366091297051, infoPort=50075,
>>> ipcPort=50020):DataXceiver
>>> java.net.SocketTimeoutException: 63000 millis timeout while waiting for
>>> channel to be ready for read. ch :
>>> java.nio.channels.SocketChannel[connected local=/9.50.102.80:58371 remote=/9.50.102.79:50010]
>>>         at
>>> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
>>>         at
>>> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>         at
>>> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>         at
>>> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:116)
>>>         at java.io.DataInputStream.readShort(DataInputStream.java:306)
>>>         at
>>> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:359)
>>>         at
>>> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:112)
>>>         at java.lang.Thread.run(Thread.java:738)
>>> 2013-04-17 11:13:44,818 INFO
>>> org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block
>>> blk_8413378381769505032_1477 src: /9.50.102.81:35279 dest: /9.50.102.80:50010
>>>
>>>
>>> 2. tasktracker log:
>>> 2013-04-23 11:48:26,783 INFO org.apache.hadoop.mapred.UserLogCleaner:
>>> Deleting user log path job_201304152248_0011
>>> 2013-04-30 14:48:15,506 ERROR org.apache.hadoop.mapred.TaskTracker:
>>> Caught exception: java.io.IOException: Call to node1/9.50.102.81:9001 failed on local exception: java.io.IOException: Connection reset by peer
>>>         at org.apache.hadoop.ipc.Client.wrapException(Client.java:1144)
>>>         at org.apache.hadoop.ipc.Client.call(Client.java:1112)
>>>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
>>>         at org.apache.hadoop.mapred.$Proxy2.heartbeat(Unknown Source)
>>>         at
>>> org.apache.hadoop.mapred.TaskTracker.transmitHeartBeat(TaskTracker.java:2008)
>>>         at
>>> org.apache.hadoop.mapred.TaskTracker.offerService(TaskTracker.java:1802)
>>>         at
>>> org.apache.hadoop.mapred.TaskTracker.run(TaskTracker.java:2654)
>>>         at
>>> org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:3909)
>>> Caused by: java.io.IOException: Connection reset by peer
>>>         at sun.nio.ch.FileDispatcher.read0(Native Method)
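
A rough reading of the free output quoted above, using the 512 MB figures I
mentioned (the 200 MB child heap is the stock mapred.child.java.opts default
and may not match your configuration):

      512 MB   datanode heap
    + 512 MB   tasktracker heap
    --------
     1024 MB   for the two daemons alone, versus the ~1491 MB that the
               "-/+ buffers/cache" line reports as actually available on
               node3; every running task then adds its own child JVM on
               top (about 200 MB each by default).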
Nitin Pawar
Further replies in this thread:
sam liu 2013-05-13, 05:58
Nitin Pawar 2013-05-13, 06:00
sam liu 2013-05-13, 06:22
Nitin Pawar 2013-05-13, 06:53
shashwat shriparv 2013-05-13, 08:28