Re: The minimum memory requirements to datanode and namenode?
4GB memory on NN? This will run out of memory in a few days.

You will need to make sure your NN has more than double the RAM of your
DNs if you have a miniature cluster.
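
For what it's worth, a minimal sketch of where those heaps are usually set in a
Hadoop 1.x install (conf/hadoop-env.sh); the 2048m/1024m figures below are only
illustrative, not a recommendation for this particular cluster:

    # conf/hadoop-env.sh -- illustrative heap settings, tune to your own nodes
    export HADOOP_HEAPSIZE=1024                                        # default max heap (MB) for the daemons
    export HADOOP_NAMENODE_OPTS="-Xmx2048m ${HADOOP_NAMENODE_OPTS}"    # larger heap for the NameNode
    export HADOOP_DATANODE_OPTS="-Xmx1024m ${HADOOP_DATANODE_OPTS}"    # DataNode heap
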
On Mon, May 13, 2013 at 11:52 AM, sam liu <[EMAIL PROTECTED]> wrote:

> I can issue the command 'hadoop dfsadmin -report', but it does not return any
> result for a long time. Also, I can open the NN UI (http://namenode:50070),
> but it just stays in a connecting state and never returns any cluster
> statistics.
>
> The mem of NN:
>                   total       used       free
> Mem:          3834       3686        148
>
> After running a top command, I can see the following processes are taking up
> the memory: namenode, jobtracker, tasktracker, hbase, ...
>
> I can restart the cluster, and then the cluster will be healthy again. But
> this issue will probably recur a few days later. I think it's caused by a
> lack of free/available mem, but I do not know how much extra free/available
> mem per node is required, besides the mem necessary for running the
> datanode/tasktracker processes.
>
>
>
>
> 2013/5/13 Nitin Pawar <[EMAIL PROTECTED]>
>
>> Just one node running low on memory does not mean your cluster is down.
>>
>> Can you see your HDFS health on the NN UI?
>>
>> How much memory do you have on the NN? If there are no jobs running on the
>> cluster, then you can safely restart the datanode and tasktracker.
>>
>> Also run a top command and figure out which processes are taking up the
>> memory, and for what purpose (a small sketch of this appears at the end of
>> this page).
>>
>>
>> On Mon, May 13, 2013 at 11:28 AM, sam liu <[EMAIL PROTECTED]> wrote:
>>
>>> Nitin,
>>>
>>> In my cluster, the tasktracker and datanode have already been launched,
>>> and are still running now. But the free/available mem of node3 is now just
>>> 167 MB. Do you think that is the reason why my Hadoop is unhealthy now (it
>>> does not return a result for the command 'hadoop dfs -ls /')?
>>>
>>>
>>> 2013/5/13 Nitin Pawar <[EMAIL PROTECTED]>
>>>
>>>> Sam,
>>>>
>>>> There is no formula for determining how much memory one should give to the
>>>> datanode and tasktracker. The formula that does exist is for how many slots
>>>> you want to have on a machine (a rough sketch of that slot arithmetic
>>>> appears at the end of this page).
>>>>
>>>> In my prior experience, we gave 512MB of memory each to the datanode and
>>>> tasktracker.
>>>>
>>>>
>>>> On Mon, May 13, 2013 at 11:18 AM, sam liu <[EMAIL PROTECTED]> wrote:
>>>>
>>>>> For node3, the memory is:
>>>>>              total       used       free     shared    buffers     cached
>>>>> Mem:          3834       3666        167          0        187       1136
>>>>> -/+ buffers/cache:       2342       1491
>>>>> Swap:         8196          0       8196
>>>>>
>>>>> For a 3-node cluster like mine, what is the required minimum
>>>>> free/available memory for the datanode process and the tasktracker
>>>>> process, without running any map/reduce task?
>>>>> Is there any formula to determine it?
>>>>>
>>>>>
>>>>> 2013/5/13 Rishi Yadav <[EMAIL PROTECTED]>
>>>>>
>>>>>> Can you tell us the specs of node3? Even on a test/demo cluster, anything
>>>>>> below 4 GB of RAM makes the node almost inaccessible, in my experience.
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Sun, May 12, 2013 at 8:25 PM, sam liu <[EMAIL PROTECTED]> wrote:
>>>>>>
>>>>>>> Got some exceptions on node3:
>>>>>>> 1. datanode log:
>>>>>>> 2013-04-17 11:13:44,719 INFO
>>>>>>> org.apache.hadoop.hdfs.server.datanode.DataNode: writeBlock
>>>>>>> blk_2478755809192724446_1477 received exception
>>>>>>> java.net.SocketTimeoutException: 63000 millis timeout while waiting for
>>>>>>> channel to be ready for read. ch :
>>>>>>> java.nio.channels.SocketChannel[connected local=/9.50.102.80:58371
>>>>>>> remote=/9.50.102.79:50010]
>>>>>>> 2013-04-17 11:13:44,721 ERROR
>>>>>>> org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(
>>>>>>> 9.50.102.80:50010,
>>>>>>> storageID=DS-2038715921-9.50.102.80-50010-1366091297051, infoPort=50075,
>>>>>>> ipcPort=50020):DataXceiver
>>>>>>> java.net.SocketTimeoutException: 63000 millis timeout while waiting
Nitin Pawar
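
As a footnote to the suggestion above about using top to see what is holding
the memory: a minimal sketch of checking the Hadoop daemons directly (plain
jps/ps, nothing Hadoop-specific; RSS is reported in KB):

    jps -l                                    # pid + main class of each running Java daemon
    ps -eo pid,rss,comm --sort=-rss | head    # processes sorted by resident memory (RSS, KB)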
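
And on the slot formula mentioned earlier: as far as I know there is no
official formula, but a common rule of thumb (an assumption on my part, with
purely illustrative numbers) is to divide the RAM left over after the daemons
by the per-task child heap:

    # slots ~= (RAM left after datanode + tasktracker + OS) / per-task child heap
    TOTAL_MB=4096         # node RAM (illustrative)
    RESERVED_MB=1024      # DN heap + TT heap + OS overhead (illustrative)
    CHILD_HEAP_MB=512     # matches e.g. mapred.child.java.opts=-Xmx512m
    echo $(( (TOTAL_MB - RESERVED_MB) / CHILD_HEAP_MB ))    # -> 6 slots in total
    # split e.g. into 4 map + 2 reduce slots via
    # mapred.tasktracker.map.tasks.maximum / mapred.tasktracker.reduce.tasks.maximum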