MapReduce >> mail # user >> what happen in my hadoop cluster?


RE: Re: what happen in my hadoop cluster?
Can you check the NameNode logs to see what is going on with the NameNode?

When the NameNode starts, it stays in safe mode while initializing, and after
some time safe mode is turned off automatically. If it has gone into safe mode
for any other reason, we can find that out from the NameNode logs.
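If the NameNode never leaves safe mode on its own, its safe mode state can also be checked, waited on, or (as a last resort) cleared from the command line. A sketch using the `hadoop dfsadmin` tool of this Hadoop generation, run against a live cluster:

```shell
# Report whether the NameNode is currently in safe mode
hadoop dfsadmin -safemode get

# Block until the NameNode leaves safe mode on its own
hadoop dfsadmin -safemode wait

# Force the NameNode out of safe mode -- only do this once you
# understand why the reported-block ratio is low (e.g. dead DataNodes),
# since blocks on unreported nodes may be marked missing
hadoop dfsadmin -safemode leave
```

Note that forcing safe mode off does not fix the underlying problem of DataNodes failing to report their blocks.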

Devaraj K

  _____

From: 周俊清 [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, July 27, 2011 1:08 PM
To: [EMAIL PROTECTED]
Subject: Re:Re: what happen in my hadoop cluster?

Yes, I can see all the DataNodes on the web
page: http://dn224.pengyun.org:50070/dfsnodelist.jsp?
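The same live-node information can be checked from the command line instead of the web UI; a sketch, run from any node with the cluster configuration on its classpath:

```shell
# Summarize HDFS capacity and list every DataNode with its
# status, capacity, and last-contact time
hadoop dfsadmin -report
```

If all DataNodes show as live here but the reported-block ratio stays near 0.29, the DataNodes may be up without having completed their block reports to this NameNode.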

--
----------------------------

周俊清

[EMAIL PROTECTED]
On 2011-07-27 15:30:37, "Harsh J" <[EMAIL PROTECTED]> wrote:
>Are all your DataNodes up?
>
>2011/7/27 周俊清 <[EMAIL PROTECTED]>:
>> hello everyone,
>>     I got an exception from my jobtracker's log file as follows:
>> 2011-07-27 01:58:04,197 INFO org.apache.hadoop.mapred.JobTracker: Cleaning up the system directory
>> 2011-07-27 01:58:04,230 INFO org.apache.hadoop.mapred.JobTracker: problem cleaning system directory:
>> hdfs://dn224.pengyun.org:56900/home/hadoop/hadoop-tmp203/mapred/system
>> org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete
>> /home/hadoop/hadoop-tmp203/mapred/system. Name node is in safe mode.
>> The ratio of reported blocks 0.2915 has not reached the threshold 0.9990.
>> Safe mode will be turned off automatically.
>>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1851)
>>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1831)
>>     at org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:691)
>>     ……
>> and
>>    the log message of namenode:
>> 2011-07-27 00:00:00,219 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 56900, call delete(/home/hadoop/hadoop-tmp203/mapred/system, true) from 192.168.1.224:51312: error: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot
>> delete /home/hadoop/hadoop-tmp203/mapred/system. Name node is in safe mode.
>> The ratio of reported blocks 0.2915 has not reached the threshold 0.9990.
>> Safe mode will be turned off automatically.
>> org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete
>> /home/hadoop/hadoop-tmp203/mapred/system. Name node is in safe mode.
>> The ratio of reported blocks 0.2915 has not reached the threshold 0.9990.
>> Safe mode will be turned off automatically.
>>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1851)
>>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1831)
>>     at org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:691)
>>     at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>     at java.lang.reflect.Method.invoke(Method.java:597)
>>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:523)
>>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1383)
>>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1379)
>>     at java.security.AccessController.doPrivileged(Native Method)
>>     at javax.security.auth.Subject.doAs(Subject.java:396)
>>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
>>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1377)
>>
>>  It means, I think, that the NameNode is stuck in safe mode. What can
>> I do about these exceptions? Can anyone tell me why? I cannot find the
>> file "/home/hadoop/hadoop-tmp203/mapred/system" in my system. The
>> exceptions above keep repeating in the log file, even when I restart my
>> hadoop.
>>    Thanks for your concern.
>>
>>
>> ----------------------------
>> Junqing Zhou
>> [EMAIL PROTECTED]
>>
>>
>>
>>
>
>
>
>--
>Harsh J
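For context on the 0.9990 figure in the messages above: safe mode ends automatically only once that fraction of HDFS blocks has been reported by DataNodes, and the fraction is configurable on the NameNode. A minimal hdfs-site.xml fragment showing the relevant property for this Hadoop generation (the value shown is the default; lowering it is a workaround for slow block reports, not a fix for missing DataNodes):

```
<property>
  <!-- Fraction of blocks that must be reported by DataNodes before
       the NameNode leaves safe mode automatically -->
  <name>dfs.safemode.threshold.pct</name>
  <value>0.999f</value>
</property>
```

With the ratio stuck at 0.2915, roughly 71% of blocks were never reported, which usually points to DataNodes that are down or unable to reach the NameNode rather than to this threshold being wrong.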