Hadoop >> mail # user >> stuck in safe mode after restarting dfs after found dead node


Re: stuck in safe mode after restarting dfs after found dead node
If the files are gone forever, you should run:

hadoop fsck -delete /

to acknowledge they have moved on from existence. Otherwise things
that attempt to read these files will, to put it in a technical way,
BARF.
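Before deleting anything, it can help to see which files are actually affected. A minimal dry-run sketch of that sequence (the hadoop 1.x CLI is assumed to be on PATH in real use; the wrapper below only prints the commands, so nothing touches a cluster):

```shell
# Dry-run sketch: print each command instead of executing it.
# Replace the echo with "$@" to actually run against a cluster.
run() { echo "+ $*"; }

run hadoop fsck /                  # report which blocks are CORRUPT/MISSING
run hadoop fsck -delete /          # drop the metadata for the lost files
run hadoop dfsadmin -safemode get  # confirm safe-mode state afterwards
```

In a real run, fsck ends its report with a status line (HEALTHY or CORRUPT), which should flip back to HEALTHY once the records of the lost files are removed.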

On Fri, Jul 13, 2012 at 12:22 PM, Juan Pino <[EMAIL PROTECTED]> wrote:
> Thank you for your reply. I ran that command before and it works fine, but
> hadoop fs -ls displays the list of files in the user's directory and then
> hangs for quite a while (~10 minutes) before
> handing the command line prompt back; if I rerun the same command
> there is no problem. That is why I would like to be able to leave safe mode
> automatically (at least I think it's related).
> Also, in the hdfs web page, clicking on the Live Nodes or Dead Nodes links
> hangs forever, but I am able to browse the file
> system without any problem in the browser.
> There is no error in the logs.
> Please let me know what sort of details I can provide to help resolve this
> issue.
>
> Best,
>
> Juan
>
> On Fri, Jul 13, 2012 at 4:10 PM, Edward Capriolo <[EMAIL PROTECTED]> wrote:
>
>> If the datanode is not coming back you have to explicitly tell hadoop
>> to leave safemode.
>>
>> http://hadoop.apache.org/common/docs/r0.17.2/hdfs_user_guide.html#Safemode
>>
>> hadoop dfsadmin -safemode leave
>>
>>
>> On Fri, Jul 13, 2012 at 9:35 AM, Juan Pino <[EMAIL PROTECTED]>
>> wrote:
>> > Hi,
>> >
>> > I can't get HDFS to leave safe mode automatically. Here is what I did:
>> >
>> > -- there was a dead node
>> > -- I stopped dfs
>> > -- I restarted dfs
>> > -- Safe mode wouldn't leave automatically
>> >
>> > I am using hadoop-1.0.2
>> >
>> > Here are the logs:
>> >
>> > end of hadoop-hadoop-namenode.log (attached):
>> >
>> > 2012-07-13 13:22:29,372 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode ON.
>> > The ratio of reported blocks 0.9795 has not reached the threshold 0.9990. Safe mode will be turned off automatically.
>> > 2012-07-13 13:22:29,375 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode extension entered.
>> > The ratio of reported blocks 0.9990 has reached the threshold 0.9990. Safe mode will be turned off automatically in 29 seconds.
>> > 2012-07-13 13:22:29,375 INFO org.apache.hadoop.hdfs.StateChange: *BLOCK* NameSystem.processReport: from , blocks: 3128, processing time: 4 msecs
>> > 2012-07-13 13:31:29,201 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.processReport: discarded non-initial block report from because namenode still in startup phase
>> >
>> > Any help would be greatly appreciated.
>> >
>> > Best,
>> >
>> > Juan
>> >
>>
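The log excerpt quoted above explains the symptom: the namenode leaves safe mode only once the fraction of reported blocks reaches the configured threshold (0.9990 here, per dfs.safemode.threshold.pct), and with the dead node's blocks missing the ratio sat at 0.9795. A small sketch of that check, using illustrative block counts (not the actual cluster's numbers):

```shell
# Safe-mode exit check as sketched from the log messages:
# leave safe mode once reported_blocks / total_blocks >= threshold.
safemode_ok() {
  awk -v r="$1" -v t="$2" -v th="$3" 'BEGIN { exit !(r / t >= th) }'
}

# Illustrative counts matching the logged ratio, against threshold 0.9990:
if safemode_ok 9795 10000 0.9990; then
  echo "namenode would leave safe mode"
else
  echo "namenode stays in safe mode"   # this branch: 0.9795 < 0.9990
fi
```

This is why deleting the records of the lost files (or force-leaving with `hadoop dfsadmin -safemode leave`) resolves it: either the denominator shrinks until the ratio clears the threshold, or the check is bypassed entirely.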