HDFS >> mail # user >> FSCK / -move and -delete FAIL


Re: FSCK / -move and -delete FAIL
If you have decided to take the 'fsck -move' route, you need to
exit safemode first:

# Exit safemode first. After this, the NN will no longer reject writes/updates.
sudo -u hdfs hadoop dfsadmin -safemode leave
# Then move the blocks of corrupt files under / to /lost+found.
sudo -u hdfs hadoop fsck / -move
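If you want to script this, a minimal sketch is below. The helper name is mine, not a Hadoop command; the "Safe mode is ON"/"Safe mode is OFF" strings are what `hadoop dfsadmin -safemode get` prints in 1.x, and the real cluster commands are left as comments:

```shell
# Hypothetical helper: decide from `dfsadmin -safemode get` output whether
# we still have to leave safe mode before `fsck / -move` can create /lost+found.
needs_safemode_leave() {
  case "$1" in
    *"Safe mode is ON"*) return 0 ;;  # still in safe mode: leave it first
    *) return 1 ;;                    # already out: go straight to fsck -move
  esac
}

# On a live cluster you would capture the real status:
#   status=$(sudo -u hdfs hadoop dfsadmin -safemode get)
status="Safe mode is ON"
if needs_safemode_leave "$status"; then
  echo "leaving safe mode first"
  # sudo -u hdfs hadoop dfsadmin -safemode leave
fi
# sudo -u hdfs hadoop fsck / -move
```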

On Wed, May 16, 2012 at 1:48 AM, Terry Healy <[EMAIL PROTECTED]> wrote:
> Running 1.0.2 on cluster with 10 datanodes. After running stop-all.sh
> and start-all.sh following the addition of a new datanode, the NN stays
> in SafeMode.
>
> hadoop fsck /  reports several MISSING and CORRUPT blocks, but I have
> not been able to continue after trying both the -move and -delete
> options. When I run the -move option, it reports:
>
> org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create
> directory /lost+found. Name node is in safe mode.
>
> In what directory is it trying to create the /lost+found?
>
> What can I do to purge the errors and get HDFS running again?
>
> Thanks,
>
> Terry

--
Harsh J