HDFS >> mail # user >> namenode directory failure question


namenode directory failure question
Hello all,

We have our dfs.name.dir configured to write to two local directories and
one NFS directory.  The NFS server in question had to be restarted a couple
of days back, and that copy of the namenode data fell behind as a result.
As I understand it, restarting hadoop will take the most recent copy of
the namenode data, in this case one of the two local copies, and write
that to all three locations going forward.  So that solves the problem.
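For reference, a setup like the one described above is configured by giving dfs.name.dir a comma-separated list of directories in hdfs-site.xml; the namenode writes its image and edit log to all of them. The paths below are illustrative placeholders, not the actual paths from this cluster:

```xml
<!-- hdfs-site.xml: dfs.name.dir takes a comma-separated list of directories, -->
<!-- and the namenode mirrors its metadata to every one of them.              -->
<!-- Example paths only; substitute your own local and NFS mount points.      -->
<property>
  <name>dfs.name.dir</name>
  <value>/data1/dfs/name,/data2/dfs/name,/mnt/nfs/dfs/name</value>
</property>
```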

My question is this: is there a way to get the NFS copy of the data back
in sync without having to shut down and restart the namenode? I'd prefer
not to take an outage if I can help it.

Thanks.

--Brennon