RE: HDFS Startup Failure due to dfs.namenode.rpc-address and Shared Edits Directory
Not yet. Please correct it.
On Aug 27, 2013 11:39 PM, "Smith, Joshua D." <[EMAIL PROTECTED]> wrote:

>  nn.domain is a placeholder for the actual fully qualified hostname of
> my NameNode.
>
> snn.domain is a placeholder for the actual fully qualified hostname of my
> StandbyNameNode.
>
> Of course both the NameNode and the StandbyNameNode are running exactly
> the same software with the same configuration since this is YARN. I’m not
> running a SecondaryNameNode.
>
> The actual fully qualified hostnames are on another network and my
> customer is sensitive about privacy, so that’s why I didn’t post the actual
> values.
>
> So, I think I have the equivalent of nn1,nn2, do I not?
>
> *From:* Azuryy Yu [mailto:[EMAIL PROTECTED]]
> *Sent:* Tuesday, August 27, 2013 11:32 AM
> *To:* [EMAIL PROTECTED]
> *Subject:* RE: HDFS Startup Failure due to dfs.namenode.rpc-address and
> Shared Edits Directory
>
> dfs.ha.namenodes.mycluster
> nn.domain,snn.domain
>
> it should be:
> dfs.ha.namenodes.mycluster
> nn1,nn2
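>
> A minimal sketch of how these hdfs-site.xml entries should line up,
> assuming the nn1/nn2 logical IDs already used by your rpc-address keys:
>
>   <!-- logical NameNode IDs for the nameservice, not hostnames -->
>   <property>
>     <name>dfs.ha.namenodes.mycluster</name>
>     <value>nn1,nn2</value>
>   </property>
>   <!-- the hostnames belong in the per-ID rpc-address keys -->
>   <property>
>     <name>dfs.namenode.rpc-address.mycluster.nn1</name>
>     <value>nn.domain:8020</value>
>   </property>
>   <property>
>     <name>dfs.namenode.rpc-address.mycluster.nn2</name>
>     <value>snn.domain:8020</value>
>   </property>
>
> Each ID listed in dfs.ha.namenodes.mycluster must have a matching
> dfs.namenode.rpc-address.mycluster.<id> entry; putting the hostnames in
> dfs.ha.namenodes.mycluster leaves those lookups with nothing to match.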
>
> On Aug 27, 2013 11:22 PM, "Smith, Joshua D." <[EMAIL PROTECTED]> wrote:
>
> Harsh-
>
> Here are all of the other values that I have configured.
>
> hdfs-site.xml
> -----------------
>
> dfs.webhdfs.enabled
> true
>
> dfs.client.failover.proxy.provider.mycluster
> org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
>
> dfs.ha.automatic-failover.enabled
> true
>
> ha.zookeeper.quorum
> nn.domain:2181,snn.domain:2181,jt.domain:2181
>
> dfs.journalnode.edits.dir
> /opt/hdfs/data1/dfs/jn
>
> dfs.namenode.shared.edits.dir
> qjournal://nn.domain:8485;snn.domain:8485;jt.domain:8485/mycluster
>
> dfs.nameservices
> mycluster
>
> dfs.ha.namenodes.mycluster
> nn.domain,snn.domain
>
> dfs.namenode.rpc-address.mycluster.nn1
> nn.domain:8020
>
> dfs.namenode.rpc-address.mycluster.nn2
> snn.domain:8020
>
> dfs.namenode.http-address.mycluster.nn1
> nn.domain:50070
>
> dfs.namenode.http-address.mycluster.nn2
> snn.domain:50070
>
> dfs.name.dir
> /var/lib/hadoop-hdfs/cache/hdfs/dfs/name
>
>
> core-site.xml
> ----------------
> fs.trash.interval
> 1440
>
> fs.trash.checkpoint.interval
> 1440
>
> fs.defaultFS
> hdfs://mycluster
>
> dfs.datanode.data.dir
>
> /hdfs/data1,/hdfs/data2,/hdfs/data3,/hdfs/data4,/hdfs/data5,/hdfs/data6,/hdfs/data7
>
>
> mapred-site.xml
> ----------------------
> mapreduce.framework.name
> yarn
>
> mapreduce.jobhistory.address
> jt.domain:10020
>
> mapreduce.jobhistory.webapp.address
> jt.domain:19888
>
>
> yarn-site.xml
> -------------------
> yarn.nodemanager.aux-services
> mapreduce.shuffle
>
> yarn.nodemanager.aux-services.mapreduce.shuffle.class
> org.apache.hadoop.mapred.ShuffleHandler
>
> yarn.log-aggregation-enable
> true
>
> yarn.nodemanager.remote-app-log-dir
> /var/log/hadoop-yarn/apps
>
> yarn.application.classpath
> $HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/*,$HADOOP_COMMON_HOME/lib/*,$HADOOP_HDFS_HOME/*,$HADOOP_HDFS_HOME/lib/*,$HADOOP_MAPRED_HOME/*,$HADOOP_MAPRED_HOME/lib/*,$YARN_HOME/*,$YARN_HOME/lib/*
>
> yarn.resourcemanager.resource-tracker.address
> jt.domain:8031
>
> yarn.resourcemanager.address
> jt.domain:8032
>
> yarn.resourcemanager.scheduler.address
> jt.domain:8030
>
> yarn.resourcemanager.admin.address
> jt.domain:8033
>
> yarn.resourcemanager.webapp.address
> jt.domain:8088
>
>
> These are the only interesting entries in my HDFS log file when I try to
> start the NameNode with "service hadoop-hdfs-namenode start".
>
> WARN org.apache.hadoop.hdfs.server.common.Util: Path
> /var/lib/hadoop-hdfs/cache/hdfs/dfs/name should be specified as a URI in
> configuration files. Please update hdfs configuration.
> WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image
> storage directory (dfs.namenode.name.dir) configured. Beware of data loss
> due to lack of redundant storage directories!
> INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
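>
> The first WARN refers to the dfs.name.dir entry above; the current property
> name is dfs.namenode.name.dir, and it expects a URI. A sketch, assuming the
> same local path:
>
>   <property>
>     <name>dfs.namenode.name.dir</name>
>     <!-- the file:// URI form satisfies the "should be specified as a URI"
>          warning; adding a second comma-separated directory would address
>          the redundancy warning as well -->
>     <value>file:///var/lib/hadoop-hdfs/cache/hdfs/dfs/name</value>
>   </property>
>
> The final "HA Enabled: false" line is consistent with Azuryy's diagnosis:
> with dfs.ha.namenodes.mycluster listing hostnames instead of the logical
> IDs nn1,nn2, the NameNode finds no matching rpc-address keys, treats the
> cluster as non-HA, and then rejects the configured shared edits directory
> at startup.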