RE: HDFS Startup Failure due to dfs.namenode.rpc-address and Shared Edits Directory
You have:

dfs.ha.namenodes.mycluster
nn.domain,snn.domain

It should be:

dfs.ha.namenodes.mycluster
nn1,nn2
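The IDs in dfs.ha.namenodes.<nameservice> are logical names, not hostnames; they must match the suffixes of the dfs.namenode.rpc-address.<nameservice>.<id> keys, while the hostnames belong only in the values. A minimal sketch of how the three properties should line up in hdfs-site.xml, using the hosts from the mail below:

<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>nn.domain:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>snn.domain:8020</value>
</property>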
On Aug 27, 2013 11:22 PM, "Smith, Joshua D." <[EMAIL PROTECTED]>
wrote:

> Harsh-
>
> Here are all of the other values that I have configured.
>
> hdfs-site.xml
> -----------------
>
> dfs.webhdfs.enabled
> true
>
> dfs.client.failover.proxy.provider.mycluster
> org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
>
> dfs.ha.automatic-failover.enabled
> true
>
> ha.zookeeper.quorum
> nn.domain:2181,snn.domain:2181,jt.domain:2181
>
> dfs.journalnode.edits.dir
> /opt/hdfs/data1/dfs/jn
>
> dfs.namenode.shared.edits.dir
> qjournal://nn.domain:8485;snn.domain:8485;jt.domain:8485/mycluster
>
> dfs.nameservices
> mycluster
>
> dfs.ha.namenodes.mycluster
> nn.domain,snn.domain
>
> dfs.namenode.rpc-address.mycluster.nn1
> nn.domain:8020
>
> dfs.namenode.rpc-address.mycluster.nn2
> snn.domain:8020
>
> dfs.namenode.http-address.mycluster.nn1
> nn.domain:50070
>
> dfs.namenode.http-address.mycluster.nn2
> snn.domain:50070
>
> dfs.name.dir
> /var/lib/hadoop-hdfs/cache/hdfs/dfs/name
>
>
> core-site.xml
> ----------------
> fs.trash.interval
> 1440
>
> fs.trash.checkpoint.interval
> 1440
>
> fs.defaultFS
> hdfs://mycluster
>
> dfs.datanode.data.dir
> /hdfs/data1,/hdfs/data2,/hdfs/data3,/hdfs/data4,/hdfs/data5,/hdfs/data6,/hdfs/data7
>
>
> mapred-site.xml
> ----------------------
> mapreduce.framework.name
> yarn
>
> mapreduce.jobhistory.address
> jt.domain:10020
>
> mapreduce.jobhistory.webapp.address
> jt.domain:19888
>
>
> yarn-site.xml
> -------------------
> yarn.nodemanager.aux-services
> mapreduce.shuffle
>
> yarn.nodemanager.aux-services.mapreduce.shuffle.class
> org.apache.hadoop.mapred.ShuffleHandler
>
> yarn.log-aggregation-enable
> true
>
> yarn.nodemanager.remote-app-log-dir
> /var/log/hadoop-yarn/apps
>
> yarn.application.classpath
> $HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/*,$HADOOP_COMMON_HOME/lib/*,$HADOOP_HDFS_HOME/*,$HADOOP_HDFS_HOME/lib/*,$HADOOP_MAPRED_HOME/*,$HADOOP_MAPRED_HOME/lib/*,$YARN_HOME/*,$YARN_HOME/lib/*
>
> yarn.resourcemanager.resource-tracker.address
> jt.domain:8031
>
> yarn.resourcemanager.address
> jt.domain:8032
>
> yarn.resourcemanager.scheduler.address
> jt.domain:8030
>
> yarn.resourcemanager.admin.address
> jt.domain:8033
>
> yarn.resourcemanager.webapp.address
> jt.domain:8088
>
>
> These are the only interesting entries in my HDFS log file when I try to
> start the NameNode with "service hadoop-hdfs-namenode start".
>
> WARN org.apache.hadoop.hdfs.server.common.Util: Path
> /var/lib/hadoop-hdfs/cache/hdfs/dfs/name should be specified as a URI in
> configuration files. Please update hdfs configuration.
> WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image
> storage directory (dfs.namenode.name.dir) configured. Beware of data loss
> due to lack of redundant storage directories!
> INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
> WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Configured NNs:
> ((there's a blank line here implying no configured NameNodes!))
> ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
> initialization failed.
> java.io.IOException: Invalid configuration: a shared edits dir must not be
> specified if HA is not enabled.
> FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in
> namenode join
> java.io.IOException: Invalid configuration: a shared edits dir must not be
> specified if HA is not enabled.
>
> I don't like the blank line for Configured NNs. Not sure why it's not
> finding them.
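The blank list is a direct consequence of the ID mismatch above: at startup the NameNode takes each ID from dfs.ha.namenodes.mycluster and looks for a matching dfs.namenode.rpc-address.mycluster.<id> key; nn.domain and snn.domain match neither the .nn1 nor the .nn2 key, so no NameNodes are found and HA is treated as disabled. One quick way to see what the configuration actually resolves to, assuming the stock hdfs CLI on the NameNode host:

# Print the configured NameNode IDs for the nameservice
hdfs getconf -confKey dfs.ha.namenodes.mycluster
# List the NameNodes the client can resolve; should show both hosts once the IDs match
hdfs getconf -namenodes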
>
> If I try the command "hdfs zkfc -formatZK" I get the following:
> Exception in thread "main"
> org.apache.hadoop.HadoopIllegalArgumentException: HA is not enabled for
> this namenode.
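That error has the same root cause: the failover controller refuses to format ZooKeeper because the NameNode does not consider itself HA-enabled. Once dfs.ha.namenodes.mycluster lists nn1,nn2, the same command should go through; a typical sequence after the fix (the hadoop-hdfs-zkfc service name is an assumption based on the CDH-style packaging used above):

hdfs zkfc -formatZK                  # creates the HA state znode in the ZooKeeper quorum
service hadoop-hdfs-namenode start   # start the NameNode
service hadoop-hdfs-zkfc start       # start the automatic failover controller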
>
> -----Original Message-----
> From: Smith, Joshua D. [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, August 27, 2013 7:17 AM
> To: [EMAIL PROTECTED]
> Subject: RE: HDFS Startup Failure due to dfs.namenode.rpc-address and
NEW: Monitor These Apps!
elasticsearch, apache solr, apache hbase, hadoop, redis, casssandra, amazon cloudwatch, mysql, memcached, apache kafka, apache zookeeper, apache storm, ubuntu, centOS, red hat, debian, puppet labs, java, senseiDB