MapReduce >> mail # user >> Re: DataNode not starting in slave machine


Re: DataNode not starting in slave machine
Spelling of 'default' is probably the issue.
Chris
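The fallback Chris is pointing at can be sketched with plain JDK classes (Hadoop itself is not needed to run this). The key names and URIs come from the configs quoted below; the demo class itself is hypothetical. Because the misspelled key `fs.defualt.name` is simply stored and never read, a lookup of the correctly spelled `fs.default.name` falls back to its default, `file:///`, and a `file:` URI carries no host:port authority for the DataNode to connect to:

```java
import java.net.URI;
import java.util.Properties;

public class DefaultFsTypoDemo {
    public static void main(String[] args) {
        Properties conf = new Properties();
        // The misspelled key is loaded without complaint...
        conf.setProperty("fs.defualt.name", "hdfs://master:9000");
        // ...but lookups use the correct spelling and silently fall back.
        String fsName = conf.getProperty("fs.default.name", "file:///");
        System.out.println(fsName); // file:///

        // file:/// has no authority, hence
        // "Does not contain a valid host:port authority: file:///"
        URI uri = URI.create(fsName);
        System.out.println(uri.getHost()); // null
        System.out.println(uri.getPort()); // -1
    }
}
```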
On Dec 25, 2013 7:32 AM, "Vishnu Viswanath" <[EMAIL PROTECTED]> wrote:

> Hi,
>
> I am getting this error while starting the DataNode on my slave machine.
>
> I read JIRA HDFS-2515 <https://issues.apache.org/jira/browse/HDFS-2515>,
> which says this happens because Hadoop is using the wrong conf file.
>
> 13/12/24 15:57:14 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
> 13/12/24 15:57:14 INFO impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
> 13/12/24 15:57:14 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
> 13/12/24 15:57:14 INFO impl.MetricsSystemImpl: DataNode metrics system started
> 13/12/24 15:57:15 INFO impl.MetricsSourceAdapter: MBean for source ugi registered.
> 13/12/24 15:57:15 WARN impl.MetricsSystemImpl: Source name ugi already exists!
> 13/12/24 15:57:15 ERROR datanode.DataNode: java.lang.IllegalArgumentException: Does not contain a valid host:port authority: file:///
>     at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:212)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:244)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.getServiceAddress(NameNode.java:236)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:359)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:321)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1712)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1651)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1669)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1795)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1812)
>
> But how do I check which conf file Hadoop is using? Or how do I set it?
>
> These are my configurations:
>
> core-site.xml
> ------------------
> <configuration>
>     <property>
>         <name>fs.defualt.name</name>
>         <value>hdfs://master:9000</value>
>     </property>
>
>     <property>
>         <name>hadoop.tmp.dir</name>
>         <value>/home/vishnu/hadoop-tmp</value>
>     </property>
> </configuration>
>
> hdfs-site.xml
> --------------------
> <configuration>
>     <property>
>         <name>dfs.replication</name>
>         <value>2</value>
>     </property>
> </configuration>
>
> mapred-site.xml
> --------------------
> <configuration>
>     <property>
>         <name>mapred.job.tracker</name>
>         <value>master:9001</value>
>     </property>
> </configuration>
>
> Any help would be appreciated.
>
>
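If the misspelled property name is indeed the cause, as Chris suggests, then correcting core-site.xml (on every node, then restarting the daemons) would read as follows; the host, port, and tmp dir are taken verbatim from the original post:

```xml
<configuration>
    <property>
        <!-- note: "default", not "defualt" -->
        <name>fs.default.name</name>
        <value>hdfs://master:9000</value>
    </property>

    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/vishnu/hadoop-tmp</value>
    </property>
</configuration>
```

As for the question of which conf files Hadoop reads: in Hadoop 1.x the daemons load them from $HADOOP_HOME/conf unless that is overridden by the HADOOP_CONF_DIR environment variable or the --config flag passed to the hadoop/start scripts.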