HDFS >> mail # user >> Re: datanode can not start


Re: datanode can not start

<property>
        <name>dfs.datanode.address</name>
        <value>0.0.0.0:50011</value>
</property>

And try.

Thanks & Regards


Shashwat Shriparv

On Wed, Jun 26, 2013 at 3:29 PM, varun kumar <[EMAIL PROTECTED]> wrote:

> Hi Huang,
>
> Some other service is running on that port, or you did not stop the
> datanode service properly.
>
> Regards,
> Varun Kumar.P
>
>
> On Wed, Jun 26, 2013 at 3:13 PM, ch huang <[EMAIL PROTECTED]> wrote:
>
>> I have a datanode from an old cluster still running, so there is a port
>> conflict. I changed the default port; here is my hdfs-site.xml:
>>
>>
>> <configuration>
>>         <property>
>>                 <name>dfs.name.dir</name>
>>                 <value>/data/hadoopnamespace</value>
>>         </property>
>>         <property>
>>                 <name>dfs.data.dir</name>
>>                 <value>/data/hadoopdata</value>
>>         </property>
>>         <property>
>>                 <name>dfs.datanode.address</name>
>>                 <value>0.0.0.0:50011</value>
>>         </property>
>>         <property>
>>                 <name>dfs.permissions</name>
>>                 <value>false</value>
>>         </property>
>>         <property>
>>                 <name>dfs.datanode.max.xcievers</name>
>>                 <value>4096</value>
>>         </property>
>>         <property>
>>                 <name>dfs.webhdfs.enabled</name>
>>                 <value>true</value>
>>         </property>
>>         <property>
>>                 <name>dfs.http.address</name>
>>                 <value>192.168.10.22:50070</value>
>>         </property>
>> </configuration>
>>
>>
>> 2013-06-26 17:37:24,923 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
>> /************************************************************
>> STARTUP_MSG: Starting DataNode
>> STARTUP_MSG:   host = CH34/192.168.10.34
>> STARTUP_MSG:   args = []
>> STARTUP_MSG:   version = 0.20.2-cdh3u4
>> STARTUP_MSG:   build = file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4 -r
>> 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon May  7
>> 14:03:02 PDT 2012
>> ************************************************************/
>> 2013-06-26 17:37:25,335 INFO
>> org.apache.hadoop.security.UserGroupInformation: JAAS Configuration already
>> set up for Hadoop, not re-installing.
>> 2013-06-26 17:37:25,421 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Registered
>> FSDatasetStatusMBean
>> 2013-06-26 17:37:25,429 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at
>> 50011
>> 2013-06-26 17:37:25,430 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is
>> 1048576 bytes/s
>> 2013-06-26 17:37:25,470 INFO org.mortbay.log: Logging to
>> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
>> org.mortbay.log.Slf4jLog
>> 2013-06-26 17:37:25,513 INFO org.apache.hadoop.http.HttpServer: Added
>> global filtersafety
>> (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
>> 2013-06-26 17:37:25,518 INFO org.apache.hadoop.http.HttpServer: Port
>> returned by webServer.getConnectors()[0].getLocalPort() before open() is
>> -1. Opening the listener on 50075
>> 2013-06-26 17:37:25,519 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for threadgroup to
>> exit, active threads is 0
>> 2013-06-26 17:37:25,619 INFO
>> org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService: Shutting
>> down all async disk service threads...
>> 2013-06-26 17:37:25,619 INFO
>> org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService: All async
>> disk service threads have been shut down.
>> 2013-06-26 17:37:25,620 ERROR
>> org.apache.hadoop.hdfs.server.datanode.DataNode: java.net.BindException:
>> Address already in use
>>         at sun.nio.ch.Net.bind(Native Method)
>>         at
>> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124)
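For reference, the java.net.BindException in the log is not Hadoop-specific: any process that tries to bind a TCP port another process is already listening on fails the same way. A minimal, hypothetical Python sketch (not part of the original thread) reproducing the mechanism:

```python
# Reproduce "Address already in use": bind the same TCP port twice.
# Port 0 lets the OS pick a free port; it stands in for a DataNode port
# such as dfs.datanode.address's 50011 from the config above.
import socket

first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("0.0.0.0", 0))
first.listen(1)
port = first.getsockname()[1]

second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(("0.0.0.0", port))  # same port -> same failure as the DataNode
except OSError as e:
    print("bind failed:", e.strerror)  # e.g. "Address already in use"
finally:
    second.close()
    first.close()
```

Note that in the log the streaming server did open successfully on the new port 50011; the failure occurs right after the HTTP server begins opening the listener on 50075, so it may be worth checking that the web UI port (dfs.datanode.http.address) is not still held by the old datanode as well.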