HDFS >> mail # user >> Re: hadoop 2.0.5 datanode heartbeat issue


Re: hadoop 2.0.5 datanode heartbeat issue
Hi,

You may have had some problem during HDFS start-up which caused this issue.

Thanks,
Jitendra

On 8/31/13, orahad bigdata <[EMAIL PROTECTED]> wrote:
> Thanks Jitendra,
>
> I restarted my DataNode and suddenly it works for me :) now it's
> connected to both NNs.
>
> Do you know why this issue occurred?
>
> Thanks
>
>
>
> On Sat, Aug 31, 2013 at 1:24 AM, Jitendra Yadav
> <[EMAIL PROTECTED]>wrote:
>
>> Hi,
>>
>> Your conf looks fine, but I would say you should restart
>> your DN once and check your NN web UI.
>>
>> Regards
>> Jitendra
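The restart Jitendra suggests could be sketched roughly as below, assuming a standard tarball install where `hadoop-daemon.sh` is on `PATH` (adjust for your layout); the web UI hosts/ports come from the configs quoted later in this thread:

```shell
# Sketch: restart the DataNode so it re-registers with both NameNodes.
# Guarded so it degrades gracefully on a machine without the Hadoop scripts.
restart_datanode() {
  if command -v hadoop-daemon.sh >/dev/null 2>&1; then
    hadoop-daemon.sh stop datanode
    hadoop-daemon.sh start datanode
  else
    echo "hadoop-daemon.sh not on PATH; run this on the DataNode host"
  fi
}
restart_datanode
# Afterwards, check both NN web UIs (clone1:50070 and clone2:50070 in this
# thread) and confirm the DN appears as live on the active NameNode.
```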
>>
>> On 8/31/13, orahad bigdata <[EMAIL PROTECTED]> wrote:
>> > here is my conf files.
>> >
>> > -----------core-site.xml-----------
>> > <configuration>
>> > <property>
>> >   <name>fs.defaultFS</name>
>> >   <value>hdfs://orahadoop</value>
>> > </property>
>> > <property>
>> >   <name>dfs.journalnode.edits.dir</name>
>> >   <value>/u0/journal/node/local/data</value>
>> > </property>
>> > </configuration>
>> >
>> > ------------ hdfs-site.xml-------------
>> > <configuration>
>> > <property>
>> >   <name>dfs.nameservices</name>
>> >   <value>orahadoop</value>
>> > </property>
>> > <property>
>> >   <name>dfs.ha.namenodes.orahadoop</name>
>> >   <value>node1,node2</value>
>> > </property>
>> > <property>
>> >   <name>dfs.namenode.rpc-address.orahadoop.node1</name>
>> >   <value>clone1:8020</value>
>> > </property>
>> > <property>
>> >   <name>dfs.namenode.rpc-address.orahadoop.node2</name>
>> >   <value>clone2:8020</value>
>> > </property>
>> > <property>
>> >   <name>dfs.namenode.http-address.orahadoop.node1</name>
>> >   <value>clone1:50070</value>
>> > </property>
>> > <property>
>> >   <name>dfs.namenode.http-address.orahadoop.node2</name>
>> >   <value>clone2:50070</value>
>> > </property>
>> > <property>
>> >   <name>dfs.namenode.shared.edits.dir</name>
>> >
>> > <value>qjournal://clone3:8485;clone1:8485;clone2:8485/orahadoop</value>
>> > </property>
>> > <property>
>> >   <name>dfs.client.failover.proxy.provider.orahadoop</name>
>> >
>> >
>> <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
>> > </property>
>> > </configuration>
>> >
>> > --------- mapred-site.xml -------------
>> >
>> > <configuration>
>> > <property>
>> >     <name>mapreduce.framework.name</name>
>> >     <value>classic</value>
>> >   </property>
>> > </configuration>
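One quick sanity check on the configs above: the authority in `fs.defaultFS` has to match `dfs.nameservices` exactly, or clients try to resolve the nameservice name as a hostname. A minimal sketch using the posted values:

```shell
# Sketch: verify that fs.defaultFS points at the configured HA nameservice.
fs_default="hdfs://orahadoop"   # value from core-site.xml above
nameservice="orahadoop"         # value from hdfs-site.xml above
if [ "${fs_default#hdfs://}" = "$nameservice" ]; then
  echo "nameservice consistent"
else
  echo "mismatch: ${fs_default#hdfs://} vs $nameservice"
fi
```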
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> > On Sat, Aug 31, 2013 at 12:21 AM, Jing Zhao <[EMAIL PROTECTED]>
>> wrote:
>> >
>> >> Another possibility I can imagine is that the old configuration
>> >> property "fs.default.name" is still in your configuration with a
>> >> single NN's host:port as its value. In that case this bad value may
>> >> override the value of fs.defaultFS.
>> >>
>> >> It may be helpful if you can post your configurations.
>> >>
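The deprecated-key pitfall Jing describes can be checked mechanically. This sketch greps a core-site.xml for the old `fs.default.name` key; a sample file is generated here purely for illustration, so point `conf_file` at the real `$HADOOP_CONF_DIR/core-site.xml` on the DataNode instead:

```shell
# Sketch: detect the deprecated fs.default.name key, which (if present with a
# single NN's address) can shadow the HA fs.defaultFS nameservice URI.
conf_file=$(mktemp)
cat > "$conf_file" <<'EOF'
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://clone1:8020</value>
  </property>
</configuration>
EOF
if grep -q '<name>fs.default.name</name>' "$conf_file"; then
  deprecated_key=present
  echo "deprecated fs.default.name found - remove it so fs.defaultFS takes effect"
else
  deprecated_key=absent
fi
rm -f "$conf_file"
```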
>> >> On Fri, Aug 30, 2013 at 11:32 AM, orahad bigdata <[EMAIL PROTECTED]>
>> >> wrote:
>> >> > Thanks Jing,
>> >> >
>> >> > I'm using the same configuration files on the DataNode side.
>> >> >
>> >> > dfs.nameservices -> orahadoop (hdfs-site.xml)
>> >> >
>> >> > fs.defaultFS -> hdfs://orahadoop (core-site.xml)
>> >> >
>> >> > Thanks
>> >> > On 8/30/13, Jing Zhao <[EMAIL PROTECTED]> wrote:
>> >> >> You may need to make sure the configuration of your DN has also
>> >> >> been
>> >> >> updated for HA. If your DN's configuration still uses the old URL
>> >> >> (e.g., one of your NN's host:port) for "fs.defaultFS", the DN will
>> >> >> only connect to that NN.
>> >> >>
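A quick way to see which `fs.defaultFS` the DataNode actually resolves is `hdfs getconf`; a sketch, guarded so it degrades gracefully off-cluster:

```shell
# Sketch: print the effective fs.defaultFS (run on the DataNode host).
# With HA it should be the nameservice URI (hdfs://orahadoop in this thread),
# not a single NameNode's host:port.
show_defaultfs() {
  if command -v hdfs >/dev/null 2>&1; then
    hdfs getconf -confKey fs.defaultFS
  else
    echo "hdfs CLI not found; run this on the DataNode host"
  fi
}
show_defaultfs
```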
>> >> >> On Fri, Aug 30, 2013 at 10:56 AM, orahad bigdata <
>> [EMAIL PROTECTED]>
>> >> >> wrote:
>> >> >>> Hi All,
>> >> >>>
>> >> >>> I'm using Hadoop 2.0.5 HA with QJM. After starting the cluster I
>> >> >>> did some manual switchovers between the NNs. Then I opened the web
>> >> >>> UI for both NNs and saw a strange situation: my DN is connected to
>> >> >>> the standby NN but is not sending heartbeats to the active
>> >> >>> NameNode.
>> >> >>>
>> >> >>> Please guide.
>> >> >>>
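The symptom described above can be narrowed down with the HA admin tools. A sketch using the NN IDs from this thread (node1/node2), guarded for machines without the `hdfs` CLI:

```shell
# Sketch: confirm which NameNode is active and whether the DataNode is
# reporting in. Run on a cluster node with the thread's configs in place.
ha_status() {
  if command -v hdfs >/dev/null 2>&1; then
    hdfs haadmin -getServiceState node1   # prints "active" or "standby"
    hdfs haadmin -getServiceState node2
    hdfs dfsadmin -report                 # DNs should be listed as live
  else
    echo "hdfs CLI not found; run this on a cluster node"
  fi
}
ha_status
```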
>