Hadoop >> mail # user >> HELP - Problem in setting up Hadoop - Multi-Node Cluster


Guruprasad B 2012-02-08, 12:06
Robin Mueller-Bady 2012-02-08, 19:36
hadoop hive 2012-02-09, 10:11

Re: HELP - Problem in setting up Hadoop - Multi-Node Cluster
Please use the latest JDK 6.

best,
 Alex

--
Alexander Lorenz
http://mapredit.blogspot.com

On Feb 9, 2012, at 11:11 AM, hadoop hive wrote:

> Did you check SSH to localhost? It should be passwordless SSH between the hosts:
>
> public key appended to authorized_keys
>
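The passwordless-SSH step suggested above can be sketched as follows. Paths here are illustrative (a throwaway directory so the sketch is self-contained); on a real cluster this would be `~/.ssh` for the hduser account on the master, with the master's public key also copied to each slave, e.g. via `ssh-copy-id`:

```shell
# Sketch: generate a passwordless key pair and authorize it for the same user.
SSH_DIR="$(mktemp -d)"   # stands in for ~/.ssh in this sketch

# 1. Create an RSA key with an empty passphrase (-N "") so ssh never prompts.
ssh-keygen -q -t rsa -N "" -f "$SSH_DIR/id_rsa"

# 2. Append the public key to authorized_keys (the "public key ->
#    authorized_keys" step above) and lock down permissions,
#    otherwise sshd will refuse to use the file.
cat "$SSH_DIR/id_rsa.pub" >> "$SSH_DIR/authorized_keys"
chmod 700 "$SSH_DIR"
chmod 600 "$SSH_DIR/authorized_keys"

# 3. On a live host you would now verify with `ssh localhost` (and
#    `ssh slave` from the master) and expect no password prompt.
```

The same key pair must work from the master to every slave listed in conf/slaves, since start-dfs.sh launches the remote daemons over SSH.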
> On Thu, Feb 9, 2012 at 1:06 AM, Robin Mueller-Bady <[EMAIL PROTECTED]> wrote:
> Dear Guruprasad,
>
> it would be very helpful if you provided details from your configuration files as well as more details on your setup.
> It seems that the connection from slave to master cannot be established ("Connection reset by peer").
> Do you use a virtual environment, physical master/slaves, or all on one machine?
> Please also paste the "kinigul2" namenode logs.
>
> Regards,
>
> Robin
>
>
> On 02/08/12 13:06, Guruprasad B wrote:
>> Hi,
>>
>> I am Guruprasad from Bangalore (India). I need help in setting up the
>> Hadoop platform; I am very new to it.
>>
>> I followed the article below and was able to set up a single-node cluster:
>>
>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/#what-we-want-to-do
>>
>> Now I am trying to set up a multi-node cluster by following this article:
>>
>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/
>>
>>
>> My setup:
>> Hadoop: hadoop-0.20.2
>> Linux: Ubuntu Linux 10.10
>> Java: java-7-oracle
>>
>>
>> I successfully got as far as the section "Starting the multi-node
>> cluster" in that article.
>> When I start the HDFS/MapReduce daemons, they start and then go down
>> immediately, on both master and slave;
>> please have a look at the logs below:
>>
>> hduser@kinigul2:/usr/local/hadoop$ bin/start-dfs.sh
>> starting namenode, logging to
>> /usr/local/hadoop/bin/../logs/hadoop-hduser-namenode-kinigul2.out
>> master: starting datanode, logging to
>> /usr/local/hadoop/bin/../logs/hadoop-hduser-datanode-kinigul2.out
>> slave: starting datanode, logging to
>> /usr/local/hadoop/bin/../logs/hadoop-hduser-datanode-guruL.out
>> master: starting secondarynamenode, logging to
>> /usr/local/hadoop/bin/../logs/hadoop-hduser-secondarynamenode-kinigul2.out
>>
>> hduser@kinigul2:/usr/local/hadoop$ jps
>> 6098 DataNode
>> 6328 Jps
>> 5914 NameNode
>> 6276 SecondaryNameNode
>>
>> hduser@kinigul2:/usr/local/hadoop$ jps
>> 6350 Jps
>>
>>
>> I am getting the following error in the slave logs:
>>
>> 2012-02-08 21:04:01,641 ERROR
>> org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Call
>> to master/16.150.98.62:54310 failed on local exception:
>> java.io.IOException: Connection reset by peer
>>     at org.apache.hadoop.ipc.Client.wrapException(Client.java:775)
>>     at org.apache.hadoop.ipc.Client.call(Client.java:743)
>>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>>     at $Proxy4.getProtocolVersion(Unknown Source)
>>     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
>>     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:346)
>>     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:383)
>>     at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:314)
>>     at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:291)
>>     at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:269)
>>     at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:216)
>>     at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1283)
>>     at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1238)
>>     at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1246)
>>     at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1368)
>> Caused by: java.io.IOException: Connection reset by peer
>>     at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
>>     at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
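The stack trace above shows the datanode's RPC call to master/16.150.98.62:54310 being reset. In the Michael Noll tutorial being followed, every node's conf/core-site.xml points fs.default.name at the master on that port; a minimal sketch of that file (hostname and port taken from this thread; "master" must resolve, e.g. via an /etc/hosts entry, to the same address on the master and on every slave):

```xml
<!-- conf/core-site.xml on the master AND each slave (Hadoop 0.20.x).
     "master" should resolve to 16.150.98.62 on every node, or the
     datanodes cannot reach the namenode's RPC port 54310. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:54310</value>
  </property>
</configuration>
```

If the namenode binds to 127.0.0.1 (a common effect of an Ubuntu 127.0.1.1 hosts entry for the machine's own name) while the slaves connect to the external IP, connections are dropped exactly like this, so checking `netstat -tlnp | grep 54310` on the master is a reasonable first step.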
15club.cn 2012-02-09, 13:51
Anil Gupta 2012-02-09, 19:42
anil gupta 2012-02-09, 19:45
Guruprasad B 2012-02-10, 09:43
anil gupta 2012-02-11, 01:31