Re: run hadoop pseudo-distribute examples failed
On 05/19/2011 10:35 PM, liyun2010 wrote:
> Hi Marcos,
> Thanks for your reply.
> The temporary directory '/tmp/hadoop-xxx' is defined in the hadoop core
> jar's configuration file "*core-default.xml*". Do you think this may
> cause the failure? Below is the detailed config:
>
>     <property>
>     <name>hadoop.tmp.dir</name>
>     <value>/tmp/hadoop-${user.name}</value>
>     <description>A base for other temporary directories.</description>
>     </property>
>
> And which other config files do you need? I didn't modify
> any configuration after downloading the hadoop-0.20.2 files, so I think
> those settings are all the default values.
Yes, those are the default values, but I think you should test with
another directory, because /tmp is a temporary location and its contents
can be erased easily.
For example, when you use CDH3, the default value there is
/var/lib/hadoop-0.20.2/cache/${user.name}, which is more convenient.
Of course, this is only a recommendation.
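For instance, a minimal override in conf/core-site.xml might look like the
sketch below (the /var/lib/hadoop path is only an illustration; use any
directory that the user running Hadoop can write to):

    <!-- conf/core-site.xml: move hadoop.tmp.dir out of /tmp -->
    <property>
    <name>hadoop.tmp.dir</name>
    <value>/var/lib/hadoop/cache/${user.name}</value>
    <description>A base for other temporary directories.</description>
    </property>

Settings in core-site.xml take precedence over the defaults baked into
core-default.xml, so you don't need to touch the jar at all.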
You can also look at Lars Francke's blog (http://blog.lars-francke.de/),
where he did excellent work explaining the manual installation of a
Hadoop cluster.
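If you do move hadoop.tmp.dir on your pseudo-distributed setup, remember
that the namenode must be formatted again (as was already suggested to
you). A rough sequence, assuming the hadoop-0.20.2 bin scripts and the
example path above; note that formatting erases anything already stored
in HDFS:

    # stop all daemons before changing the storage location
    bin/stop-all.sh
    # create the new base directory, writable by your user
    mkdir -p /var/lib/hadoop/cache
    # format the namenode so it initializes the new directory
    bin/hadoop namenode -format
    # restart and check that both NameNode and DataNode stay up
    bin/start-all.sh
    jps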

Regards

> 2011-05-20
> ------------------------------------------------------------------------
> liyun2010
> ------------------------------------------------------------------------
> *From:* Marcos Ortiz
> *Sent:* 2011-05-19 20:40:06
> *To:* mapreduce-user
> *Cc:* liyun2010
> *Subject:* Re: run hadoop pseudo-distribute examples failed
> On 05/18/2011 10:53 PM, liyun2010 wrote:
>> Hi All,
>> I'm trying to run hadoop (0.20.2) examples in Pseudo-Distributed Mode
>> following the hadoop user guide. After I run 'start-all.sh', it
>> seems the namenode can't connect to the datanode.
>> 'ssh localhost' works on my server. Someone advised removing
>> '/tmp/hadoop-XXXX' and formatting the namenode again, but it didn't work.
>> And 'iptables -L' shows there are no firewall rules on my server:
>>
>>     test:/home/liyun2010# iptables -L
>>     Chain INPUT (policy ACCEPT)
>>     target prot opt source destination
>>     Chain FORWARD (policy ACCEPT)
>>     target prot opt source destination
>>     Chain OUTPUT (policy ACCEPT)
>>     target prot opt source destination
>>
>> Can anyone give me more advice? Thanks!
>> Below are my namenode and datanode log files:
>> liyun2010@test:~/hadoop-0.20.2/logs$ cat hadoop-liyun2010-namenode-test.puppet.com.log
>>
>>     2011-05-19 10:58:25,938 INFO
>>     org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
>>     /************************************************************
>>     STARTUP_MSG: Starting NameNode
>>     STARTUP_MSG: host = test.puppet.com/127.0.0.1
>>     STARTUP_MSG: args = []
>>     STARTUP_MSG: version = 0.20.2
>>     STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20
>>     -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
>>     ************************************************************/
>>     2011-05-19 10:58:26,197 INFO
>>     org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC
>>     Metrics with hostName=NameNode, port=9000
>>     2011-05-19 10:58:26,212 INFO
>>     org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at:
>>     test.puppet.com/127.0.0.1:9000
>>     2011-05-19 10:58:26,220 INFO
>>     org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM
>>     Metrics with processName=NameNode, sessionId=null
>>     2011-05-19 10:58:26,224 INFO
>>     org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics:
>>     Initializing NameNodeMeterics using context
>>     object:org.apache.hadoop.metrics.spi.NullContext
>>     2011-05-19 10:58:26,405 INFO
>>     org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>     fsOwner=liyun2010,users
>>     2011-05-19 10:58:26,406 INFO
>>     org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>     supergroup=supergroup
>>     2011-05-19 10:58:26,406 INFO
>>     org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
Marcos Luís Ortíz Valmaseda
 Software Engineer (Large-Scaled Distributed Systems)
 University of Information Sciences,
 La Habana, Cuba
 Linux User # 418229
 http://about.me/marcosortiz