Re: run hadoop pseudo-distributed examples failed
On 05/18/2011 10:53 PM, liyun2010 wrote:
> Hi All,
> I'm trying to run the Hadoop (0.20.2) examples in pseudo-distributed mode
> following the Hadoop user guide. After I run 'start-all.sh', it seems
> the namenode can't connect to the datanode.
> 'ssh localhost' works fine on my server. Someone advised removing
> '/tmp/hadoop-XXXX' and formatting the namenode again, but that didn't
> work. And 'iptables -L' shows there are no firewall rules on my server:
>
>     test:/home/liyun2010# iptables -L
>     Chain INPUT (policy ACCEPT)
>     target prot opt source destination
>     Chain FORWARD (policy ACCEPT)
>     target prot opt source destination
>     Chain OUTPUT (policy ACCEPT)
>     target prot opt source destination
>
> Can anyone give me more advice? Thanks!
> Below are my namenode and datanode log files:
> liyun2010@test:~/hadoop-0.20.2/logs$ cat hadoop-liyun2010-namenode-test.puppet.com.log
>
>     2011-05-19 10:58:25,938 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
>     /************************************************************
>     STARTUP_MSG: Starting NameNode
>     STARTUP_MSG: host = test.puppet.com/127.0.0.1
>     STARTUP_MSG: args = []
>     STARTUP_MSG: version = 0.20.2
>     STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
>     ************************************************************/
>     2011-05-19 10:58:26,197 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=NameNode, port=9000
>     2011-05-19 10:58:26,212 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: test.puppet.com/127.0.0.1:9000
>     2011-05-19 10:58:26,220 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
>     2011-05-19 10:58:26,224 INFO org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
>     2011-05-19 10:58:26,405 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=liyun2010,users
>     2011-05-19 10:58:26,406 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
>     2011-05-19 10:58:26,406 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
>     2011-05-19 10:58:26,429 INFO org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
>     2011-05-19 10:58:26,434 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStatusMBean
>     2011-05-19 10:58:26,511 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 9
>     2011-05-19 10:58:26,524 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 1
>     2011-05-19 10:58:26,530 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 920 loaded in 0 seconds.
>     2011-05-19 10:58:26,606 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Invalid opcode, reached end of edit log Number of transactions found 99
>     2011-05-19 10:58:26,606 INFO org.apache.hadoop.hdfs.server.common.Storage: Edits file /tmp/hadoop-liyun2010/dfs/name/current/edits of size 1049092 edits # 99 loaded in 0 seconds.
>     2011-05-19 10:58:26,660 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 920 saved in 0 seconds.
>     2011-05-19 10:58:26,810 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 505 msecs
>     2011-05-19 10:58:26,825 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number ...
Why don't you change the dfs dir from /tmp to another directory, for
example /usr/share/hadoop/dfs?
Can you attach your configuration files so we can inspect them?
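
In 0.20.x that means setting dfs.name.dir and dfs.data.dir in
conf/hdfs-site.xml. A minimal sketch, assuming the /usr/share/hadoop/dfs
example above (any writable directory outside /tmp works the same way):

    <?xml version="1.0"?>
    <configuration>
      <!-- where the namenode keeps fsimage and the edit log -->
      <property>
        <name>dfs.name.dir</name>
        <value>/usr/share/hadoop/dfs/name</value>
      </property>
      <!-- where the datanode stores its blocks -->
      <property>
        <name>dfs.data.dir</name>
        <value>/usr/share/hadoop/dfs/data</value>
      </property>
    </configuration>

After changing the directories, stop the daemons, re-format, and start again:

    $ bin/stop-all.sh
    $ bin/hadoop namenode -format
    $ bin/start-all.sh

Keep in mind that re-formatting erases anything already stored in HDFS, which
is usually acceptable for a fresh pseudo-distributed test setup.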

Regards

Marcos Luís Ortíz Valmaseda
 Software Engineer (Large-Scaled Distributed Systems)
 University of Information Sciences,
 La Habana, Cuba
 Linux User # 418229
 http://about.me/marcosortiz