Re: Query about "hadoop dfs -cat" in hadoop-0.20.2
On 06/17/2011 09:51 AM, Lemon Cheng wrote:
> Hi,
>
> Thanks for your reply.
> I am not sure about that. How can I verify it?
What are your dfs.tmp.dir and dfs.data.dir values?
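If they still point at the defaults under /tmp (the log below suggests they do, given dirpath='/tmp/hadoop-appuser/dfs/data/current'), the blocks can vanish whenever /tmp is cleaned. A minimal sketch of what conf/hdfs-site.xml could look like with persistent locations; the /home/appuser paths are only examples, adjust for your machine:

  <configuration>
    <!-- NameNode metadata location (example path) -->
    <property>
      <name>dfs.name.dir</name>
      <value>/home/appuser/hdfs/name</value>
    </property>
    <!-- DataNode block storage location (example path) -->
    <property>
      <name>dfs.data.dir</name>
      <value>/home/appuser/hdfs/data</value>
    </property>
  </configuration>

Restart the daemons after changing these.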

You can check the DataNodes' health with:

  bin/slaves.sh jps | grep DataNode | sort
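With all DataNodes up you should see one jps line per slave, roughly like this (hostnames and PIDs are made up):

  slave1: 2345 DataNode
  slave2: 2389 DataNode

Any host missing from that list has a dead DataNode process, and its log is the first place to look.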

What is the output of bin/hadoop dfsadmin -report?
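The lines worth checking there are the datanode count and the capacity figures, along these lines (numbers purely illustrative):

  $ bin/hadoop dfsadmin -report
  Configured Capacity: 52428800000 (48.83 GB)
  DFS Used: 24576 (24 KB)
  Datanodes available: 1 (1 total, 0 dead)

If it reports 0 datanodes or zero configured capacity, the NameNode is not actually talking to your DataNode, even if the process shows up in jps.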

One recommendation I can give you is to run at least one NameNode and two DataNodes.

Regards,
>
> I checked localhost:50070; it shows 1 live node and 0 dead nodes.
> And  the log "hadoop-appuser-datanode-localhost.localdomain.log" shows:
> ************************************************************/
> 2011-06-17 19:59:38,658 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting DataNode
> STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 0.20.2
> STARTUP_MSG:   build =
> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r
> 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
> ************************************************************/
> 2011-06-17 19:59:46,738 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Registered
> FSDatasetStatusMBean
> 2011-06-17 19:59:46,749 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at
> 50010
> 2011-06-17 19:59:46,752 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is
> 1048576 bytes/s
> 2011-06-17 19:59:46,812 INFO org.mortbay.log: Logging to
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> 2011-06-17 19:59:46,870 INFO org.apache.hadoop.http.HttpServer: Port
> returned by webServer.getConnectors()[0].getLocalPort() before open()
> is -1. Opening the listener on 50075
> 2011-06-17 19:59:46,871 INFO org.apache.hadoop.http.HttpServer:
> listener.getLocalPort() returned 50075
> webServer.getConnectors()[0].getLocalPort() returned 50075
> 2011-06-17 19:59:46,871 INFO org.apache.hadoop.http.HttpServer: Jetty
> bound to port 50075
> 2011-06-17 19:59:46,875 INFO org.mortbay.log: jetty-6.1.14
> 2011-06-17 20:01:45,702 INFO org.mortbay.log: Started
> SelectChannelConnector@0.0.0.0:50075
> 2011-06-17 20:01:45,709 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
> Initializing JVM Metrics with processName=DataNode, sessionId=null
> 2011-06-17 20:01:45,743 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
> Initializing RPC Metrics with hostName=DataNode, port=50020
> 2011-06-17 20:01:45,751 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: dnRegistration =
> DatanodeRegistration(localhost.localdomain:50010,
> storageID=DS-993704729-127.0.0.1-50010-1308296320968, infoPort=50075,
> ipcPort=50020)
> 2011-06-17 20:01:45,751 INFO org.apache.hadoop.ipc.Server: IPC Server
> listener on 50020: starting
> 2011-06-17 20:01:45,753 INFO org.apache.hadoop.ipc.Server: IPC Server
> Responder: starting
> 2011-06-17 20:01:45,754 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 2 on 50020: starting
> 2011-06-17 20:01:45,754 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 0 on 50020: starting
> 2011-06-17 20:01:45,754 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 1 on 50020: starting
> 2011-06-17 20:01:45,795 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode:
> DatanodeRegistration(127.0.0.1:50010,
> storageID=DS-993704729-127.0.0.1-50010-1308296320968, infoPort=50075,
> ipcPort=50020)In DataNode.run, data =
> FSDataset{dirpath='/tmp/hadoop-appuser/dfs/data/current'}
> 2011-06-17 20:01:45,799 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: using
> BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
> 2011-06-17 20:01:45,828 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0
> blocks got processed in 11 msecs
> 2011-06-17 20:01:45,833 INFO
Marcos Luís Ortíz Valmaseda
  Software Engineer (UCI)
  http://marcosluis2186.posterous.com
  http://twitter.com/marcosluis2186