Thread: Re: Error putting files in the HDFS

Jitendra Yadav 2013-10-08, 17:55
Basu,Indrashish 2013-10-08, 18:01
Mohammad Tariq 2013-10-08, 18:21
Basu,Indrashish 2013-10-08, 18:29
Jitendra Yadav 2013-10-08, 18:26
Basu,Indrashish 2013-10-08, 20:46
Re: Error putting files in the HDFS
Hi Indrashish,

Can you please check whether your DN is reachable from the NN, and also
whether the NN's IP is given in the DN's hdfs-site.xml? Because if the DN is
up and running, the issue is that the DN is not able to connect to the NN to
register itself.
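For example, something like this run from the DN box can rule out basic
connectivity problems (namenode-host and 9000 below are placeholders, use
your NN's actual hostname and RPC port):

    # run on the DataNode machine; namenode-host / 9000 are placeholders
    ping namenode-host
    telnet namenode-host 9000    # NN RPC port, commonly 9000 or 8020

And the NN address in the DN's config should point at the NN's real IP, not
localhost (on 0.20.x this is usually fs.default.name in core-site.xml):

    <property>
      <name>fs.default.name</name>
      <value>hdfs://namenode-host:9000</value>
    </property>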

You can add the DN to the include file (dfs.hosts) as well.
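If you use an include file, a minimal setup looks roughly like this (the
/path/to/includes file path is a placeholder):

    <!-- on the NN; the file path is a placeholder -->
    <property>
      <name>dfs.hosts</name>
      <value>/path/to/includes</value>
    </property>

    # /path/to/includes is a plain text file, one DN hostname per line:
    tegra-ubuntu

After editing the include file, hadoop dfsadmin -refreshNodes (or a NN
restart) makes the NN pick it up.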

thanks
Vikas Srivastava
On Tue, Oct 8, 2013 at 1:46 PM, Basu,Indrashish <[EMAIL PROTECTED]> wrote:

>
> Hi Tariq,
>
> Thanks for your help again.
>
> I tried deleting the old HDFS files and directories as you suggested, and
> then reformatted and restarted all the nodes. However, after running the
> dfsadmin report I again see that no datanode is listed.
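> For reference, the steps I ran were roughly the following (the
> /app/hadoop/tmp path is the one from my setup, adjust as needed):
>
>     # stop the cluster, clear the old HDFS data, reformat, restart
>     bin/stop-all.sh
>     rm -rf /app/hadoop/tmp/dfs
>     bin/hadoop namenode -format
>     bin/start-all.sh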
>
>
>
> root@tegra-ubuntu:~/hadoop-gpu-master/hadoop-gpu-0.20.1# bin/hadoop
> dfsadmin -report
> Configured Capacity: 0 (0 KB)
> Present Capacity: 0 (0 KB)
> DFS Remaining: 0 (0 KB)
> DFS Used: 0 (0 KB)
> DFS Used%: �%
> Under replicated blocks: 0
> Blocks with corrupt replicas: 0
> Missing blocks: 0
>
> -------------------------------------------------
> Datanodes available: 0 (0 total, 0 dead)
>
>
>
> However, when I typed jps, it showed that the datanode is up and running.
> Below are the datanode logs generated for the given timestamp. Can you
> kindly assist with this?
>
>
>
> 2013-10-08 13:35:55,680 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Storage directory /app/hadoop/tmp/dfs/data is not formatted.
> 2013-10-08 13:35:55,680 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Formatting ...
> 2013-10-08 13:35:55,814 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Registered
> FSDatasetStatusMBean
> 2013-10-08 13:35:55,820 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at 50010
> 2013-10-08 13:35:55,828 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is
> 1048576 bytes/s
> 2013-10-08 13:35:56,153 INFO org.mortbay.log: Logging to
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> 2013-10-08 13:35:56,497 INFO org.apache.hadoop.http.HttpServer: Port
> returned by webServer.getConnectors()[0].getLocalPort() before open() is
> -1. Opening the listener on 50075
> 2013-10-08 13:35:56,498 INFO org.apache.hadoop.http.HttpServer:
> listener.getLocalPort() returned 50075
> webServer.getConnectors()[0].getLocalPort() returned 50075
> 2013-10-08 13:35:56,513 INFO org.apache.hadoop.http.HttpServer: Jetty
> bound to port 50075
> 2013-10-08 13:35:56,514 INFO org.mortbay.log: jetty-6.1.14
> 2013-10-08 13:40:45,127 INFO org.mortbay.log: Started
> SelectChannelConnector@0.0.0.0:50075
> 2013-10-08 13:40:45,139 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
> Initializing JVM Metrics with processName=DataNode, sessionId=null
> 2013-10-08 13:40:45,189 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
> Initializing RPC Metrics with hostName=DataNode, port=50020
> 2013-10-08 13:40:45,198 INFO org.apache.hadoop.ipc.Server: IPC Server
> Responder: starting
> 2013-10-08 13:40:45,201 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 0 on 50020: starting
> 2013-10-08 13:40:45,201 INFO org.apache.hadoop.ipc.Server: IPC Server
> listener on 50020: starting
> 2013-10-08 13:40:45,202 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: dnRegistration =
> DatanodeRegistration(tegra-ubuntu:50010, storageID=, infoPort=50075,
> ipcPort=50020)
> 2013-10-08 13:40:45,206 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 2 on 50020: starting
> 2013-10-08 13:40:45,207 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 1 on 50020: starting
> 2013-10-08 13:40:45,234 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: New storage id
> DS-863644283-127.0.1.1-50010-1381264845208 is assigned to data-node
> 10.227.56.195:50010
> 2013-10-08 13:40:45,235 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(
> 10.227.56.195:50010,
> storageID=DS-863644283-127.0.1.1-50010-1381264845208, infoPort=50075,
> ipcPort=50020)In DataNode.run, data = FSDataset{
> dirpath='/app/hadoop/tmp/dfs/data/current'}