Re: Error putting files in the HDFS
Mohammad Tariq 2013-10-08, 18:21
You don't have any space left in your HDFS. Delete some old data or
add additional storage.
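
For example, something along these lines should help (the HDFS path to
delete is only a placeholder; /app/hadoop/tmp is the data directory that
appears in your logs below):

    # Summarize cluster capacity and per-datanode usage
    hadoop dfsadmin -report

    # See how much space each top-level HDFS directory consumes
    hadoop fs -du /

    # Remove HDFS data you no longer need (path is just an example)
    hadoop fs -rmr /user/hduser/old-job-output

    # "Non DFS Used" counts local files on the same partition as the
    # datanode's data directory, so check the local disk as well
    df -h /app/hadoop/tmp
    du -sh /app/hadoop/tmp/*

Note that if trash is enabled (fs.trash.interval > 0), deleted files are
first moved to the user's .Trash directory, so the space is only reclaimed
after the trash is expunged (hadoop fs -expunge).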

Warm Regards,
Tariq
cloudfront.blogspot.com
On Tue, Oct 8, 2013 at 11:47 PM, Basu,Indrashish <[EMAIL PROTECTED]> wrote:

>
>
> Hi,
>
> Just to update on this: I have deleted all the old logs and files from the
> /tmp and /app/hadoop directories and restarted all the nodes. I now have 1
> datanode available, as per the information below:
>
> Configured Capacity: 3665985536 (3.41 GB)
> Present Capacity: 24576 (24 KB)
>
> DFS Remaining: 0 (0 KB)
> DFS Used: 24576 (24 KB)
> DFS Used%: 100%
>
> Under replicated blocks: 0
> Blocks with corrupt replicas: 0
> Missing blocks: 0
>
> ---------------------------------------------------
> Datanodes available: 1 (1 total, 0 dead)
>
> Name: 10.227.56.195:50010
> Decommission Status : Normal
> Configured Capacity: 3665985536 (3.41 GB)
> DFS Used: 24576 (24 KB)
> Non DFS Used: 3665960960 (3.41 GB)
> DFS Remaining: 0(0 KB)
> DFS Used%: 0%
> DFS Remaining%: 0%
> Last contact: Tue Oct 08 11:12:19 PDT 2013
>
>
> However, when I tried putting the files back into HDFS, I got the same
> error as stated earlier. Do I need to clear some space for HDFS?
>
> Regards,
> Indrashish
>
>
>
> On Tue, 08 Oct 2013 14:01:19 -0400, Basu,Indrashish wrote:
>
>> Hi Jitendra,
>>
>> This is what I am getting in the datanode logs:
>>
>> 2013-10-07 11:27:41,960 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Storage directory
>> /app/hadoop/tmp/dfs/data is not formatted.
>> 2013-10-07 11:27:41,961 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Formatting ...
>> 2013-10-07 11:27:42,094 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Registered
>> FSDatasetStatusMBean
>> 2013-10-07 11:27:42,099 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at
>> 50010
>> 2013-10-07 11:27:42,107 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is
>> 1048576 bytes/s
>> 2013-10-07 11:27:42,369 INFO org.mortbay.log: Logging to
>> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
>> org.mortbay.log.Slf4jLog
>> 2013-10-07 11:27:42,632 INFO org.apache.hadoop.http.HttpServer: Port
>> returned by webServer.getConnectors()[0].getLocalPort() before open()
>> is -1. Opening the listener on 50075
>> 2013-10-07 11:27:42,633 INFO org.apache.hadoop.http.HttpServer:
>> listener.getLocalPort() returned 50075
>> webServer.getConnectors()[0].getLocalPort() returned 50075
>> 2013-10-07 11:27:42,634 INFO org.apache.hadoop.http.HttpServer: Jetty
>> bound to port 50075
>> 2013-10-07 11:27:42,634 INFO org.mortbay.log: jetty-6.1.14
>> 2013-10-07 11:31:29,821 INFO org.mortbay.log: Started
>> SelectChannelConnector@0.0.0.0:50075
>> 2013-10-07 11:31:29,843 INFO
>> org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics
>> with processName=DataNode, sessionId=null
>> 2013-10-07 11:31:29,912 INFO
>> org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics
>> with hostName=DataNode, port=50020
>> 2013-10-07 11:31:29,922 INFO org.apache.hadoop.ipc.Server: IPC Server
>> Responder: starting
>> 2013-10-07 11:31:29,922 INFO org.apache.hadoop.ipc.Server: IPC Server
>> listener on 50020: starting
>> 2013-10-07 11:31:29,933 INFO org.apache.hadoop.ipc.Server: IPC Server
>> handler 0 on 50020: starting
>> 2013-10-07 11:31:29,933 INFO org.apache.hadoop.ipc.Server: IPC Server
>> handler 1 on 50020: starting
>> 2013-10-07 11:31:29,933 INFO org.apache.hadoop.ipc.Server: IPC Server
>> handler 2 on 50020: starting
>> 2013-10-07 11:31:29,934 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: dnRegistration =
>> DatanodeRegistration(tegra-ubuntu:50010, storageID=, infoPort=50075,
>> ipcPort=50020)
>> 2013-10-07 11:31:29,971 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: New storage id
>> DS-1027334635-127.0.1.1-50010-1381170689938 is assigned to data-node
>> 10.227.56.195:50010
>> 2013-10-07 11:31:29,973 INFO