Re: How to estimate hadoop.tmp.dir disk space
Do you have drives mounted in a JBOD setup, where only some of the drives are
allocated to HDFS?

Check df -h on all the nodes: the mount that holds the logs, or any other data
that lives outside DFS, may be full.
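A minimal sketch of that check for one node (run it on every node in the cluster); the 90% threshold is an assumption, tune it to taste:

```shell
# List any mounted filesystem at or above 90% usage.
# df -P guarantees one line per filesystem; $5 is the usage column, $6 the mount point.
df -hP | awk 'NR > 1 && int($5) >= 90 {print $6, "is", $5, "full: check for logs or hadoop.tmp.dir here"}'
```

A mount that shows up here but is not an HDFS data directory is a likely home for a full hadoop.tmp.dir.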
On Wed, Dec 26, 2012 at 1:25 PM, centerqi hu <[EMAIL PROTECTED]> wrote:

> Hi all,
> I have run into trouble:
>
> Message: org.apache.hadoop.ipc.RemoteException: java.io.IOException: org.apache.hadoop.fs.FSError: java.io.IOException: No space left on device
>
>
>  hadoop dfsadmin -report
>
>     Configured Capacity: 44302785945600 (40.29 TB)
>     Present Capacity: 42020351946752 (38.22 TB)
>     DFS Remaining: 8124859072512 (7.39 TB)
>     DFS Used: 33895492874240 (30.83 TB)
>     DFS Used%: 80.66%
>     Under replicated blocks: 1687
>     Blocks with corrupt replicas: 0
>     Missing blocks: 0
>
> However, my HDFS space looks adequate.
>
> After I increased the space available for hadoop.tmp.dir, the error disappeared.
> How should I estimate the disk space needed for hadoop.tmp.dir?
>
> thx
> --
> [EMAIL PROTECTED]
>
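The dfsadmin report quoted above actually hints at where the space went: the gap between Configured Capacity and Present Capacity is space consumed outside DFS (logs, hadoop.tmp.dir, and other local files). A quick sanity check on the byte values from the report:

```shell
# Non-DFS used = Configured Capacity - Present Capacity (byte values from the report above)
awk 'BEGIN { printf "%.2f TB\n", (44302785945600 - 42020351946752) / 1024 ^ 4 }'
# prints "2.08 TB"
```

Roughly 2 TB of raw capacity was already taken by non-DFS data, which is consistent with a full hadoop.tmp.dir mount even while DFS itself has room left.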

--
Nitin Pawar
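For reference, hadoop.tmp.dir is configured in core-site.xml, and pointing it at a partition with plenty of room is the usual remedy the original poster describes. A minimal sketch; /data/hadoop/tmp is a hypothetical path on a large local disk, not taken from the thread:

```xml
<!-- core-site.xml: base directory for Hadoop's local temporary files.
     /data/hadoop/tmp is an example path on a roomy partition. -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/data/hadoop/tmp</value>
</property>
```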