Subject: full disk woes (HDFS user mailing list)


Hey HDFS gurus -

I searched around the list archives and jira, but didn't see an existing
discussion about this.

I'm having issues where HDFS overall has plenty of free space, but certain
machines -- and certain disks -- become full. For example, below is the disk
usage for an average-looking node in this cluster, meaning the balancer
won't want to move any data off this machine.
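(For context: by "the balancer" I mean the stock HDFS rebalancer, which only moves blocks between DataNodes whose utilization deviates from the cluster average by more than a threshold. A typical invocation looks something like:)

```shell
# Move blocks between DataNodes whose utilization differs from the
# cluster average by more than 10 percentage points (the default).
# A node sitting near the average is left untouched, even if some of
# its individual disks are nearly full.
hadoop balancer -threshold 10
```

Since this node sits right at the cluster average, the balancer leaves it alone even though several of its disks are at 95%.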

Originally, I wanted to alert when HDFS as a whole was getting full, but
that doesn't work in practice because certain machines fill up first. Per-
machine stats don't work either, because individual disks fill up while the
machine as a whole still looks fine. I really don't want to care about
individual disks in HDFS, but it seems they can cause actual problems.
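Since neither cluster-wide nor per-machine aggregates catch this, per-disk checks seem to be the only reliable signal. A minimal sketch of what I mean (the function and the 90% threshold are illustrative, not an existing tool) that parses `df -P`-style output and flags volumes over a threshold:

```python
def disks_over_threshold(df_text, threshold=90):
    """Parse `df -P`-style output and return (mountpoint, use%) pairs
    for filesystems at or above the given use% threshold."""
    alerts = []
    for line in df_text.strip().splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) < 6 or not fields[4].endswith('%'):
            continue  # skip rows that don't parse cleanly
        pct = int(fields[4].rstrip('%'))
        if pct >= threshold:
            alerts.append((fields[5], pct))
    return alerts
```

Run against the output below, this would flag the three volumes sitting at 95% while the node-level average looks healthy.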

Does anyone else run into machines with overfull disks? Any tips on how to
avoid getting into this situation?

Configured capacity: 7.72 TB
Used: 6.43 TB

Filesystem            Size  Used Avail Use% Mounted on
/dev/cciss/c1d0p1      65G   15G   46G  25% /
tmpfs                  31G     0   31G   0% /dev/shm
/dev/cciss/c0d0       275G  217G   45G  83% /data/disk000
/dev/cciss/c0d1       275G  219G   43G  84% /data/disk001
/dev/cciss/c0d2       275G  216G   46G  83% /data/disk002
/dev/cciss/c0d3       275G  220G   42G  85% /data/disk003
/dev/cciss/c0d4       275G  248G   14G  95% /data/disk004
/dev/cciss/c0d5       275G  219G   43G  84% /data/disk005
/dev/cciss/c0d6       275G  219G   43G  84% /data/disk006
/dev/cciss/c0d7       275G  213G   49G  82% /data/disk007
/dev/cciss/c0d8       275G  220G   42G  85% /data/disk008
/dev/cciss/c0d9       275G  208G   54G  80% /data/disk009
/dev/cciss/c0d10      275G  216G   46G  83% /data/disk010
/dev/cciss/c0d11      275G  218G   44G  84% /data/disk011
/dev/cciss/c0d12      275G  223G   39G  86% /data/disk012
/dev/cciss/c0d13      275G  221G   41G  85% /data/disk013
/dev/cciss/c0d14      275G  248G   14G  95% /data/disk014
/dev/cciss/c0d15      275G  219G   43G  84% /data/disk015
/dev/cciss/c0d16      275G  216G   46G  83% /data/disk016
/dev/cciss/c0d17      275G  216G   46G  83% /data/disk017
/dev/cciss/c0d18      275G  219G   43G  84% /data/disk018
/dev/cciss/c0d19      275G  220G   42G  84% /data/disk019
/dev/cciss/c0d20      275G  213G   49G  82% /data/disk020
/dev/cciss/c0d21      275G  215G   47G  83% /data/disk021
/dev/cciss/c0d22      275G  247G   15G  95% /data/disk022
/dev/cciss/c0d23      275G  218G   44G  84% /data/disk023
/dev/cciss/c0d24      275G  222G   40G  86% /data/disk024
/dev/cciss/c1d1p1     275G  184G   78G  71% /data/disk025
/dev/cciss/c1d2p1     275G  176G   86G  68% /data/disk026
/dev/cciss/c1d3p1     275G  178G   84G  68% /data/disk027
/dev/cciss/c1d4p1     275G  177G   85G  68% /data/disk028
/dev/cciss/c1d5p1     275G  179G   83G  69% /data/disk029
/dev/cciss/c1d6p1     275G  181G   81G  70% /data/disk030
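One guard I'm aware of (the value below is illustrative, not from our config): dfs.datanode.du.reserved in hdfs-site.xml reserves a fixed number of bytes per volume for non-DFS use, so the DataNode stops writing blocks before a disk is completely full. Something like:

```xml
<!-- hdfs-site.xml: reserve ~10 GB per volume for non-DFS use.
     The exact value is an example, not a recommendation. -->
<property>
  <name>dfs.datanode.du.reserved</name>
  <value>10737418240</value>
</property>
```

This doesn't prevent the skew itself, but it keeps a hot disk from hitting 100% and breaking things outright.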

--travis
Replies:
  Allen Wittenauer 2010-07-21, 21:01
  Alex Loddengaard 2010-07-21, 21:09
  Allen Wittenauer 2010-07-21, 21:15
  Travis Crawford 2010-07-21, 21:47
  Allen Wittenauer 2010-07-21, 22:06
  Travis Crawford 2010-07-21, 23:47