dfs.datanode.du.reserved can run into problems with disk capacity reserved by tune2fs
Alexander Fahlke 2013-02-12, 14:33
I'm using hadoop-0.20.2 on Debian Squeeze and ran into the same confusion
as many others with the parameter for dfs.datanode.du.reserved.
One day some data nodes ran into out-of-disk errors although there was space
left on the disks.
The following values are rounded to make the problem clearer:
- the disk for the DFS data has 1000GB and only one partition (ext3)
- you plan to set dfs.datanode.du.reserved to 20GB
- the reserved-blocks-percentage set by tune2fs is 5% (the default)
That gives all users except root 5% less capacity to use (50GB on this disk).
Although the system reports the total of 1000GB as usable for all users via df,
the hadoop daemons are not running as root, so that reserved 5% is off limits
to them.
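To spell out the numbers, here is a small sketch in Java; the constants are
just the rounded values from above:

    public class ReservedSpaceGap {
        static final long GB = 1024L * 1024 * 1024;

        public static void main(String[] args) {
            long total        = 1000 * GB; // partition size, as df reports it
            long rootReserved =   50 * GB; // 5% reserved-blocks-percentage (tune2fs)
            long duReserved   =   20 * GB; // dfs.datanode.du.reserved

            long hadoopThinks = total - duReserved;   // 980GB from hadoop's view
            long reallyUsable = total - rootReserved; // 950GB for non-root users

            // hadoop believes it may write 30GB more than the filesystem
            // will actually accept from a non-root user.
            System.out.println((hadoopThinks - reallyUsable) / GB + "GB gap");
        }
    }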
If I read the code right, hadoop gets the free capacity via df.
Starting in /src/hdfs/org/apache/hadoop/hdfs/server/datanode/FSDataset.java
on line 350, the code path leads to /src/core/org/apache/hadoop/fs/DF.java,
whose class comment says:
"Filesystem disk space usage statistics. Uses the unix 'df' program"
When you have 5% reserved by tune2fs (in our case 50GB) and you give
dfs.datanode.du.reserved only 20GB, then you can run into out-of-disk
errors that hadoop can't handle.
In this case you must add the capacity reserved by tune2fs (50GB) to the
planned 20GB. This results in (at least) 70GB
for dfs.datanode.du.reserved in my case.
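In other words, the safe value could be computed like this (a hypothetical
helper, not an existing hadoop API; the percentage has to be read from
tune2fs -l by hand):

    // Hypothetical helper: a dfs.datanode.du.reserved value that also
    // covers the filesystem's own root reserve.
    static long safeDuReserved(long diskSize, double reservedBlocksPct,
                               long plannedReserve) {
        long rootReserve = (long) (diskSize * reservedBlocksPct);
        return plannedReserve + rootReserve; // 20GB + 0.05*1000GB = 70GB here
    }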
1. The documentation should be made clearer on this point to avoid the problem.
2. Hadoop could check the space reserved by tune2fs (or other tools) and
add that value to the dfs.datanode.du.reserved parameter.