MapReduce >> mail # user >> Diskspace usage


Quick question about how Hadoop uses disk space.

Let's say I have 8 nodes: 7 of them with a 2 TB disk, and one with a 256 GB disk.

Is Hadoop going to use the 256 GB node until it's full, then continue with
the other nodes only while keeping the 256 GB node live? Or will it take
the 256 GB node down once it is full (as it does for failures) and continue
with the 7 remaining nodes?

To summarize: does Hadoop take drive size into account?
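For concreteness, the first scenario described above can be sketched as a toy model. This is illustrative Python, not HDFS's actual placement code; the node names, a uniform 128 MB block size, and the "most free space wins" policy are all assumptions made for the sketch. The point is only that a full node can stop receiving new blocks while remaining part of the cluster:

```python
# Toy model of a cluster with 7 x 2 TB nodes and 1 x 256 GB node.
# Sizes are tracked in GB for simplicity.
TB = 1024

nodes = {f"node{i}": {"capacity": 2 * TB, "used": 0.0} for i in range(1, 8)}
nodes["node8"] = {"capacity": 256, "used": 0.0}

def writable(node):
    """A node accepts new blocks only while it still has free space."""
    return node["used"] < node["capacity"]

def place_block(nodes, size_gb=0.128):
    """Place one 128 MB block on the writable node with the most free space.

    (Illustrative policy, not HDFS's real replica placement.) A full node
    is simply skipped -- it is never removed from the cluster dict.
    """
    candidates = [n for n in nodes.values() if writable(n)]
    target = max(candidates, key=lambda n: n["capacity"] - n["used"])
    target["used"] += size_gb
```

In this sketch, once node8's 256 GB is used up, `writable` excludes it from placement, but it stays in `nodes` and could still serve reads of the blocks it already holds.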