Hadoop, mail # dev - making file system block size bigger to improve hdfs performance ?


Jinsong Hu 2011-10-03, 05:05
Niels Basjes 2011-10-03, 06:13
Re: making file system block size bigger to improve hdfs performance ?
Ted Dunning 2011-10-03, 14:43
The MapR system allocates files with 8K blocks internally, so I doubt that
any improvement you see with a larger block size on HDFS is going to
matter much, and it could seriously confuse your underlying file system.

The performance advantage for MapR has more to do with a better file system
design and much more direct data paths than it has to do with block size on
disk.  Changing the block size on the HDFS partition isn't going to help
that.

On Mon, Oct 3, 2011 at 5:05 AM, Jinsong Hu <[EMAIL PROTECTED]> wrote:

> Hi, there:
>  I just thought of an idea. When we format the disk, the block size is
> usually 1K to 4K. For HDFS, the block size is usually 64M.
> I wonder: if we change the raw file system's block size to something
> significantly bigger, say 1M or 8M, will that improve
> disk IO performance for Hadoop's HDFS?
>  Currently, I noticed that the MapR distribution uses MFS, its own file system.
> That resulted in a 4x performance gain in terms
> of disk IO. I just wonder whether, if we tune the host OS parameters, we can
> achieve better disk IO performance with just the regular
> Apache Hadoop distribution.
>  I understand that making the block size bigger can result in some wasted
> disk space for small files. However, for disks dedicated
> to HDFS, where most of the files are very big, I just wonder if it is a
> good idea. Anybody have any comments?
>
> Jimmy
>
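For reference, the 64M figure above is the HDFS block size, which is tunable from Hadoop itself without touching the underlying disk format. Below is a minimal sketch of setting it through the FileSystem API, assuming a stock Apache Hadoop install; the class name, path, and sizes are illustrative only, not taken from this thread.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockSizeSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // "dfs.block.size" is the 0.20/1.x-era name for the default HDFS
        // block size (later renamed "dfs.blocksize"); 128 MB is illustrative.
        conf.setLong("dfs.block.size", 128L * 1024 * 1024);

        FileSystem fs = FileSystem.get(conf);

        // The block size can also be set per file at create time:
        // create(path, overwrite, bufferSize, replication, blockSize)
        Path out = new Path("/tmp/blocksize-test");  // illustrative path
        FSDataOutputStream stream =
            fs.create(out, true, 4096, (short) 3, 256L * 1024 * 1024);
        stream.writeBytes("hello hdfs\n");
        stream.close();

        FileStatus status = fs.getFileStatus(out);
        System.out.println("block size of " + out + ": " + status.getBlockSize());
    }
}

This only affects how HDFS chunks files across DataNodes; the block size of the local file system that the DataNodes write to is fixed when the partition is formatted, which is the knob the original question asks about.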
M. C. Srivas 2011-10-09, 06:01
Steve Loughran 2011-10-10, 10:48
M. C. Srivas 2011-10-10, 13:51
Brian Bockelman 2011-10-10, 14:10