Re: making file system block size bigger to improve hdfs performance?
The MapR system allocates files with 8K blocks internally, so I doubt that
any improvement you see from a larger block size on HDFS is going to
matter much, and it could seriously confuse your underlying file system.
The performance advantage for MapR has more to do with a better file system
design and much more direct data paths than with the block size on
disk. Changing the block size on the HDFS partition isn't going to help.
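To make the distinction concrete, here is a quick sketch (the "/" path just
stands in for whatever mount hosts your DataNode directories): the local
filesystem block size is a physical allocation unit in the kilobyte range,
while dfs.blocksize is a logical chunking parameter set in hdfs-site.xml.
They are not the same knob.

```shell
# Block size of the underlying filesystem (typically 4096 on ext4; ext4
# can't use a block size larger than the page size, so 1M/8M blocks aren't
# even an option there). Replace "/" with your actual data mount.
fs_bs=$(stat -f -c '%S' /)
echo "local filesystem block size: ${fs_bs} bytes"

# The HDFS block size (dfs.blocksize, e.g. 64 MB) controls how HDFS splits
# files across DataNodes; it does not change how the local filesystem
# allocates blocks on disk.
hdfs_bs=$((64 * 1024 * 1024))
echo "hdfs block size: ${hdfs_bs} bytes"
```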
On Mon, Oct 3, 2011 at 5:05 AM, Jinsong Hu <[EMAIL PROTECTED]> wrote:
> Hi, There:
> I just thought of an idea. When we format a disk, the block size is
> usually 1K to 4K. For hdfs, the block size is usually 64M.
> I wonder if we change the raw file system's block size to something
> significantly bigger, say 1M or 8M, will that improve
> disk IO performance for hadoop's hdfs?
> Currently, I noticed that the mapr distribution uses mfs, its own file system.
> That resulted in a 4x performance gain in terms
> of disk IO. I just wonder if, by tuning the host OS parameters, we can
> achieve better disk IO performance with just the regular
> apache hadoop distribution.
> I understand that making the block size bigger can waste some disk
> space on small files. However, for disks dedicated
> to hdfs, where most of the files are very big, I just wonder if it is a
> good idea. Anybody have any comments?