Re: making file system block size bigger to improve hdfs performance ?
Steve Loughran 2011-10-10, 10:48
On 09/10/11 07:01, M. C. Srivas wrote:
> If you insist on HDFS, try using XFS underneath, it does a much better job
> than ext3 or ext4 for Hadoop in terms of how data is laid out on disk. But
> its memory footprint is at least twice that of ext3, so it will gobble up
> a lot more memory on your box.
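For anyone wanting to try this, a minimal sketch of formatting and mounting an XFS data disk for a Hadoop datanode might look like the following. The device name /dev/sdb1 and mount point /data/1 are placeholders; adjust for your hardware, and note that noatime/nodiratime are general tuning choices, not something mandated by Hadoop itself:

```shell
# WARNING: mkfs destroys all data on the target device.
# Format the data disk with XFS (device name is a placeholder).
mkfs.xfs -f -L hdfs-data /dev/sdb1

# Mount without access-time updates to cut needless metadata writes.
mkdir -p /data/1
mount -o noatime,nodiratime /dev/sdb1 /data/1

# Persist the mount across reboots.
echo 'LABEL=hdfs-data /data/1 xfs noatime,nodiratime 0 0' >> /etc/fstab
```

You would then point dfs.data.dir (dfs.datanode.data.dir on newer releases) at a directory under /data/1.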
How stable have you found XFS? I know people have worked a lot on ext4,
and I am using it locally, even if something (VirtualBox) tells me off
for doing so. I know the Lustre people are using it underneath their DFS,
and with wide use it does tend to get debugged by others before you use it.