Re: making file system block size bigger to improve hdfs performance ?
On 09/10/11 07:01, M. C. Srivas wrote:

> If you insist on HDFS, try using XFS underneath; it does a much better job
> than ext3 or ext4 for Hadoop in terms of how data is laid out on disk. But
> its memory footprint is at least twice that of ext3, so it will gobble up
> a lot more memory on your box.

How stable have you found XFS? I know people have put a lot of work into
ext4 and I am using it locally, even if something (VirtualBox) tells me off
for doing so. I know the Lustre people are using it underneath their DFS,
and with wide use it does tend to get debugged by others before you risk
your own data on it.
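
Since the thread subject is about raising block sizes for HDFS performance, it is worth separating the two knobs: the local file system block size on the DataNode disks (which the XFS/ext3/ext4 comparison above is about) and the HDFS block size itself. The latter can be raised cluster-wide or per file through the ordinary Hadoop Java API. A minimal sketch, assuming the pre-2.x property name dfs.block.size and illustrative 128 MB / 256 MB values:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BigBlockWrite {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Cluster-wide default block size for newly created files, in bytes
        // (128 MB here; the stock default in this era was 64 MB).
        conf.setLong("dfs.block.size", 128L * 1024 * 1024);

        FileSystem fs = FileSystem.get(conf);

        // The block size can also be overridden per file at create time:
        // create(path, overwrite, bufferSize, replication, blockSize)
        Path out = new Path("/tmp/bigblock.dat");
        FSDataOutputStream os =
            fs.create(out, true, 4096, (short) 3, 256L * 1024 * 1024);
        os.write("hello".getBytes("UTF-8"));
        os.close();
        fs.close();
    }
}

Fewer, larger HDFS blocks mean less NameNode metadata and longer sequential scans per map task, which is typically where any gain shows up; it does not change how the DataNode's local file system lays the data out on disk, which is the point being made about XFS above.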