HDFS >> mail # user >> set reduced block size for a specific file
Re: set reduced block size for a specific file

On Aug 27, 2011, at 12:42 PM, Ted Dunning wrote:

> There is no way to do this for standard Apache Hadoop.

Sure there is.

You can build a custom conf dir and point the client at it.  You *always* have that option for client-settable options as a workaround for missing features or bugs.

1. Copy $HADOOP_CONF_DIR or $HADOOP_HOME/conf to a new dir
2. Modify the hdfs-site.xml in that copy to set your new block size
3. Run the following:

HADOOP_CONF_DIR=mycustomconf hadoop dfs -put file dir

Convenient?  No.  Doable? Definitely.
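
The three numbered steps above can be sketched as a shell snippet. The directory name `mycustomconf` and the 16 MB value are illustrative assumptions, not from the original post; `dfs.block.size` is the classic property name (newer Hadoop releases spell it `dfs.blocksize`):

```shell
#!/bin/sh
# Sketch of the custom-conf-dir workaround described above.

# 1. Copy the existing conf dir to a private one. (Falls back to just
#    creating the dir so the sketch runs without a Hadoop install.)
mkdir -p mycustomconf
[ -n "${HADOOP_CONF_DIR:-}" ] && cp -r "$HADOOP_CONF_DIR"/. mycustomconf/

# 2. Override the block size in the copy's hdfs-site.xml.
cat > mycustomconf/hdfs-site.xml <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>dfs.block.size</name>
    <value>16777216</value> <!-- 16 MB, illustrative -->
  </property>
</configuration>
EOF

# 3. Point the client at the custom conf dir for this one upload
#    (needs an actual Hadoop install, so it is left commented out here):
# HADOOP_CONF_DIR=mycustomconf hadoop dfs -put file dir
```

Since the override lives only in the copied conf dir, the cluster-wide default block size is untouched; only uploads run with `HADOOP_CONF_DIR=mycustomconf` pick up the smaller value.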