HDFS >> mail # user >> set reduced block size for a specific file


Re: set reduced block size for a specific file
Hey Ben,

I just filed this JIRA to add this feature:
https://issues.apache.org/jira/browse/HDFS-2293

If anyone would like to implement this, I would be happy to review it.

Thanks a lot,
Aaron

--
Aaron T. Myers
Software Engineer, Cloudera

On Sat, Aug 27, 2011 at 4:08 PM, Ben Clay <[EMAIL PROTECTED]> wrote:

> I didn't even think of overriding the config dir.  Thanks for the tip!
>
> -Ben
>
>
> -----Original Message-----
> From: Allen Wittenauer [mailto:[EMAIL PROTECTED]]
> Sent: Saturday, August 27, 2011 6:42 PM
> To: [EMAIL PROTECTED]
> Cc: [EMAIL PROTECTED]
> Subject: Re: set reduced block size for a specific file
>
>
> On Aug 27, 2011, at 12:42 PM, Ted Dunning wrote:
>
> > There is no way to do this for standard Apache Hadoop.
>
>        Sure there is.
>
>        You can build a custom conf dir and point the client at it.  You
> *always* have that option for client-settable options, as a workaround for
> missing features/bugs.
>
>        1. Copy $HADOOP_CONF_DIR or $HADOOP_HOME/conf to a dir
>        2. modify the hdfs-site.xml to have your new block size
>        3. Run the following:
>
> HADOOP_CONF_DIR=mycustomconf hadoop dfs -put file dir
>
>        Convenient?  No.  Doable? Definitely.
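Allen's three steps can be sketched as a small script. This is a minimal illustration, not the thread's exact setup: the `mycustomconf` directory name and the 16 MB value are assumptions, and `dfs.block.size` is the client-side property name used in Hadoop versions of that era (2011). The final `hadoop dfs -put` needs a running cluster, so it is left commented out.

```shell
# Step 1: create a custom conf dir. In a real setup you would copy
# $HADOOP_CONF_DIR or $HADOOP_HOME/conf into it, e.g.:
#   cp "$HADOOP_CONF_DIR"/* mycustomconf/
set -e
mkdir -p mycustomconf

# Step 2: put an hdfs-site.xml with the reduced block size in it.
# 16777216 bytes = 16 MB, an illustrative value.
cat > mycustomconf/hdfs-site.xml <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>dfs.block.size</name>
    <value>16777216</value>
  </property>
</configuration>
EOF

# Step 3: upload the file with the overridden client config
# (requires a live HDFS cluster, so commented out here):
# HADOOP_CONF_DIR=mycustomconf hadoop dfs -put file dir

cat mycustomconf/hdfs-site.xml
```

Because the block size is a client-settable option, only the uploading client needs this conf dir; nothing on the cluster changes, and other files keep the default block size.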