Flume user mailing list: Flume-ng 1.3.x HDFSSink - override Hadoop default size for only one of my sinks


Re: Flume-ng 1.3.x HDFSSink - override Hadoop default size for only one of my sinks
Hi,

The HDFS sink writes to HDFS, and the block size for newly written files is defined by your cluster. If you use the HDFS sink, you should have an hdfs-site.xml that defines the block size (dfs.blocksize). So no, there is no way to override it for a single sink.
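For reference, the cluster-wide setting lives in hdfs-site.xml rather than in the Flume sink configuration. A minimal sketch of the relevant property (the 128 MB value here is only an example, not a recommendation):

```xml
<configuration>
  <!-- Block size for newly written HDFS files, in bytes (128 MB here). -->
  <property>
    <name>dfs.blocksize</name>
    <value>134217728</value>
  </property>
</configuration>
```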

- Alex

On May 18, 2013, at 1:48 AM, Gary Malouf <[EMAIL PROTECTED]> wrote:

> If it is not clear, I meant to type default block size.
>
>
> On Fri, May 17, 2013 at 7:46 PM, Gary Malouf <[EMAIL PROTECTED]> wrote:
> Is there a way I can set the block size for files originating from a specific sink?  My use case is that I have a number of different protobuf messages that each get written to their own directories in HDFS.
>
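For context, the per-message-type layout described above is typically done with one HDFS sink per message type, each pointed at its own directory. A hedged sketch in Flume's properties format (agent, sink, channel, and path names are hypothetical; the escape sequences in hdfs.path are standard Flume HDFS sink syntax):

```properties
# Two HDFS sinks on one agent, each writing its own message type
# to its own directory. Names below are illustrative only.
agent.sinks = clickSink orderSink
agent.sinks.clickSink.type = hdfs
agent.sinks.clickSink.channel = clickChannel
agent.sinks.clickSink.hdfs.path = hdfs://namenode/events/clicks/%Y/%m/%d
agent.sinks.orderSink.type = hdfs
agent.sinks.orderSink.channel = orderChannel
agent.sinks.orderSink.hdfs.path = hdfs://namenode/events/orders/%Y/%m/%d
# Note: the block size of the written files still comes from
# dfs.blocksize in the cluster's hdfs-site.xml; there is no
# per-sink block-size key here.
```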

--
Alexander Alten-Lorenz
http://mapredit.blogspot.com
German Hadoop LinkedIn Group: http://goo.gl/N8pCF