Re: collector not closing files larger than 64MB
Howdy.

There isn't such a check; if there's documentation somewhere that suggests there is, let us know where and we can fix it. In general, the goal is to have .done files be as large as possible while remaining compatible with SLAs; there was never any intent to have them be only one block long.

--Ari

On Tue, Jul 12, 2011 at 3:57 PM, Himanshu Gahlot
<[EMAIL PROTECTED]> wrote:
> Hi,
>
> We have a system where the .chukwa files produced are larger than
> 64MB, but I do not see them being closed by the collector and
> converted to .done files. They are closed only at the scheduled
> rotation time and hence end up larger than the block size (64MB). I
> do not see a check in the SeqFileWriter class that closes files
> larger than a block. Where is this check made in the code?
>
> Thanks,
> Himanshu
>

--
Ari Rabkin [EMAIL PROTECTED]
UC Berkeley Computer Science Department
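
For illustration, here is a minimal sketch of the check being asked about, written against Hadoop's SequenceFile API rather than Chukwa's actual SeqFileWriter (the class name, file paths, and the 64MB constant below are assumptions, not Chukwa code): after each append the writer's current length is compared to the block size, and the file is rolled and renamed to .done once it passes that threshold. As Ari notes, Chukwa itself does not do this; rotation happens on the collector's scheduled timer, so sink files can legitimately grow past one block.

    // Hypothetical sketch only -- NOT Chukwa's SeqFileWriter. It shows what a
    // size-based rotation check would look like if one existed.
    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    public class SizeBasedRoller {
      private static final long BLOCK_SIZE = 64L * 1024 * 1024; // assumed 64MB HDFS block

      private final FileSystem fs;
      private final Configuration conf;
      private SequenceFile.Writer writer;
      private Path currentFile;

      public SizeBasedRoller(FileSystem fs, Configuration conf) throws IOException {
        this.fs = fs;
        this.conf = conf;
        rotate();
      }

      public synchronized void append(Text key, Text value) throws IOException {
        writer.append(key, value);
        // The check being asked about: roll the file once it reaches one block.
        // Chukwa instead rotates on a timer, so files may grow past this point.
        if (writer.getLength() >= BLOCK_SIZE) {
          rotate();
        }
      }

      private void rotate() throws IOException {
        if (writer != null) {
          writer.close();
          // Mark the finished sink file as ready for the next processing step.
          fs.rename(currentFile, new Path(currentFile.toString()
              .replace(".chukwa", ".done")));
        }
        // Illustrative sink path and naming scheme, not Chukwa's actual layout.
        currentFile = new Path("/chukwa/logs/" + System.currentTimeMillis() + ".chukwa");
        writer = SequenceFile.createWriter(fs, conf, currentFile, Text.class, Text.class);
      }
    }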