Re: HDFS sink Bucketwriter working
Jagadish Bihani 2012-09-27, 06:58
Thanks for the reply, Mike.
-- I have been following the user guide.
-- Actually I didn't get the expected rolling behaviour: when I set the
rolling size to 10 MB and the other rolling parameters to 0, I would
expect all incoming events to go into a single file until it reaches
10 MB, and then the next events to go into the next file, and so on.
Instead it opens multiple files simultaneously, which I thought was
related to parameters like txnEventMax.
-- Hence I started going through the source code and came across the
few questions mentioned in the mail below. I had posted the exceptions
I got in other threads, but I think that if I get to know the inner
working of the BucketWriter class, that will help me solve my troubles.
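For reference, a size-only rolling configuration along the lines I am trying would look something like this (a sketch only; the agent, sink, channel, and path names are placeholders, and hdfs.rollSize is in bytes):

```properties
# Sketch: roll on size only (10 MB), other roll triggers disabled
agent.sinks.hdfsSink.type = hdfs
agent.sinks.hdfsSink.channel = memChannel
agent.sinks.hdfsSink.hdfs.path = hdfs://namenode/flume/events
agent.sinks.hdfsSink.hdfs.rollSize = 10485760   # 10 MB, in bytes
agent.sinks.hdfsSink.hdfs.rollInterval = 0      # disable time-based rolling
agent.sinks.hdfsSink.hdfs.rollCount = 0         # disable event-count rolling
```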
On 09/27/2012 12:19 PM, Mike Percy wrote:
> Refer to the user guide here:
> Note the defaults for rollInterval, rollSize, and rollCount. If you
> want to use rollSize only, then you should set the others to 0.
> Also worth mentioning: set batchSize to something larger if you
> want to maximize your performance. I often go with 1000; depending on
> the application you may want to go lower or higher.
> On Wed, Sep 26, 2012 at 8:23 PM, Jagadish Bihani
> <[EMAIL PROTECTED]> wrote:
> I had a few doubts about the HDFS sink's BucketWriter:
> -- How does the HDFS BucketWriter work? What criteria does it use
> to create another bucket?
> -- Creation of a file in HDFS is a function of how many parameters?
> I thought it was a function of only the rolling parameters
> (interval/size). But apparently
> it is also a function of 'batchSize' and 'txnEventMax'.
> -- My requirement is this: I get data from 10 Avro sinks into
> a single Avro source, and
> I want to dump it to HDFS in fixed-size (say 64 MB) files. What
> should I do?
> Presently, if I set the rolling size to 64 MB, BucketWriter creates
> many files (I suspect the number
> = txnEventMax) and after a while it throws exceptions like 'too
> many open files'. (I have a limit of
> 75000 open file descriptors.)
> Information about the above will be of great help in tuning Flume
> properly for these requirements.
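Putting the defaults advice above together, a sketch of the 64 MB case might be the following (names and path are assumptions; one thing worth checking is whether the hdfs.path contains time-based escape sequences such as %Y/%m/%d, since each distinct escaped path becomes its own bucket with its own open file):

```properties
# Sketch: 64 MB size-only rolling (agent/channel/path names are assumptions)
agent.sinks.hdfsSink.hdfs.path = hdfs://namenode/flume/events  # no time escapes -> single bucket
agent.sinks.hdfsSink.hdfs.rollSize = 67108864   # 64 MB, in bytes
agent.sinks.hdfsSink.hdfs.rollInterval = 0
agent.sinks.hdfsSink.hdfs.rollCount = 0
agent.sinks.hdfsSink.hdfs.batchSize = 1000      # larger batches per the advice above
agent.sinks.hdfsSink.hdfs.maxOpenFiles = 500    # cap concurrently open bucket files
```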