Flume, mail # user - Can Flume handle +100k events per second?

Re: Can Flume handle +100k events per second?
Juhani Connolly 2013-10-22, 06:49
Hi Bojan,

This is pretty old, but Mike did some performance testing about a
year and a half ago:

https://cwiki.apache.org/confluence/display/FLUME/Flume+NG+Syslog+Performance+Test+2012-04-30

He was getting a max of 70k events/sec on a single machine.

The thing is, this is the result of a huge number of variables:
- Parallelization of flows allows better parallel processing.
- Use of the memory channel as opposed to a slower persistent channel
(such as the file channel).
- Possibly the source. I have no idea how you wrote your app.
- Batching of events is important (see the sketch after this list).
Also, are all events written to one file, or are they split over many?
Every file is processed separately.
- Network congestion and your HDFS setup.
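
On the batching point, here is a minimal sketch of a batching client
using the Flume SDK (host, port, and batch size are made up for
illustration; appendBatch() ships the whole list in one RPC round trip):

import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

import org.apache.flume.Event;
import org.apache.flume.api.RpcClient;
import org.apache.flume.api.RpcClientFactory;
import org.apache.flume.event.EventBuilder;

public class BatchingClient {
  public static void main(String[] args) throws Exception {
    // Hypothetical host/port: point these at your first-tier agent's
    // Avro source. The third argument is the client's batch size.
    RpcClient client =
        RpcClientFactory.getDefaultInstance("agent-host", 41414, 1000);
    try {
      List<Event> batch = new ArrayList<Event>(1000);
      for (int i = 0; i < 100000; i++) {
        batch.add(EventBuilder.withBody("dummy log line " + i,
            StandardCharsets.UTF_8));
        if (batch.size() == 1000) {
          client.appendBatch(batch); // one RPC per 1000 events
          batch.clear();
        }
      }
      if (!batch.isEmpty()) {
        client.appendBatch(batch); // flush the remainder
      }
    } finally {
      client.close();
    }
  }
}

Calling append() once per event instead tends to cap throughput orders
of magnitude lower, simply from per-call round-trip latency.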

Reaching 100k events per second is definitely possible. The resources
you need for it will vary significantly depending on your setup.
The more HA-type features you use, the slower delivery is likely to
become. On the flip side, allowing fairly lax conditions that carry a
small risk of data loss (on a crash, for example, memory channel
contents are gone) will allow close to 100k even on a single machine.
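
As a rough illustration of that trade-off, an agent config along these
lines (agent/component names, paths, and capacities are hypothetical
and would need tuning for your hardware) favors throughput over
durability:

# Hypothetical single agent: Avro source -> memory channel -> HDFS sink
a1.sources = r1
a1.channels = c1
a1.sinks = k1

a1.sources.r1.type = avro
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 41414
a1.sources.r1.channels = c1

# Memory channel: fast, but events in flight are lost if the agent dies
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000000
a1.channels.c1.transactionCapacity = 10000

a1.sinks.k1.type = hdfs
a1.sinks.k1.channel = c1
a1.sinks.k1.hdfs.path = hdfs://namenode/flume/events/%Y-%m-%d
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.batchSize = 10000
# Roll on time only, like your test; rollSize/rollCount of 0 disable
# size- and count-based rolling
a1.sinks.k1.hdfs.rollInterval = 60
a1.sinks.k1.hdfs.rollSize = 0
a1.sinks.k1.hdfs.rollCount = 0
a1.sinks.k1.hdfs.useLocalTimeStamp = true

Swapping c1 for a file channel buys durability at a real throughput
cost. Also, for scale: ~12MB files per 60-second roll is roughly
200KB/s per agent; if your dummy events are a few hundred bytes each,
that is on the order of 1k events/sec, which points at the client side
rather than HDFS as the bottleneck.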

On 10/14/2013 09:00 PM, Bojan Kostić wrote:
> Hi, this is my first post here, but I have been playing with Flume
> for some time now.
> My question is: how well does Flume scale?
> Can Flume ingest +100k events per second? Has anyone tried something
> like this?
>
> I created a simple test and the results are really slow.
> I wrote a simple app with an RPC client with failover, using the
> Flume SDK, which reads a dummy log file.
> In the end I have two Flume agents writing to HDFS.
> rollInterval = 60
> And in HDFS I get files of ~12MB.
>
> Do I need to use some complex topology with 3 tiers?
> How many Flume agents should write to HDFS?
>
> Best regards.