Kafka >> mail # user >> seeing poor consumer performance in 0.7.2


Re: seeing poor consumer performance in 0.7.2
Some of the reasons a consumer can be slow are:
1. Small fetch size
2. Expensive message processing
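If expensive message processing (reason 2) is the bottleneck, a common fix is to decouple the fetch loop from the processing work using a bounded queue and a pool of worker threads. A minimal, Kafka-free sketch in Python (the function and `msg.upper()` stand-in are illustrative, not Kafka API):

```python
import queue
import threading

def run_pipeline(messages, num_workers=4):
    """Decouple fetching from expensive processing with a worker pool."""
    q = queue.Queue(maxsize=1000)   # bounded, so a slow stage applies backpressure
    processed = []
    lock = threading.Lock()

    def worker():
        while True:
            msg = q.get()
            if msg is None:          # sentinel: no more messages
                q.task_done()
                return
            result = msg.upper()     # stand-in for expensive per-message work
            with lock:
                processed.append(result)
            q.task_done()

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()

    for msg in messages:             # the "consumer" thread only enqueues
        q.put(msg)
    for _ in threads:                # one sentinel per worker
        q.put(None)
    for t in threads:
        t.join()
    return processed
```

With this shape, the fetch thread never blocks on processing, and the bounded queue keeps memory in check if the workers fall behind.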

Are you processing the received messages in the consumer? Have you
tried running the console consumer for this topic to see how it
performs?

Thanks,
Neha

On Sun, Apr 21, 2013 at 1:59 AM, Andrew Neilson <[EMAIL PROTECTED]> wrote:
> I am currently running a deployment with 3 brokers, 3 ZK, 3 producers, 2
> consumers, and 15 topics. I should first point out that this is my first
> project using Kafka ;). The issue I'm seeing is that the consumers are only
> processing about 15 messages per second from what should be the largest
> topic they are consuming (we're sending 200-400 ~300-byte messages per second
> to this topic). I should note that I'm using a high level ZK consumer and
> ZK 3.4.3.
>
> I have a strong feeling I have not configured things properly so I could
> definitely use some guidance. Here is my broker configuration:
>
> brokerid=1
> port=9092
> socket.send.buffer=1048576
> socket.receive.buffer=1048576
> max.socket.request.bytes=104857600
> log.dir=/home/kafka/data
> num.partitions=1
> log.flush.interval=10000
> log.default.flush.interval.ms=1000
> log.default.flush.scheduler.interval.ms=1000
> log.retention.hours=168
> log.file.size=536870912
> enable.zookeeper=true
> zk.connect=XXX
> zk.connectiontimeout.ms=1000000
>
> Here is my producer config:
>
> zk.connect=XXX
> producer.type=async
> compression.codec=0
>
> Here is my consumer config:
>
> zk.connect=XXX
> zk.connectiontimeout.ms=100000
> groupid=XXX
> autooffset.reset=smallest
> socket.buffersize=1048576
> fetch.size=10485760
> queuedchunks.max=10000
>
> Thanks for any assistance you can provide,
>
> Andrew
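A quick back-of-envelope check of the numbers quoted above (all values taken from the original post): at the upper bound of 400 messages/sec of ~300 bytes each, the topic produces roughly 117 KB/s, and the configured fetch.size of 10485760 bytes could hold tens of thousands of such messages per fetch, so the raw fetch size alone is unlikely to be the limit:

```python
msg_rate = 400          # messages/sec (upper bound from the post)
msg_size = 300          # approximate bytes per message
fetch_size = 10485760   # bytes, from the consumer config above

bytes_per_sec = msg_rate * msg_size
msgs_per_fetch = fetch_size // msg_size

print(bytes_per_sec)    # 120000 bytes/sec, i.e. ~117 KB/s
print(msgs_per_fetch)   # 34952 such messages fit in a single fetch
```

That points the investigation toward per-message processing cost or consumer-side threading rather than the fetch settings themselves.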