Re: About Kafka 0.8 producer throughput test
S Ahmed 2013-01-23, 03:11
Neha,

I see, so that is a fairly substantial change. Of course it has the
advantage of guaranteeing a higher degree of durability, but at a
significant cost (a round trip that the producer has to wait for). I know
someone mentioned creating an async producer with a future (a rough
illustration of that idea is sketched below).

Do you have a gut feeling whether performance will be the same as in 0.7
or x% slower? (Or do you have no idea as of yet, since you are still
working on perf?)
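To make the future idea concrete: one reading of it is to hand the blocking 0.8 send to a thread pool so the caller gets a Future back instead of waiting out the ack round trip itself. The sketch below is purely illustrative and assumes the 0.8 producer's Java API (kafka.javaapi.producer.Producer); the executor-based wrapper is not something the thread or the 0.8 client actually provides, and the 0.8 producer's own producer.type=async mode batches sends but does not hand back a future.

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;

// Hypothetical wrapper: hand the blocking send to a small thread pool so the
// caller gets a Future back instead of waiting for the ack round trip itself.
public class FutureSendSketch {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    private final Producer<byte[], byte[]> producer;

    public FutureSendSketch(Producer<byte[], byte[]> producer) {
        this.producer = producer;
    }

    public Future<Void> sendAsync(final String topic, final byte[] payload) {
        return pool.submit(new Callable<Void>() {
            public Void call() {
                // send() blocks until the configured number of acks has come back
                producer.send(new KeyedMessage<byte[], byte[]>(topic, payload));
                return null;
            }
        });
    }

    public void shutdown() {
        pool.shutdown();
    }
}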
On Fri, Jan 18, 2013 at 8:42 PM, Neha Narkhede <[EMAIL PROTECTED]> wrote:

> >> producer.num.acks=0
>
> There is still a difference between the 0.7 and 0.8 Kafka behavior in the
> sense that in 0.7, the producer fired away requests at the broker without
> waiting for an ack. In 0.8, even with num.acks=0, the producer writes are
> going to be synchronous and it won't be able to send the next request until
> the ack for the previous one comes back.
>
> Thanks,
> Neha
>
>
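To make the setting Neha describes above concrete, here is a minimal 0.8 producer-config sketch. The property names request.required.acks and metadata.broker.list are my reading of the 0.8 producer config (the thread calls the knob producer.num.acks / --request-num-acks), and the value comments summarize this thread, including Jun's note further down that -1 waits for all in-sync replicas.

import java.util.Properties;

import kafka.producer.ProducerConfig;

public class AckConfigSketch {
    // Build a 0.8 producer config with the ack level under discussion.
    public static ProducerConfig withAcks(String acks) {
        Properties props = new Properties();
        props.put("metadata.broker.list", "localhost:9092");
        //  "0"  -> do not wait for a broker acknowledgment (closest to 0.7
        //          fire-and-forget, though per this thread the 0.8 producer
        //          still sends requests one at a time on the connection)
        //  "1"  -> wait until the partition leader has written the message
        //  "-1" -> wait until all in-sync replicas have the message
        //          (the max-durability setting used in the numbers below)
        props.put("request.required.acks", acks);
        return new ProducerConfig(props);
    }
}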
> On Fri, Jan 18, 2013 at 12:24 PM, S Ahmed <[EMAIL PROTECTED]> wrote:
>
> > I see, ok, so if you wanted to compare 0.7 with 0.8 on the same footing,
> > then you would set it to 0, right? (since 0.7 is fire-and-forget)
> >
> > producer.num.acks=0
> >
> >
> > On Thu, Jan 17, 2013 at 11:45 PM, Jun Rao <[EMAIL PROTECTED]> wrote:
> >
> > > It means waiting for the data to reach all replicas (that are in sync).
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> > > On Thu, Jan 17, 2013 at 6:42 PM, S Ahmed <[EMAIL PROTECTED]> wrote:
> > >
> > > > producer.num.acks=-1 means what, sorry? Is it that all replicas are
> > > > written to?
> > > >
> > > >
> > > > On Thu, Jan 17, 2013 at 12:09 PM, Neha Narkhede <[EMAIL PROTECTED]> wrote:
> > > >
> > > > > Looks like Jun's email didn't format the output properly. I've
> > > > > published some preliminary producer throughput performance numbers on
> > > > > our performance wiki -
> > > > > https://cwiki.apache.org/confluence/display/KAFKA/Performance+testing#Performancetesting-Producerthroughput
> > > > >
> > > > > These tests measure producer throughput in the worst-case scenario
> > > > > (producer.num.acks=-1), i.e. the max durability setting. The baseline
> > > > > with 0.7 would be to compare producer throughput with num.acks=0. We
> > > > > are working on those tests now.
> > > > >
> > > > > Thanks,
> > > > > Neha
> > > > >
> > > > >
> > > > > On Thu, Jan 17, 2013 at 8:43 AM, Jun Rao <[EMAIL PROTECTED]> wrote:
> > > > >
> > > > > > We also did some perf tests on 0.8 using the following command. All
> > > > > > configs on the broker are the defaults.
> > > > > >
> > > > > > bin/kafka-run-class.sh kafka.perf.ProducerPerformance \
> > > > > >   --broker-list localhost:9092 --initial-message-id 0 \
> > > > > >   --messages 2000000 --topics topic_001 --request-num-acks -1 \
> > > > > >   --batch-size 100 --threads 1 --message-size 1024 \
> > > > > >   --compression-codec 0
> > > > > >
> > > > > > The following is our preliminary result. Could you try this in your
> > > > > > environment? For a replication factor larger than 1, we will try
> > > > > > ack=1 and report the numbers later. It should provide better
> > > > > > throughput. Thanks,
> > > > > >
> > > > > > No. of Brokers = 1 / Replication Factor = 1 (Partition = 1)
> > > > > >
> > > > > > Producer threads | comp | msg size | Acks | batch | Thru Put (MB/s)
> > > > > >  1               |  0   |  1024    |  -1  |   1   |  5.49
> > > > > >  2               |  0   |  1024    |  -1  |   1   |  9.38
> > > > > >  5               |  0   |  1024    |  -1  |   1   | 16.61
> > > > > > 10               |  0   |  1024    |  -1  |   1   | 19.54
> > > > > >  1               |  0   |  1024    |  -1  |  50   | 25.72
> > > > > >  2               |  0   |  1024    |  -1  |  50   | 39.25
> > > > > >  5               |  0   |  1024    |  -1  |  50   | 54.17
> > > > > > 10               |  0   |  1024    |  -1  |  50   | 56.71
> > > > > >  1               |  0   |  1024    |  -1  | 100   | 27.97
> > > > > >  2               |  0   |  1024    |  -1  | 100   | 45.05
> > > > > >  5               |  0   |  1024    |  -1  | 100   | 58.01
> > > > > > 10               |  0   |  1024    |  -1  | 100   | 59.82
> > > > > >
> > > > > > No. of Brokers = 2 / Replication Factor = 2 (Partitions = 1)
> > > > > >
> > > > > > Producer threads | comp | msg size | Acks | batch | Thru Put (MB/s)
> > > > > >  1               |  0   |  1024    |  -1  |   1   |  0.58
> > > > > >  2               |  0   |  1024    |  -1  |   1   |  1.17
> > > > > >  5               |  0   |  1024    |  -1  |   1   |  1.60
> > > > > > 10               |  0   |  1024    |  -1  |   1   |  3.15
> > > > > >  1               |  0   |  1024    |  -1  |  50   |  7.48
> > > > > >  2               |  0   |  1024    |  -1  |  50   | 13.89
> > > > > >  5               |  0   |  1024    |  -1  |  50   | 18.11
> > > > > > [remainder of the message is truncated in the archive]
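For readers who want to reproduce a roughly comparable single-threaded measurement programmatically rather than through kafka.perf.ProducerPerformance, here is a minimal sketch assuming the 0.8 producer's Java API (kafka.javaapi.producer.Producer). The property names, serializer choice, and timing logic are illustrative guesses, not the tool's actual implementation; it sends one 1 KB message at a time, so it corresponds most closely to the batch=1 rows above.

import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class ThroughputSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("metadata.broker.list", "localhost:9092");
        props.put("request.required.acks", "-1");  // max durability, as in the test above
        props.put("serializer.class", "kafka.serializer.DefaultEncoder");  // raw byte[] payloads

        Producer<byte[], byte[]> producer =
                new Producer<byte[], byte[]>(new ProducerConfig(props));

        int numMessages = 2000000;                 // --messages 2000000
        byte[] payload = new byte[1024];           // --message-size 1024

        long start = System.currentTimeMillis();
        for (int i = 0; i < numMessages; i++) {
            producer.send(new KeyedMessage<byte[], byte[]>("topic_001", payload));
        }
        long elapsedMs = System.currentTimeMillis() - start;
        producer.close();

        double mb = (double) numMessages * payload.length / (1024 * 1024);
        System.out.printf("Sent %.0f MB in %d ms: %.2f MB/s%n",
                mb, elapsedMs, mb / (elapsedMs / 1000.0));
    }
}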