Kafka >> mail # user >> one producer and 2 consumers


Re: one producer and 2 consumers
Have you looked at #4 in http://kafka.apache.org/faq.html ?

Thanks,

Jun
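[Editor's note: FAQ entry #4 covers consumer groups — consumers in different groups each receive the full stream, while consumers sharing a group split the topic's partitions between them. A minimal sketch of two consumer configurations, one per group; the property names assume the 0.7-era consumer config (`groupid`, `zk.connect`) — later releases rename these (e.g. `group.id`), so check the docs for your version:

```
# consumer-storm.properties -- real-time consumer, its own group
zk.connect=localhost:2181
groupid=storm-consumers

# consumer-hdfs.properties -- hourly HDFS consumer, its own group
zk.connect=localhost:2181
groupid=hdfs-consumers
```

Because the two group ids differ, each consumer independently receives 100% of the topic's messages.]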
On Fri, Apr 26, 2013 at 8:19 AM, Oleg Ruchovets <[EMAIL PROTECTED]> wrote:

> Hi.
>    I have simple kafka producer/consumer application. I have one producer
> and 2 consumers. consumers has the same code , it is just executed it in
> different threads. For some reason information produced by producer
> consumed only by ONE CONSUMER.Second consumer didn't consumed any
> information. May be I have to add additional configuration parameters?
>
> What I need is the following:
> 1) The producer produces one event.
> 2) There are two consumers, and each consumer consumes the event once.
>
> For example:
> 1) Producer1 produces message Message1.
> 2) Consumer1 consumes Message1.
> 3) Consumer2 consumes Message1.
>
> What is the way to get such functionality?
>
> Thanks
> Oleg.
>
>
> On Sun, Apr 21, 2013 at 7:21 PM, Philip O'Toole <[EMAIL PROTECTED]> wrote:
>
> > OK, if you want each consumer to process the same data, then simply
> > point each consumer at your Kafka cluster and have each Consumer
> > consume all data. There is no synchronization required between those
> > two consumers.
> >
> > In other words, what you want to do is fine. Please read the Kafka
> > design doc if you have not done so:
> >
> > http://kafka.apache.org/design.html
> >
> > Philip
> >
> > On Sun, Apr 21, 2013 at 9:16 AM, Oleg Ruchovets <[EMAIL PROTECTED]> wrote:
> > > I am at the POC stage, so I can configure the producer to write to
> > > different partitions.
> > > But how will that help me to process the same data with two consumers?
> > >
> > > The effect I am trying to get:
> > >   I receive the data and store it in Kafka.
> > >
> > > I have 2 consumers:
> > >    1) one for real time, which consumes the data every 10 seconds, for
> > > example.
> > >    2) one that moves data to HDFS, for example every 1 hour.
> > >
> > > But if I use 2 partitions, each consumer processes part of the data
> > > (50%). Is that correct?
> > > I need each of the 2 consumers to receive 100% of the data.
> > >
> > > Please advise.
> > >
> > >
> > > On Sun, Apr 21, 2013 at 12:00 PM, Philip O'Toole <[EMAIL PROTECTED]> wrote:
> > >
> > >> On Sun, Apr 21, 2013 at 8:53 AM, Oleg Ruchovets <[EMAIL PROTECTED]> wrote:
> > >> > Hi Philip.
> > >> >    Does that mean storing the same data twice, each time to a
> > >> > different partition? I tried to save the data only once. Does using
> > >> > two partitions mean storing the data twice?
> > >>
> > >> No, I mean spreading the data across the two partitions, so 50% goes
> > >> in one, and 50% goes in the other. Have your Producer always write to
> > >> partition "-1", which will tell Kafka to select a partition at random
> > >> for each message.
> > >>
> > >> Then one of the Consumers will consume partition 0, the other
> > >> partition 1.
> > >>
> > >> > By the way, I am using Kafka 0.7.2.
> > >> >
> > >> > Thanks
> > >> > Oleg.
> > >> >
> > >> >
> > >> > On Sun, Apr 21, 2013 at 11:30 AM, Philip O'Toole <[EMAIL PROTECTED]> wrote:
> > >> >
> > >> >> Read the design doc on the Kafka site.
> > >> >>
> > >> >> The short answer is to use two partitions for your topic.
> > >> >>
> > >> >> Philip
> > >> >>
> > >> >> On Apr 21, 2013, at 12:37 AM, Oleg Ruchovets <[EMAIL PROTECTED]> wrote:
> > >> >>
> > >> >> > Hi,
> > >> >> >   I have one producer for Kafka and 2 consumers.
> > >> >> > I want to consume the produced events into HDFS and Storm. I will
> > >> >> > copy to HDFS every hour, but to Storm every 10 seconds.
> > >> >> >
> > >> >> > Question: Is this supported by Kafka? Where can I read about how
> > >> >> > to organize 1 producer and 2 consumers?
> > >> >> >
> > >> >> > Thanks
> > >> >> > Oleg.
> > >> >>
> > >>
> >
>
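[Editor's note: the consumer-group semantics discussed in this thread can be illustrated with a small, self-contained sketch. This is plain Python, not the Kafka API — the function and names are purely illustrative. Each group receives every message; within a group, each partition is owned by exactly one consumer, so members of the same group split the data.

```python
# Conceptual model of Kafka consumer-group delivery (not the Kafka API).
from collections import defaultdict

def deliver(messages, partitions, groups):
    """Distribute messages round-robin across partitions, then hand each
    partition's messages to exactly one consumer per group.

    groups: dict mapping group name -> number of consumers in that group.
    Returns: dict mapping (group, consumer_index) -> list of messages.
    """
    # Spread messages over partitions (the "-1"/random-partition producer
    # behaves similarly in distribution terms).
    parts = defaultdict(list)
    for i, msg in enumerate(messages):
        parts[i % partitions].append(msg)

    received = defaultdict(list)
    for group, n_consumers in groups.items():
        # Within a group, each partition is consumed by exactly one member.
        for p, msgs in parts.items():
            owner = p % n_consumers
            received[(group, owner)].extend(msgs)
    return received

msgs = [f"m{i}" for i in range(10)]

# One group with 2 consumers: the data is split ~50/50 between them
# (the situation Oleg observed).
split = deliver(msgs, partitions=2, groups={"g": 2})

# Two groups with 1 consumer each: every consumer sees 100% of the data
# (the behavior Oleg wants for the Storm and HDFS consumers).
full = deliver(msgs, partitions=2, groups={"storm": 1, "hdfs": 1})
```

With 10 messages and 2 partitions, `split` gives each of the two same-group consumers 5 messages, while in `full` both the "storm" and "hdfs" consumers receive all 10.]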
