Kafka user mailing list - Kafka crashed after multiple topics were added


Re: Kafka crashed after multiple topics were added
Jay Kreps 2013-08-15, 23:08
The tradeoff is this:
Pro: more partitions mean more consumer parallelism. The total
threads/processes across all consumer machines can't exceed the partition
count.
Con: more partitions mean more file descriptors and hence smaller writes to
each file (so more random I/O).

Our setting is fairly arbitrary. The ideal number would be the smallest number
that satisfies your foreseeable need for consumer parallelism.

-Jay
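
As a rough illustration of that file-descriptor cost, here is a minimal
sketch (not part of the original thread) of the estimate quoted further down,
topics * partitions * replicas * 2 + open sockets; it assumes a single active
segment per partition, so real counts grow with retention and segment size:

    # Sketch of the open-file estimate from the quoted messages below:
    # topics * partitions * replicas * 2 (index file and log file) + sockets.
    # Assumes one active segment per partition; longer retention or smaller
    # segments multiply the real number.
    def estimate_open_files(topics, partitions, replicas, open_sockets=0):
        files_per_segment = 2  # one .log and one .index file per segment
        return topics * partitions * replicas * files_per_segment + open_sockets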
On Thu, Aug 15, 2013 at 3:41 PM, Vadim Keylis <[EMAIL PROTECTED]> wrote:

> Jay, thanks so much for explaining. What is the optimal number of
> partitions per topic? What was the reasoning behind your choice of 8
> partitions per topic?
>
> Thanks,
> Vadim
>
>
> On Thu, Aug 15, 2013 at 1:58 PM, Jay Kreps <[EMAIL PROTECTED]> wrote:
>
> > Technically it is
> >   topics * partitions * replicas * 2 (index file and log file)
> >   + # open sockets
> >
> > -Jay
> >
> >
> > On Thu, Aug 15, 2013 at 11:49 AM, Vadim Keylis <[EMAIL PROTECTED]> wrote:
> >
> > > Good morning Joel. Just to understand clearly how to predict the number
> > > of open files kept by Kafka:
> > >
> > > That is calculated by multiplying the number of topics * number of
> > > partitions * number of replicas. In our case it would be 150 * 36 * 3.
> > > Am I correct? How will the number of producers and consumers influence
> > > that calculation? Is it advisable to have fewer partitions? Does 36
> > > partitions sound reasonable?
> > >
> > > Thanks so much in advance
> > >
> > >
> > >
> > >
> > > On Wed, Aug 14, 2013 at 9:27 AM, Joel Koshy <[EMAIL PROTECTED]> wrote:
> > >
> > > > We use 30k as the limit. It is largely driven by the number of
> > > > partitions (including replicas), the retention period, and the number
> > > > of simultaneous producers/consumers.
> > > >
> > > > In your case it seems you have 150 topics, 36 partitions, 3x
> > > > replication - with that configuration you will definitely need to up
> > > > your file handle limit.
> > > >
> > > > Thanks,
> > > >
> > > > Joel
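
Plugging this thread's numbers into the estimate_open_files sketch above
shows why (an illustrative calculation, not from the original message; the
total is cluster-wide, and each broker holds only its share of the partition
replicas):

    # 150 topics * 36 partitions * 3 replicas * 2 files (index + log)
    print(estimate_open_files(topics=150, partitions=36, replicas=3))
    # => 32400 open files before counting sockets or extra segments,
    # already above the 30k limit mentioned in this message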
> > > >
> > > > On Wednesday, August 14, 2013, Vadim Keylis wrote:
> > > >
> > > > > Good morning Jun. A correction regarding the open file handle limit:
> > > > > I was wrong. I re-ran the command ulimit -Hn and it shows 10240,
> > > > > which brings me to the next question. How do I appropriately
> > > > > calculate the open file handles required by Kafka? What are your
> > > > > settings for this field?
> > > > >
> > > > > Thanks,
> > > > > Vadim
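
For checking those limits from inside a process on the broker host, Python's
standard resource module reads the same values (a small sketch, not from the
thread; the two numbers correspond to ulimit -Sn and ulimit -Hn):

    import resource

    # Per-process soft and hard limits on open file descriptors,
    # matching `ulimit -Sn` and `ulimit -Hn` on the same host.
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    print(f"soft limit: {soft}, hard limit: {hard}")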
> > > > >
> > > > >
> > > > >
> > > > > On Wed, Aug 14, 2013 at 8:19 AM, Vadim Keylis <[EMAIL PROTECTED]> wrote:
> > > > >
> > > > > > Good morning Jun. We are using Kafka 0.8, which I built from trunk
> > > > > > in June or early July. I forgot to mention that running ulimit on
> > > > > > the hosts shows the open file handle limit set to unlimited. What
> > > > > > are the ways to recover from the last error and restart Kafka? How
> > > > > > can I delete a topic with the Kafka service down on all hosts? How
> > > > > > many topics can Kafka support without hitting the too-many-open-files
> > > > > > exception? What did you set the open file handle limit to in your
> > > > > > cluster?
> > > > > >
> > > > > > Thanks so much,
> > > > > > Vadim
> > > > > >
> > > > > > Sent from my iPhone
> > > > > >
> > > > > > On Aug 14, 2013, at 7:38 AM, Jun Rao <[EMAIL PROTECTED]> wrote:
> > > > > >
> > > > > > > The first error is caused by too many open file handles. Kafka
> > > > > > > keeps each of the segment files open on the broker, so the more
> > > > > > > topics/partitions you have, the more file handles you need. You
> > > > > > > probably need to increase the open file handle limit and also
> > > > > > > monitor the number of open file handles so that you can get an
> > > > > > > alert when it gets close to the limit.
> > > > > > >
> > > > > > > Not sure why you get the second error on restart. Are you using