Kafka, mail # user - Kafka crashed after multiple topics were added


Vadim Keylis 2013-08-14, 06:04
Jun Rao 2013-08-14, 14:39
Vadim Keylis 2013-08-14, 15:20
Vadim Keylis 2013-08-14, 16:01
Joel Koshy 2013-08-14, 16:27
Vadim Keylis 2013-08-14, 16:47
Vadim Keylis 2013-08-14, 17:32
Re: Kafka crashed after multiple topics were added
Joel Koshy 2013-08-14, 20:59
> One more question. What is the optimal number of partitions per topic to have?

>> Do you guys have a hard limit on the maximum number of topics Kafka can
>> support? Are there any other OS-level settings I should be concerned
>> about that may cause Kafka to crash?

These would be highly specific to capacity planning for your use
cases, but you would typically need to take into account the volume of
each topic, desired consumer parallelism, available hardware and so
on. We have an operations wiki
(https://cwiki.apache.org/confluence/display/KAFKA/Operations), but it
definitely needs some updates for 0.8.
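To make the capacity-planning point concrete, here is a back-of-the-envelope file handle estimate using the figures from this thread (150 topics, 36 partitions, 3x replication, 3 brokers). The number of segments per partition is an assumption; it depends on your retention period and segment size.

```shell
#!/bin/sh
# Rough file handle estimate for one Kafka 0.8 broker.
# Topic/partition/replication/broker counts come from this thread;
# SEGMENTS_PER_PARTITION is an assumption, not a value from the thread.
TOPICS=150
PARTITIONS_PER_TOPIC=36
REPLICATION=3
BROKERS=3
SEGMENTS_PER_PARTITION=4   # assumed; tune for your retention settings

# Each segment keeps a .log and a .index file open on the broker.
TOTAL_REPLICAS=$((TOPICS * PARTITIONS_PER_TOPIC * REPLICATION))
REPLICAS_PER_BROKER=$((TOTAL_REPLICAS / BROKERS))
HANDLES=$((REPLICAS_PER_BROKER * SEGMENTS_PER_PARTITION * 2))

echo "partition replicas per broker: $REPLICAS_PER_BROKER"
echo "estimated open file handles:   $HANDLES (plus client sockets)"
```

With these assumed inputs the estimate comes out well above both the 10240 hard limit reported in this thread and the 30k figure mentioned below, which is consistent with the advice to raise the limit. Leave generous headroom for producer/consumer socket connections, and compare the result against `ulimit -Hn` on each broker.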

>> I am still trying to understand how to recover from failure and start
>> service.
>>
>> The following error causes kafka not to restart
>> [2013-08-13 17:20:08,992] FATAL Fatal error during KafkaServerStable
>> startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
>> java.lang.IllegalStateException: Found log file with no corresponding
>> index file.

Not sure how you got into that state. It could be that while a log
segment was being created you ran out of file handles - i.e., the log
file was created but not the index file - although I would have to look
at the code more closely to confirm. In any event, I think in this
case you would just need to delete these log files from disk.
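A small sketch of how one might locate the offending segments before deleting anything: it lists every `.log` file under the broker's data directory that has no matching `.index` file, which is the condition behind the IllegalStateException above. The default path is a hypothetical example; use the `log.dirs` value from your broker's server.properties, and review the output before removing files.

```shell
#!/bin/sh
# Find segment .log files with no corresponding .index file.
# LOG_DIR is an example path, not one from this thread.
LOG_DIR="${1:-/var/kafka-logs}"

find "$LOG_DIR" -name '*.log' | while read -r log; do
  index="${log%.log}.index"
  if [ ! -f "$index" ]; then
    echo "orphan segment: $log"
    # Once verified, remove it so the broker can start:
    # rm "$log"
  fi
done
```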

>>
>>
>> On Wed, Aug 14, 2013 at 9:27 AM, Joel Koshy <[EMAIL PROTECTED]> wrote:
>>
>>> We use 30k as the limit. It is largely driven by the number of partitions
>>> (including replicas), retention period and number of
>>> simultaneous producers/consumers.
>>>
>>> In your case it seems you have 150 topics, 36 partitions, 3x replication -
>>> with that configuration you will definitely need to up your file handle
>>> limit.
>>>
>>> Thanks,
>>>
>>> Joel
>>>
>>> On Wednesday, August 14, 2013, Vadim Keylis wrote:
>>>
>>> > Good morning Jun. A correction in terms of the open file handle limit:
>>> > I was wrong. I re-ran the command ulimit -Hn and it shows 10240. Which
>>> > brings me to the next question: how do I appropriately calculate the
>>> > open file handle limit required by Kafka? What are your settings for
>>> > this field?
>>> >
>>> > Thanks,
>>> > Vadim
>>> >
>>> >
>>> >
>>> > On Wed, Aug 14, 2013 at 8:19 AM, Vadim Keylis <[EMAIL PROTECTED]> wrote:
>>> >
>>> > > Good morning Jun. We are using Kafka 0.8, which I built from trunk in
>>> > > June or early July. I forgot to mention that running ulimit on the
>>> > > hosts shows the open file handle limit set to unlimited. What are the
>>> > > ways to recover from the last error and restart Kafka? How can I
>>> > > delete a topic with the Kafka service down on all hosts? How many
>>> > > topics can Kafka support without hitting the too-many-open-files
>>> > > exception? What did you set the open file handle limit to in your
>>> > > cluster?
>>> > >
>>> > > Thanks so much,
>>> > > Vadim
>>> > >
>>> > > Sent from my iPhone
>>> > >
>>> > > On Aug 14, 2013, at 7:38 AM, Jun Rao <[EMAIL PROTECTED]> wrote:
>>> > >
>>> > > > The first error is caused by too many open file handles. Kafka
>>> > > > keeps each of the segment files open on the broker. So, the more
>>> > > > topics/partitions you have, the more file handles you need. You
>>> > > > probably need to increase the open file handle limit and also
>>> > > > monitor the number of open file handles so that you can get an
>>> > > > alert when it gets close to the limit.
>>> > > >
>>> > > > Not sure why you get the second error on restart. Are you using
>>> > > > the 0.8 beta1 release?
>>> > > >
>>> > > > Thanks,
>>> > > >
>>> > > > Jun
>>> > > >
>>> > > >
>>> > > > On Tue, Aug 13, 2013 at 11:04 PM, Vadim Keylis <[EMAIL PROTECTED]> wrote:
>>> > > >
>>> > > >> We have a 3-node Kafka cluster. I initially created 4 topics.
>>> > > >> I wrote a small shell script to create 150 topics.
>>> > > >>
>>> > > >> TOPICS=$(< $1)
>>> > > >> for topic in $TOPICS
>>> > > >> do
>>> > > >>   echo "/usr/local/kafka/bin/kafka-create-topic.sh --replica 3

On Wed, Aug 14, 2013 at 10:31 AM, Vadim Keylis <[EMAIL PROTECTED]> wrote:
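The quoted script is cut off in the archive. A hedged reconstruction of such a bulk-creation loop against the 0.8-era tooling might look like the following; as in the original, each command is echoed (pipe the output to sh to execute). The ZooKeeper address is an assumption, and the partition count of 36 is taken from this thread's setup but should be treated as an example for your own cluster.

```shell
#!/bin/bash
# Sketch of a bulk topic-creation loop like the one quoted above
# (the archive truncates the original). Flags follow the 0.8-era
# kafka-create-topic.sh; ZOOKEEPER is an assumed address.
ZOOKEEPER="localhost:2181"   # assumed; point at your ZooKeeper ensemble

TOPICS=$(< "$1")             # file listing one topic name per line
for topic in $TOPICS
do
  echo "/usr/local/kafka/bin/kafka-create-topic.sh --zookeeper $ZOOKEEPER --replica 3 --partition 36 --topic $topic"
done
```

Note that with 150 topics this loop creates 150 × 36 × 3 partition replicas across the cluster, which is what drove the file handle exhaustion discussed above.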

 
Vadim Keylis 2013-08-15, 18:54
Jay Kreps 2013-08-15, 20:59
Vadim Keylis 2013-08-15, 22:41
Jay Kreps 2013-08-15, 23:08
Vadim Keylis 2013-08-15, 23:38
Jun Rao 2013-08-16, 04:04