Kafka >> mail # user >> Hardware profile
Re: Hardware profile
We have multiple Kafka clusters, each with about 10 brokers right now. I'm
not sure about the network topology. What kind of info do you want to know?

Thanks,

Jun

On Fri, Mar 29, 2013 at 11:47 AM, David Arthur <[EMAIL PROTECTED]> wrote:

> How many brokers are you (LinkedIn) running? What kind of network topology?
>
>
> On 3/29/13 2:45 PM, Neha Narkhede wrote:
>
>> 1. We never share zookeeper and broker on the same hardware. Both need
>> significant memory to operate efficiently.
>> 2. 14 drive setup is just for Kafka. We have a separate disk for the OS,
>> AFAIK.
>>
>> Thanks,
>> Neha
>>
>> On Fri, Mar 29, 2013 at 11:37 AM, Ian Friedman <[EMAIL PROTECTED]> wrote:
>>
>>> Thanks Jun. Couple more questions:
>>> 1. Do you guys have dedicated hardware for ZooKeeper, or do a few
>>> machines run both a ZK node and a broker? If the latter, do you keep the
>>> ZK and Kafka data on separate volumes?
>>> 2. Is the 14 drive RAID setup just for Kafka data, with a separate
>>> drive for the OS?
>>>
>>> Thanks again,
>>> Ian
>>>
>>>
>>> On Friday, March 29, 2013 at 12:43 PM, Jun Rao wrote:
>>>
>>>> It's more or less the same. Our new server has 14 SATA disks, each 1 TB.
>>>> The new disks also have better write latency due to a larger write cache.
>>>>
>>>> Thanks,
>>>>
>>>> Jun
>>>>
>>>> On Fri, Mar 29, 2013 at 8:32 AM, Ian Friedman <[EMAIL PROTECTED]> wrote:
>>>>
>>>>> Hi all,
>>>>>
>>>>> I'm wondering how up to date the hardware specs listed on this page
>>>>> are:
>>>>> https://cwiki.apache.org/confluence/display/KAFKA/Operations
>>>>>
>>>>> We're evaluating hardware for a Kafka broker/ZK quorum buildout and
>>>>> looking for some tips and/or sample configurations if anyone can help
>>>>> us
>>>>> out with some recommendations.
>>>>>
>>>>> Thanks in advance,
>>>>> Ian
>>>>>
>>>>>
>>>>
>>>>
>>>
>
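The multi-disk layout Jun describes maps onto Kafka's `log.dirs` broker setting, which takes a comma-separated list of data directories (commonly one per physical disk in a JBOD layout, with the OS on its own drive as Neha notes). A minimal `server.properties` sketch of just those lines — the mount points and hostnames below are hypothetical, not LinkedIn's actual configuration:

```properties
# Hypothetical broker config sketch: one Kafka log directory per physical disk.
broker.id=0
log.dirs=/mnt/disk1/kafka-logs,/mnt/disk2/kafka-logs,/mnt/disk3/kafka-logs
# ...continue the list through /mnt/disk14/kafka-logs for a 14-disk layout.

# ZooKeeper ensemble on dedicated machines, per the discussion above.
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181
```

An alternative is presenting the disks to Kafka as a single RAID volume and listing one directory in `log.dirs`; the Operations wiki page linked in the thread discusses the trade-offs.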