Kafka >> mail # user >> Capacity planning for kafka

Re: Capacity planning for kafka
In general, Kafka brokers are light on CPU, memory and I/O. We do rely on
the broker server to cache all recent data in the pagecache. The biggest
constraint is often disk space, especially if you keep the default
retention time of 7 days. From that perspective, HDDs are better than SSDs
since the per-MB cost is lower.
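As a rough sketch of that disk-space constraint, here is a back-of-envelope calculation using the numbers from the question below (1.5 Gbps ingest, replication factor 3) together with the default 7-day retention. This is only an estimate: it assumes uncompressed data and steady throughput, and ignores index overhead and the extra headroom a real deployment would need.

```python
# Back-of-envelope Kafka disk sizing for the scenario in this thread.
# Assumptions: 1.5 Gbps sustained ingest, replication factor 3,
# default 7-day retention, no compression.

ingest_gbps = 1.5            # producer throughput, gigabits per second
replication_factor = 3
retention_days = 7

bytes_per_sec = ingest_gbps * 1e9 / 8           # 187,500,000 B/s
retention_secs = retention_days * 24 * 3600     # 604,800 s
raw_tb = bytes_per_sec * retention_secs / 1e12  # one copy of the data
total_tb = raw_tb * replication_factor          # all replicas on disk

print(f"one copy:  {raw_tb:.1f} TB")    # ~113.4 TB
print(f"with RF=3: {total_tb:.1f} TB")  # ~340.2 TB
```

So even before any headroom, this workload needs on the order of a few hundred TB across the cluster, which is why per-MB cost tends to dominate the SSD-vs-HDD decision here.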


On Tue, May 14, 2013 at 11:31 PM, anand nalya <[EMAIL PROTECTED]> wrote:

> Hi,
> We are doing capacity planning for a Kafka deployment (replication factor 3) in
> a production environment; the producers are generating data at 1.5 Gbps. The total
> number of producers will be around 500 and there will be 100 consumers. How
> many cores would be required to support them? And are there any known
> repercussions of running Kafka brokers alongside other processor-intensive
> processes? Disks will still be separate for Kafka and the other processes.
> We are also trying to decide between SSDs and HDDs. Are there any known
> production deployments of Kafka on SSDs?
> Thanks,
> Anand