Kafka >> mail # user >> Capacity planning for kafka


anand nalya 2013-05-15, 06:32
Re: Capacity planning for kafka
In general, Kafka brokers are light on CPU, memory, and I/O. We do rely on
the broker server to cache all recent data in the page cache. The biggest
constraint is often disk space, especially if you keep the default
retention time of 7 days. From this perspective, HDDs are better than SSDs
since the per-MB cost is lower.
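[Editor's note: Jun's point that disk space dominates can be checked with quick arithmetic using the figures from the original question below (1.5 Gbps aggregate produce rate, replication factor 3, default 7-day retention). This is a rough sketch only; it ignores compression, index files, and filesystem overhead, so treat the result as a lower bound.]

```python
# Back-of-envelope disk sizing from the numbers in this thread.
GBPS = 1.5            # aggregate producer throughput, gigabits per second
RETENTION_DAYS = 7    # Kafka's default log retention
REPLICATION = 3       # replication factor from the question

gb_per_day = GBPS / 8 * 86_400                  # GB written per day (8 Gb = 1 GB)
raw_tb = gb_per_day * RETENTION_DAYS / 1000     # TB retained, single copy
total_tb = raw_tb * REPLICATION                 # TB across all replicas

print(f"single-copy retention: {raw_tb:.1f} TB")
print(f"with replication x{REPLICATION}: {total_tb:.1f} TB")
# single-copy retention: 113.4 TB
# with replication x3: 340.2 TB
```

At roughly 340 TB of retained data, the per-MB cost argument for HDDs over SSDs becomes concrete.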

Thanks,

Jun
On Tue, May 14, 2013 at 11:31 PM, anand nalya <[EMAIL PROTECTED]> wrote:

> Hi,
>
> We are doing capacity planning for a Kafka deployment (replication factor 3) in
> a production environment; the producers generate data at 1.5 Gbps. There will
> be around 500 producers and 100 consumers in total. How many cores would be
> required to support them? Also, are there any known repercussions of running
> Kafka brokers alongside other processor-intensive processes? Disks will still
> be separate for Kafka and the other processes.
>
> We are also trying to decide between SSDs and HDDs. Are there any known
> production deployments of Kafka on SSDs?
>
> Thanks,
> Anand
>

 