Re: Capacity planning for kafka
Jun Rao 2013-05-16, 04:46
In general, Kafka brokers are light on CPU, memory, and I/O. We do rely on
the broker server to cache all recent data in pagecache. The biggest
constraint is often disk space, especially if you keep the default
retention time of 7 days. From this perspective, HDDs are better than SSDs
since the per-MB cost is lower.
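Plugging the numbers from the original question into that disk-space constraint gives a rough lower bound. A minimal back-of-the-envelope sketch (the 1.5 Gbps ingest rate, replication factor 3, and 7-day retention come from the thread; the assumption that no compression or log compaction reduces the stored volume is mine):

```python
# Rough Kafka disk-capacity estimate. Assumes raw bytes on disk equal
# bytes produced (no compression, no compaction) -- a worst case.

INGEST_GBPS = 1.5          # producer throughput from the question
REPLICATION_FACTOR = 3     # from the question
RETENTION_DAYS = 7         # Kafka's default retention
SECONDS_PER_DAY = 86_400
TB = 1e12                  # decimal terabytes

bytes_per_sec = INGEST_GBPS * 1e9 / 8          # 0.1875 GB/s
bytes_per_day = bytes_per_sec * SECONDS_PER_DAY
total_bytes = bytes_per_day * RETENTION_DAYS * REPLICATION_FACTOR

print(f"Per day (one copy): {bytes_per_day / TB:.1f} TB")
print(f"Cluster total:      {total_bytes / TB:.1f} TB")
```

At these rates the cluster needs on the order of 340 TB of raw disk before accounting for filesystem overhead or headroom, which illustrates why the per-MB cost of HDDs tends to dominate the hardware decision.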
On Tue, May 14, 2013 at 11:31 PM, anand nalya <[EMAIL PROTECTED]> wrote:
> We are capacity planning for kafka deployment (Replication factor 3) in
> production environment, the producer is producing data at 1.5Gbps. Total
> number of producers will be around 500 and there will be 100 consumers. How
> many cores would be required to support them? And are there any known
> repercussions of running kafka brokers alongside other processor-intensive
> processes? Disks will still be separate for kafka and other processes.
> We are also trying to decide between SSDs and HDDs. Are there any known
> production deployments of kafka over SSDs?