Kafka >> mail # user >> How much memory do I need for the ZooKeeper and Kafka servers?


Re: How much memory do I need for the ZooKeeper and Kafka servers?
We've added our production deployment experience for Kafka and
Zookeeper here -
https://cwiki.apache.org/confluence/display/KAFKA/Operations

Thanks,
Neha

On Tue, Oct 30, 2012 at 8:07 AM, Matthew Rathbone
<[EMAIL PROTECTED]> wrote:
> When thinking about memory for the broker, the only thing you should
> consider is the filesystem cache. The further behind production your
> consumers fall, the more memory matters (e.g. keeping your cache window
> larger than the gap between production and consumption).
>
> As a caveat, this only matters if you're really thrashing the brokers
> hard; we don't even see a blip when we consume from disk while pushing
> them as hard as we can :-).
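
As a back-of-envelope sketch of that cache-window idea (the rate and lag
numbers below are illustrative assumptions, not figures from this thread):

    # Rough page-cache sizing for a Kafka broker (illustrative numbers).
    # To serve a lagging consumer from cache rather than disk, the OS page
    # cache must still hold everything produced since that consumer's offset.

    produce_rate_mb_s = 10        # assumed aggregate produce rate per broker
    max_consumer_lag_s = 5 * 60   # assumed worst-case consumer lag (seconds)

    cache_needed_gb = produce_rate_mb_s * max_consumer_lag_s / 1024
    print(f"page-cache headroom needed: ~{cache_needed_gb:.1f} GB")
    # ~2.9 GB here; once the lag outgrows this window, reads hit the disk.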
>
>
>
> On Tue, Oct 30, 2012 at 8:21 AM, Jun Rao <[EMAIL PROTECTED]> wrote:
>
>> Yes.
>>
>> Thanks,
>>
>> Jun
>>
>> On Mon, Oct 29, 2012 at 9:34 PM, howard chen <[EMAIL PROTECTED]> wrote:
>>
>> > We only have a few internal consumers, so I assume they should be fine?
>> >
>> >
>> > On Tue, Oct 30, 2012 at 12:27 PM, Jun Rao <[EMAIL PROTECTED]> wrote:
>> > > A Kafka broker typically doesn't need a lot of memory, so 2GB is fine.
>> > > ZK memory depends on the number of consumers: more consumers mean more
>> > > offsets written to ZK.
>> > >
>> > > Thanks,
>> > >
>> > > Jun
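
For a rough sense of what those offsets cost ZooKeeper (the per-znode
budget below is a loose assumption, not a figure from this thread): each
consumer group keeps one offset znode per partition it consumes.

    # Rough ZooKeeper footprint of consumer offsets (illustrative numbers).
    consumer_groups = 5         # assumed number of consumer groups
    partitions_per_group = 100  # assumed partitions consumed by each group
    bytes_per_znode = 1024      # assumed ZK overhead per offset znode

    offset_znodes = consumer_groups * partitions_per_group
    zk_mb = offset_znodes * bytes_per_znode / 1024**2
    print(f"{offset_znodes} offset znodes ~ {zk_mb:.1f} MB of ZK memory")
    # Storage is tiny; the real cost of many consumers is the steady
    # stream of offset *writes* ZK has to handle.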
>> > >
>> > > On Mon, Oct 29, 2012 at 9:07 PM, howard chen <[EMAIL PROTECTED]>
>> wrote:
>> > >
>> > >> I understand the main limitation of a Kafka deployment is disk
>> > >> space.
>> > >>
>> > >> E.g.
>> > >>
>> > >> If I generate 10GB of messages per day, I have 2 nodes, and I need
>> > >> to keep data for 10 days, then I need
>> > >>
>> > >> 10GB * 10 / 2 = 50GB per node (of course there is overhead, but the
>> > >> requirement is roughly proportional).
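
The same estimate in code, with an assumed slack factor for index files
and segment rounding (the 1.2 is a guess, not a number from this thread):

    # Retention disk per broker, mirroring the arithmetic above.
    daily_gb = 10          # data produced per day
    retention_days = 10
    brokers = 2
    overhead = 1.2         # assumed slack for index files, segment rounding

    per_node_gb = daily_gb * retention_days / brokers * overhead
    print(f"~{per_node_gb:.0f} GB per node")  # ~60 GB; 100GB disks leave headroom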
>> > >>
>> > >> So if I deploy machines using the following setup, do you think it
>> > >> is reasonable?
>> > >>
>> > >> 2 x Kafka (100GB disk, 2x CPU, 2GB RAM)
>> > >> 3 x ZooKeeper (10GB disk, 1x CPU, 512MB RAM)
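
For reference, the broker retention knobs implied by a 10-day plan might
look roughly like this on a recent Kafka release (the partition count is
an assumption not stated in the thread, and very old releases named the
size limit log.retention.size rather than log.retention.bytes):

    # Values for the retention settings implied by the 10-day plan.
    retention_hours = 10 * 24              # -> log.retention.hours=240
    partitions_per_broker = 10             # assumed; not stated in the thread
    per_node_budget_gb = 60
    # log.retention.bytes applies per partition, so split the node budget:
    retention_bytes = per_node_budget_gb * 1024**3 // partitions_per_broker
    print(f"log.retention.hours={retention_hours}")
    print(f"log.retention.bytes={retention_bytes}")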
>> > >>
>> >
>>
>
>
>
> --
> Matthew Rathbone
> Foursquare | Software Engineer | Server Engineering Team
> [EMAIL PROTECTED] | @rathboma <http://twitter.com/rathboma> |
> 4sq <http://foursquare.com/rathboma>