Zookeeper >> mail # user >> ZooKeeper Memory Usage


Thread messages:
  Mike Schilli, 2012-02-09 01:23
  César Álvarez Núñez, 2012-02-09 10:05
  Camille Fournier, 2012-02-09 14:14
  César Álvarez Núñez, 2012-02-09 14:47
  Camille Fournier, 2012-02-09 14:59
  César Álvarez Núñez, 2012-02-09 16:33

Re: ZooKeeper Memory Usage
This is interesting and important.

Cesar, what JVM options are you running with? Can you try the options in:

https://cwiki.apache.org/confluence/display/ZOOKEEPER/Troubleshooting

At least get the GC logs so that we can take a look?
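
For reference, GC logging on a JVM of that era is typically enabled through the JVMFLAGS environment variable, which zkServer.sh appends to the java command line (conventionally set in conf/java.env). The flags below are an illustrative sketch, not taken from this thread; the -Xmx value and log path are placeholders:

```shell
# Sketch: enable GC logging for ZooKeeper (JDK 6/7-era HotSpot flags).
# Adjust -Xmx and the log path for your deployment.
export JVMFLAGS="-Xmx4g -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/var/log/zookeeper/gc.log"
```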

This will be very interesting.

mahadev
2012/2/9 César Álvarez Núñez <[EMAIL PROTECTED]>:
> In my case, our stress test shows a linear increase of "tenured memory"
> from 0 to > 3GiB with ZK 3.4.0, whereas the same stress test with 3.3.3
> keeps "tenured memory" stable and < 10MiB.
>
> The stress test performs many zNode creations and deletions, but the overall
> zk usage at any moment in time was relatively small.
>
> BR,
> /César.
>
> On Thu, Feb 9, 2012 at 3:14 PM, Camille Fournier <[EMAIL PROTECTED]> wrote:
>
>> This is really a question about how the JVM grows its heap and resizes
>> it. If the JVM cannot allocate enough memory for the process because you
>> didn't set the max memory high enough, it will fall over. ZooKeeper keeps
>> its entire state in memory for performance reasons; if it were to swap, that
>> would be quite bad for performance.
>>
>> C
>> On Feb 8, 2012 8:23 PM, "Mike Schilli" <[EMAIL PROTECTED]> wrote:
>>
>> > We've got a ZooKeeper instance that's using about 5 GB of resident
>> > memory. Every time we restart it, it starts at 200MB, and then grows
>> > slowly until it is back at 5 GB.
>> >
>> > The large footprint is related to how much data we've got in there.
>> > What's interesting, though, is that the process size doesn't shrink if
>> > we purge some of the data.
>> >
>> > Now, this isn't a big problem; I'm just curious whether the process will
>> > fall over at some point if it can't get more memory, or whether it'll just
>> > make do by caching less data.
>> >
>> > Also, if I remember correctly, there's a configuration variable to set
>> > the maximum size, what happens if ZK reaches that?
>> >
>> > -- Mike
>> >
>> > Mike Schilli
>> > [EMAIL PROTECTED]
>> >
>>

--
Mahadev Konar
Hortonworks Inc.
http://hortonworks.com/
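
Camille's point about the JVM growing its heap (and the process size not shrinking after data is purged) can be observed from inside any JVM. The sketch below is generic and not ZooKeeper-specific; totalMemory() is what the JVM has currently reserved (part of the resident size), it grows toward maxMemory() (-Xmx), and is not necessarily given back to the OS after a GC:

```java
// Sketch: observe JVM heap sizing from inside the process.
// totalMemory() is the heap currently reserved from the OS; it grows
// toward maxMemory() (-Xmx) under load and is generally not returned
// to the OS after collection, which matches the observation that the
// resident size stays large even after data is purged.
public class HeapCheck {
    public static long usedMiB() {
        Runtime rt = Runtime.getRuntime();
        return (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
    }

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.printf("max:   %d MiB%n", rt.maxMemory() / (1024 * 1024));
        System.out.printf("total: %d MiB%n", rt.totalMemory() / (1024 * 1024));
        System.out.printf("used:  %d MiB%n", usedMiB());
    }
}
```

Running this periodically (or watching the same numbers via jstat or the GC log) shows the committed heap climbing toward the -Xmx ceiling rather than tracking live data size.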
Later replies:
  Ariel Weisberg, 2012-02-09 22:25
  César Álvarez Núñez, 2012-02-10 11:32
  Mahadev Konar, 2012-02-10 17:31
  Neha Narkhede, 2012-03-02 23:18