Re: Memory leaks in zoo_multi API
We've done extensive leak detection against this code with tcmalloc's
debug library and not seen a memory leak, and we used multi ops almost
exclusively. Perhaps valgrind is doing a better job of finding the leak
than tcmalloc.
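(A sketch of that kind of run, for comparing the two tools: with the
binary linked against gperftools' tcmalloc, the heap checker is enabled
through an environment variable. The binary name here is a placeholder.)

    env HEAPCHECK=normal ./zk-client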

Are you using the synchronous or asynchronous version of multi,
i.e. zoo_multi or zoo_amulti?
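
For reference, here's a minimal sketch of the two call shapes
(illustration only, not code from this thread; it assumes an
already-connected zhandle_t, and the path /multi-demo is made up):

    #include <zookeeper/zookeeper.h>
    #include <stdio.h>

    static zoo_op_t ops[2];
    static zoo_op_result_t results[2];
    static char path_buf[64];

    static void init_ops(void) {
        /* One atomic transaction: create a node, then delete it again. */
        zoo_create_op_init(&ops[0], "/multi-demo", "v", 1,
                           &ZOO_OPEN_ACL_UNSAFE, 0, path_buf, sizeof(path_buf));
        zoo_delete_op_init(&ops[1], "/multi-demo", -1);
    }

    /* Synchronous: blocks until the transaction commits or fails. */
    static void run_sync(zhandle_t *zh) {
        init_ops();
        int rc = zoo_multi(zh, 2, ops, results);
        fprintf(stderr, "zoo_multi: %s\n", zerror(rc));
    }

    /* Asynchronous: returns at once; the completion fires later on the
     * client's completion thread. ops/results are static above because
     * they must stay valid until the completion runs. */
    static void multi_done(int rc, const void *data) {
        fprintf(stderr, "zoo_amulti completed: %s\n", zerror(rc));
    }

    static void run_async(zhandle_t *zh) {
        init_ops();
        int rc = zoo_amulti(zh, 2, ops, results, multi_done, NULL);
        fprintf(stderr, "zoo_amulti submitted: %s\n", zerror(rc));
    }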

On Fri, Oct 12, 2012 at 1:59 PM, Deepak Jagtap <[EMAIL PROTECTED]> wrote:

> Hi,
>
> I am using zookeeper-3.4.4 and frequently use multi-update operations.
>
> Running it under valgrind produced the following output:
>
> ==4056== 2,240 (160 direct, 2,080 indirect) bytes in 1 blocks are
> definitely lost in loss record 18 of 24
> ==4056==    at 0x4A04A28: calloc (vg_replace_malloc.c:467)
> ==4056==    by 0x504D822: create_completion_entry (zookeeper.c:2322)
> ==4056==    by 0x5052833: zoo_amulti (zookeeper.c:3141)
> ==4056==    by 0x5052A8B: zoo_multi (zookeeper.c:3240)
>
> Just curious: do I need to explicitly handle this cleanup by invoking
> some API, or is this a memory leak?
>
> It looks like the completion entries for the individual operations in a
> multi-update transaction are not getting freed. The size of the leak
> depends on the number of operations in a single multi-update
> transaction.
>
> Thanks & Regards,
>
> Deepak
>
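
(Records in the quoted format come from valgrind's full leak checker; a
typical invocation, with the client binary name as a placeholder, is:)

    valgrind --leak-check=full ./zk-client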