Zookeeper >> mail # user >> C binding zoo_get(): need to know max data length in advance


Re: C binding zoo_get(): need to know max data length in advance
Where is this jute.maxbuffer defined?
I have been looking for a value like that.
The max znode data size is configurable.  Is there any way to access this value from a zookeeper client?
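For context on the question above: jute.maxbuffer is a Java system property read by the ZooKeeper server (and the Java client), defaulting to 0xfffff bytes (just under 1 MB); it is not exposed through the client API, so a C client cannot query it at runtime. A sketch of how it is typically raised, with an illustrative value:

```shell
# Server side: raise the znode size limit (default 0xfffff, ~1 MB).
# Export before starting zkServer.sh (or set in conf/zookeeper-env.sh):
export SERVER_JVMFLAGS="-Djute.maxbuffer=4194304"   # 4 MB, illustrative

# Java clients must set the same property, or reads of large znodes fail:
#   java -Djute.maxbuffer=4194304 -cp ... MyClient
```

Note that server and clients must agree on the limit; raising it on only one side still causes failures on the other.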
On Friday, January 10, 2014 2:03 PM, Raúl Gutiérrez Segalés <[EMAIL PROTECTED]> wrote:
 
On 10 January 2014 09:13, Marshall McMullen <[EMAIL PROTECTED]> wrote:

> I think this is a pretty common use-case actually. If one client has put
> something into zookeeper and another client is trying to pull it out, it may
> not know in advance how big the data client #1 put in is. What we do locally
> is have a wrapper around zoo_get that starts with a reasonable default for
> buffer_len. If it fails b/c the data is larger than that, then you can
> inspect the size inside the returned Stat* and then re-issue the get with
> the correct value.

Heh, too early - I read zoo_create instead of zoo_get.  I guess that makes sense in memory-constrained environments, but most of the time you'll probably be serializing your zoo_get calls, so you can get away with a single statically allocated buffer sized to jute.maxbuffer, no?
-rgs