Re: C binding zoo_get(): need to know max data length in advance
I think this is a pretty common use-case actually. If one client has put
something into ZooKeeper and another client is trying to pull it out, it may
not know in advance how big the data client #1 put in. What we do locally is
have a wrapper around zoo_get that starts with a reasonable default for
buffer_len. If it fails because the data is larger than that, then you can
inspect the size inside the returned Stat* and re-issue the get with
the correct value.
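
For reference, a minimal sketch of such a wrapper in C, assuming the standard
zoo_get() semantics (the call copies at most *buffer_len bytes into the caller's
buffer and reports the node's full data size in Stat.dataLength). The function
name get_node_data and the constant DEFAULT_BUF_LEN are made up for
illustration, not part of the ZooKeeper API:

/* Sketch of a zoo_get wrapper: try a default-sized buffer first, then
 * retry with the exact size reported in the Stat if the node was bigger. */
#include <stdlib.h>
#include <zookeeper/zookeeper.h>

#define DEFAULT_BUF_LEN 1024  /* "reasonable default" starting size */

/* Returns a malloc'd copy of the node data (caller frees), or NULL on error.
 * *data_len receives the actual data length (-1 for a node with no data). */
char *get_node_data(zhandle_t *zh, const char *path, int *data_len)
{
    struct Stat stat;
    int buf_len = DEFAULT_BUF_LEN;
    char *buf = malloc(buf_len);
    if (buf == NULL)
        return NULL;

    int rc = zoo_get(zh, path, 0, buf, &buf_len, &stat);
    if (rc != ZOK) {
        free(buf);
        return NULL;
    }

    /* If the node holds more data than the default buffer, re-issue the
     * get with the size reported in the Stat. */
    if (stat.dataLength > DEFAULT_BUF_LEN) {
        buf_len = stat.dataLength;
        char *bigger = realloc(buf, buf_len);
        if (bigger == NULL) {
            free(buf);
            return NULL;
        }
        buf = bigger;
        rc = zoo_get(zh, path, 0, buf, &buf_len, &stat);
        if (rc != ZOK) {
            free(buf);
            return NULL;
        }
    }

    *data_len = buf_len;
    return buf;
}

Note that the data can change between the two calls, so a production wrapper
would typically retry in a loop until the buffer is large enough.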
On Fri, Jan 10, 2014 at 10:02 AM, Raúl Gutiérrez Segalés <
[EMAIL PROTECTED]> wrote:

> On 10 January 2014 08:04, Kah-Chan Low <[EMAIL PROTECTED]> wrote:
>
> > int zoo_get(zhandle_t *zh, const char *path, int watch, char *buffer,
> >         int* buffer_len, struct Stat *stat)
> >
> > The developer has to anticipate the maximum size of the node data.  Is
> > there any way to get around this?
> >
>
> In which case would you not know the size?
>
> -rgs
>