ZooKeeper, mail # user - C binding zoo_get(): need to know max data length in advance


Re: C binding zoo_get(): need to know max data length in advance
Raúl Gutiérrez Segalés 2014-01-10, 19:03
On 10 January 2014 09:13, Marshall McMullen <[EMAIL PROTECTED]> wrote:

> I think this is a pretty common use-case actually. If one client has put
> something into ZooKeeper and another client is trying to pull it out, it
> may not know in advance how big the data client #1 put in. What we do
> locally is have a wrapper around zoo_get that starts with a reasonable
> default for buffer_len. If it fails b/c the data is larger than that, then
> you can inspect the size inside the returned Stat* and re-issue the get
> with the correct value.
>

Heh, too early - I read zoo_create instead of zoo_get.  I guess that makes
sense in memory-constrained environments, but most of the time you'll
probably be serializing your zoo_get calls and can get away with a single
buffer statically allocated to jute.maxbuffer, no?

-rgs
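
A minimal sketch of that single-static-buffer idea, assuming the gets really
are serialized and the ensemble runs with the default jute.maxbuffer of
0xfffff bytes; MAX_ZNODE_DATA and get_node are illustrative names, not part
of the ZooKeeper C API:

#include <zookeeper/zookeeper.h>

#define MAX_ZNODE_DATA 0xfffff          /* ZooKeeper's default jute.maxbuffer
                                           (just under 1 MB); adjust if the
                                           ensemble raises that limit */

static char get_buf[MAX_ZNODE_DATA];    /* reused across serialized gets */

int get_node(zhandle_t *zh, const char *path, struct Stat *stat, int *len)
{
    *len = (int)sizeof(get_buf);        /* in: capacity, out: bytes copied */
    return zoo_get(zh, path, 0 /* no watch */, get_buf, len, stat);
}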

>
> On Fri, Jan 10, 2014 at 10:02 AM, Raúl Gutiérrez Segalés <
> [EMAIL PROTECTED]> wrote:
>
> > On 10 January 2014 08:04, Kah-Chan Low <[EMAIL PROTECTED]> wrote:
> >
> > > int zoo_get(zhandle_t *zh, const char *path, int watch, char *buffer,
> > >         int* buffer_len, struct Stat *stat)
> > >
> > > Developer has to anticipate the max. size of the node data.  Is there
> > > any way to get around this?
> > >
> >
> > In which case would you not know the size?
> >
> > -rgs
> >
>
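
For reference, a minimal sketch of the retry wrapper Marshall describes at
the top of the thread, assuming the header is installed as
zookeeper/zookeeper.h; fetch_znode_data and the 1024-byte starting size are
illustrative, and the check below uses Stat.dataLength to detect truncation
rather than relying on an error code:

/*
 * Hypothetical helper (not part of the ZooKeeper C API): start with a
 * small default buffer; if Stat.dataLength shows the znode data is larger
 * than what we allocated, grow the buffer and re-issue the get.  On ZOK
 * the caller owns *out and must free() it.
 */
#include <stdlib.h>
#include <zookeeper/zookeeper.h>

static int fetch_znode_data(zhandle_t *zh, const char *path,
                            char **out, int *out_len)
{
    int cap = 1024;                      /* reasonable default for buffer_len */
    for (;;) {
        char *buf = malloc(cap);
        if (buf == NULL)
            return ZSYSTEMERROR;

        struct Stat st;
        int len = cap;
        int rc = zoo_get(zh, path, 0 /* no watch */, buf, &len, &st);
        if (rc != ZOK) {
            free(buf);
            return rc;
        }
        if (st.dataLength <= cap) {      /* everything fit: hand it back */
            *out = buf;
            *out_len = len;
            return ZOK;
        }
        cap = st.dataLength;             /* truncated: the Stat tells us the
                                            real size, so retry with that;
                                            loop in case the node grows again
                                            in between */
        free(buf);
    }
}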