Re: raise 1m max node data size
Currently, the limit on the sum of the sizes of the updates in a multi
command is still 1MB.  You cannot commit five 1MB nodes in a single multi-op.

~Jared
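
In code terms, Jared's point is that the whole multi request, not each
individual update, must fit under the cap. That cap is ZooKeeper's
jute.maxbuffer system property (just under 1MB by default); raising it means
setting -Djute.maxbuffer consistently on every server and client, which the
project generally discourages. A minimal sketch of the constraint using the
Java client, assuming a live ensemble at localhost:2181 and the default cap;
the /demo paths are made up for the example:

    import java.util.Arrays;
    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.Op;
    import org.apache.zookeeper.ZooKeeper;

    public class MultiSizeLimit {
        public static void main(String[] args) throws Exception {
            ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, null);
            byte[] half = new byte[512 * 1024];  // 512KB per update
            try {
                // Each update is under 1MB on its own, but the multi
                // request as a whole (~1.5MB) blows the cap and fails.
                zk.multi(Arrays.asList(
                    Op.setData("/demo/a", half, -1),  // -1 = any version
                    Op.setData("/demo/b", half, -1),
                    Op.setData("/demo/c", half, -1)));
            } catch (KeeperException e) {
                System.err.println("multi rejected: " + e);
            } finally {
                zk.close();
            }
        }
    }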

On Thu, Aug 11, 2011 at 11:20 AM, Ted Dunning <[EMAIL PROTECTED]> wrote:

> Another way to solve this is to use the multi command.
>
> The idea would be that you would upload multiple pieces of the large object
> separately into different znodes (without using multi).
>
> Then you would update a pointer node that has references to the pieces
> while controlling for the version of the pieces (using a multi).
>
> On Thu, Aug 11, 2011 at 6:08 AM, Will Johnson
> <[EMAIL PROTECTED]> wrote:
>
> > We have a situation where 99.9% of all data stored in ZooKeeper will be
> > well under the 1MB limit (probably under 1KB as well), but there is a
> > small possibility that at some point users may do something to cross
> > that barrier.  ...
> > Is there some configuration parameter I am missing or a code change I
> > can make?  Or have people solved this another way?
> >
>
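
As a concrete illustration of Ted's recipe, a sketch with the Java client.
The /blobs layout, the 512KB chunk size, and the newline-separated
"path:version" pointer format are assumptions made up for this example, and
it presumes the /blobs parent and the pointer znode already exist (no
retries or cleanup):

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;
    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.Op;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;
    import org.apache.zookeeper.data.Stat;

    public class ChunkedWrite {
        static final int CHUNK = 512 * 1024;  // stay well under the 1MB cap

        public static void write(ZooKeeper zk, String name, byte[] blob)
                throws Exception {
            // 1. Upload the pieces as separate znodes, one plain write
            //    each (no multi needed at this stage).
            List<String> refs = new ArrayList<String>();
            for (int i = 0, n = 0; i < blob.length; i += CHUNK, n++) {
                byte[] piece = Arrays.copyOfRange(
                    blob, i, Math.min(i + CHUNK, blob.length));
                String path = "/blobs/" + name + "-chunk" + n;
                Stat stat = zk.exists(path, false);
                if (stat == null) {
                    zk.create(path, piece, ZooDefs.Ids.OPEN_ACL_UNSAFE,
                              CreateMode.PERSISTENT);
                    stat = zk.exists(path, false);
                } else {
                    stat = zk.setData(path, piece, stat.getVersion());
                }
                refs.add(path + ":" + stat.getVersion());
            }
            // 2. Swing the small pointer node while pinning each piece's
            //    version with Op.check: if any chunk changed since step 1,
            //    the whole multi fails and nothing is committed.
            List<Op> ops = new ArrayList<Op>();
            StringBuilder pointer = new StringBuilder();
            for (String ref : refs) {
                int colon = ref.lastIndexOf(':');
                ops.add(Op.check(ref.substring(0, colon),
                                 Integer.parseInt(ref.substring(colon + 1))));
                pointer.append(ref).append('\n');
            }
            // Assumes the pointer znode /blobs/<name> already exists.
            ops.add(Op.setData("/blobs/" + name,
                               pointer.toString().getBytes(), -1));
            zk.multi(ops);
        }
    }

The pointer node stays tiny, so it is always safe to write; readers fetch
the pointer first and then the chunks it names, and a writer whose chunks
were modified concurrently sees the multi fail atomically and can retry.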