Zookeeper >> mail # user >> raise 1m max node data size

Jared Cantwell 2011-08-11, 13:08
Re: raise 1m max node data size
I think you can achieve this by passing a command-line argument to the
server:

-Djute.maxbuffer=<bytes>
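
To make that concrete, here is a sketch assuming a standard tarball
install (the 4 MB figure is only an example; note that jute.maxbuffer is
a plain Java system property, so it must be set on the client JVMs as
well as on every server, or oversized reads and writes will still fail):

    # Illustrative only: raise the limit to 4 MB on the server.
    export SERVER_JVMFLAGS="-Djute.maxbuffer=4194304"
    bin/zkServer.sh restart

    # And on clients, e.g. for zkCli.sh:
    export CLIENT_JVMFLAGS="-Djute.maxbuffer=4194304"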

~Jared

On Thu, Aug 11, 2011 at 7:08 AM, Will Johnson
<[EMAIL PROTECTED]> wrote:

> We have a situation where 99.9% of all data stored in ZooKeeper will be
> well under the 1 MB limit (probably under 1 KB as well), but there is a
> small possibility that at some point users may do something to cross that
> barrier.  I'd like to raise the max to a higher number, realizing that if
> we do hit that case performance may suffer, but that's better than having
> the app crash.  I've looked through the docs and code and tried changing
> org.apache.zookeeper.server.quorum.QuorumCnxManager.PACKETMAXSIZE to a
> larger number, but something still seems to be blocking my test of larger
> data sizes.
>
> Is there some configuration parameter I am missing, or a code change I
> can make?  Or have people solved this another way?  My first inclination
> was to split larger data streams across multiple nodes, but that seems to
> cause lots of problems with watches and atomicity that I don't think are
> easily solvable.
>
> - will
>
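On Will's chunking idea: one workaround is to write the chunks under a
fresh, versioned directory and then publish a small pointer znode in a
single setData call, so watches and the atomic swap apply only to the
pointer and readers never see a half-written set of chunks. A rough
sketch against the plain ZooKeeper Java API (the /app/* layout, the
512 KB chunk size, and the assumption that /app, /app/blobs, and
/app/current already exist are all illustrative; error handling and
chunk garbage collection are elided):

    import java.nio.charset.StandardCharsets;
    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;
    import org.apache.zookeeper.data.Stat;

    public class ChunkedZnode {
        private static final int CHUNK = 512 * 1024;  // stay well under 1 MB

        // Writes data as /app/blobs/v<version>/0..k, then repoints
        // /app/current at the new directory in one atomic setData.
        static void write(ZooKeeper zk, long version, byte[] data)
                throws KeeperException, InterruptedException {
            String dir = "/app/blobs/v" + version;  // hypothetical layout
            zk.create(dir, new byte[0],
                      ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
            for (int i = 0; i * CHUNK < data.length; i++) {
                byte[] chunk = Arrays.copyOfRange(
                        data, i * CHUNK, Math.min((i + 1) * CHUNK, data.length));
                zk.create(dir + "/" + i, chunk,
                          ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
            }
            // Publish: readers only watch this small pointer node.
            zk.setData("/app/current", dir.getBytes(StandardCharsets.UTF_8), -1);
        }

        // Follows the pointer, then reassembles the chunks in order.
        static byte[] read(ZooKeeper zk)
                throws KeeperException, InterruptedException {
            Stat stat = new Stat();
            String dir = new String(
                    zk.getData("/app/current", false, stat),
                    StandardCharsets.UTF_8);
            List<byte[]> chunks = new ArrayList<>();
            int total = 0;
            for (int i = 0; zk.exists(dir + "/" + i, false) != null; i++) {
                byte[] c = zk.getData(dir + "/" + i, false, null);
                chunks.add(c);
                total += c.length;
            }
            byte[] out = new byte[total];
            int off = 0;
            for (byte[] c : chunks) {
                System.arraycopy(c, 0, out, off, c.length);
                off += c.length;
            }
            return out;
        }
    }

The trade-off is real, though: stale chunk directories need cleanup, and
concurrent writers need coordination (e.g. distinct version numbers),
which is why simply raising jute.maxbuffer, as suggested above, is
attractive when oversized data is rare.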
Further replies in this thread:
Ted Dunning 2011-08-11, 17:20
Jared Cantwell 2011-08-11, 18:00
Ted Dunning 2011-08-11, 20:16
Jared Cantwell 2011-08-11, 20:29