Re: How to delete ZNode with 200K items
Hi Jordan,

We had the same problem a few months ago.
Are you getting an IOException("Unreasonable length = " + len) on the
client side? If so, you have to set the system property "jute.maxbuffer"
to a value >= <length> on the ZooKeeper client side (a minimal sketch of
the workaround follows the excerpt below).
/César.

Extracted from org.apache.jute.BinaryInputArchive:

>     // Read once in a static initializer, so "jute.maxbuffer" must be
>     // set before this class is loaded.
>     static public final int maxBuffer = determineMaxBuffer();
>
>     private static int determineMaxBuffer() {
>         String maxBufferString = System.getProperty("jute.maxbuffer");
>         try {
>             return Integer.parseInt(maxBufferString);
>         } catch (Exception e) {
>             // Default when unset or unparsable: 0xfffff bytes (~1 MiB).
>             return 0xfffff;
>         }
>     }
>
>     public byte[] readBuffer(String tag) throws IOException {
>         int len = readInt(tag);
>         if (len == -1) return null;
>         // Any response longer than maxBuffer is rejected outright.
>         if (len < 0 || len > maxBuffer) {
>             throw new IOException("Unreasonable length = " + len);
>         }
>         byte[] arr = new byte[len];
>         in.readFully(arr);
>         return arr;
>     }
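In case it helps, here is a minimal sketch of the workaround. The 4 MB
value, connect string, and znode path are made-up examples, not values
from this thread. As the excerpt above shows, jute.maxbuffer is read in
a static initializer, so it must be set before any ZooKeeper class is
loaded; passing it on the command line (java -Djute.maxbuffer=4194304
...) is the reliable way.

    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class BigZnodeClient {
        public static void main(String[] args) throws Exception {
            // Only effective if no ZooKeeper class has been loaded yet
            // in this JVM; otherwise pass -Djute.maxbuffer=4194304 on
            // the command line instead.
            System.setProperty("jute.maxbuffer",
                    String.valueOf(4 * 1024 * 1024));

            ZooKeeper zk = new ZooKeeper("localhost:2181", 30000,
                    new Watcher() {
                        public void process(WatchedEvent event) { }
                    });
            // With the larger buffer, the getChildren() response for
            // the big znode should no longer trip the length check.
            System.out.println(zk.getChildren("/big-node", false).size());
            zk.close();
        }
    }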

On Thu, May 24, 2012 at 11:17 PM, Jordan Zimmerman <[EMAIL PROTECTED]>
wrote:
> We have a node that has 200K items and would like to delete them.
> getChildren() keeps failing. Is there anything that can be done?
>
> -JZ
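
For what it's worth, once getChildren() succeeds the children can be
removed one by one; a rough sketch (error handling and the recursive
case are left out, and version -1 simply means "match any version"):

    import java.util.List;
    import org.apache.zookeeper.ZooKeeper;

    // Hedged sketch: deletes every child of a flat znode, then the
    // znode itself. Assumes the children are leaves (no grandchildren).
    public static void deleteZnodeWithChildren(ZooKeeper zk, String path)
            throws Exception {
        List<String> children = zk.getChildren(path, false);
        for (String child : children) {
            zk.delete(path + "/" + child, -1); // -1 matches any version
        }
        zk.delete(path, -1);
    }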