HBase user mailing list: java.lang.NegativeArraySizeException: -1 in hbase


Thread:
Job Thomas 2013-09-04, 05:38
Ted Yu 2013-09-04, 06:10
Job Thomas 2013-09-04, 09:59
Jean-Marc Spaggiari 2013-09-04, 11:29
Jean-Marc Spaggiari 2013-09-09, 03:12
Jean-Marc Spaggiari 2013-09-09, 13:08
lars hofhansl 2013-09-09, 17:54
Re: java.lang.NegativeArraySizeException: -1 in hbase
That sounds correct. Can we mention it somewhere in our docs? Would that
be good?

-Anoop-

On Mon, Sep 9, 2013 at 11:24 PM, lars hofhansl <[EMAIL PROTECTED]> wrote:

> The 0.94.5 change (presumably HBASE-3996) is only forward compatible. M/R
> is a bit special in that the jars are shipped with the job.
>
> Here's a comment from Todd Lipcon on that issue:
> "The jar on the JT doesn't matter. Split computation and interpretation
> happens only in the user code – i.e on the client machine and inside the
>  tasks themselves. So you don't need HBase installed on the JT at all.
> As for the TTs, it's possible to configure the TTs to put an hbase jar
> on the classpath, but I usually recommend against it for the exact
> reason you're mentioning - if the jars differ in version, and they're
> not 100% API compatible, you can get nasty errors. The recommended
> deployment is to not put hbase on the TT classpath, and instead ship the
> HBase dependencies as part of the MR job, using the provided
> utility function in TableMapReduceUtil."
>
> -- Lars
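
For reference, a minimal sketch of the deployment Todd describes, against the 0.94-era APIs. The table name "mytable" and the mapper MyMapper are hypothetical, not from this thread; the point is that the default initTableMapperJob overload also calls TableMapReduceUtil.addDependencyJars(job), so the client's HBase jars travel with the job instead of being picked up from the TT classpath:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class ShippedDepsJob {

  // Hypothetical mapper; the default map() just passes rows through.
  public static class MyMapper
      extends TableMapper<ImmutableBytesWritable, Result> {
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = new Job(conf, "shipped-deps-scan"); // name is illustrative
    job.setJarByClass(ShippedDepsJob.class);

    Scan scan = new Scan();
    scan.setCaching(500);         // typical settings for an M/R scan
    scan.setCacheBlocks(false);

    // Wires up TableInputFormat and, by default, ships the HBase
    // dependency jars in the distributed cache via addDependencyJars(job),
    // so tasks use the client's jars rather than whatever is on the TTs.
    TableMapReduceUtil.initTableMapperJob(
        "mytable",                // hypothetical table name
        scan,
        MyMapper.class,
        ImmutableBytesWritable.class,
        Result.class,
        job);

    job.setNumReduceTasks(0);     // map-only for this sketch
    job.setOutputFormatClass(NullOutputFormat.class);
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

If a job is assembled by hand instead, calling TableMapReduceUtil.addDependencyJars(job) directly has the same effect.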
>
>
> ----- Original Message -----
> From: Jean-Marc Spaggiari <[EMAIL PROTECTED]>
> To: user <[EMAIL PROTECTED]>
> Cc:
> Sent: Monday, September 9, 2013 6:08 AM
> Subject: Re: java.lang.NegativeArraySizeException: -1 in hbase
>
> So. After some internal discussions with Anoop, here is a summary of the
> situation.
>
> An hbase-0.94.0 jar file was included in the MR job client jar. This
> client jar was also stored in the Master lib directory, but only on the
> master and on the RS hosted on the same host, not on any of the other RS
> nodes.
>
> Removing this file from the client, recompiling HBase 0.94.12-SNAPSHOT and
> redeploying everything fixed the issue.
>
> What does this mean?
>
> I think there is something between HBase 0.94.0 and HBase 0.94.12 which is
> not compatible. It's not related to the TableSplit class; this class has
> been like that since 0.94.5. It's most probably related to a more recent
> modification which breaks the compatibility between HBase 0.94.0 and the
> latest HBase 0.94 branch.
>
> The MR job on my server was running for months without any issue, with this
> 0.94.0 jar included, which means the compatibility was broken recently,
> somewhere between 0.94.10 and 0.94.12 (I guess).
>
> Now, even if 0.94.12 is not compatible with HBase versions < 0.94.5, is
> this something we want to investigate further? Or are the pre-0.94.5
> versions already too old, so that if there is some compatibility break we
> can live with it?
>
> JM
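
To make the failure mode JM describes concrete, here is a constructed Java example (not the actual 0.94.0-vs-0.94.12 change) of how writer/reader version skew in a Writable-style format produces exactly this exception: a newer writer prepends a field the older reader does not know about, the older reader consumes those bytes as a length, and the array allocation blows up.

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class VersionSkewDemo {

  // "v2" format: [int marker = -1][int length][payload bytes]
  static byte[] writeV2(byte[] payload) throws IOException {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(bos);
    out.writeInt(-1);              // new field the old reader never heard of
    out.writeInt(payload.length);
    out.write(payload);
    return bos.toByteArray();
  }

  // "v1" format: [int length][payload bytes] -- reads the marker as a length
  static byte[] readV1(byte[] data) throws IOException {
    DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
    int len = in.readInt();        // decodes the -1 marker as the length
    byte[] result = new byte[len]; // java.lang.NegativeArraySizeException
    in.readFully(result);
    return result;
  }

  public static void main(String[] args) throws IOException {
    readV1(writeV2("row-key".getBytes("UTF-8")));
  }
}

Running main throws NegativeArraySizeException at the allocation, the same shape as the trace below: a split serialized by one version of TableSplit and deserialized by another.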
>
>
> 2013/9/8 Jean-Marc Spaggiari <[EMAIL PROTECTED]>
>
> > FYI,
> >
> > I just faced the exact same exception with version 0.94.12-SNAPSHOT... All
> > tasks failed with the same exception.
> >
> > $ bin/hbase hbck
> > Version: 0.94.12-SNAPSHOT
> > ....
> > 0 inconsistencies detected.
> > Status: OK
> >
> > I will update, rebuild and retry tomorrow morning...
> >
> > java.lang.NegativeArraySizeException: -1
> >     at org.apache.hadoop.hbase.util.Bytes.readByteArray(Bytes.java:148)
> >     at org.apache.hadoop.hbase.mapreduce.TableSplit.readFields(TableSplit.java:133)
> >     at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:67)
> >     at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:40)
> >     at org.apache.hadoop.mapred.MapTask.getSplitDetails(MapTask.java:396)
> >     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:728)
> >     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
> >     at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
> >     at java.security.AccessController.doPrivileged(Native Method)
> >     at javax.security.auth.Subject.doAs(Subject.java:415)
> >     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
> >     at org.apache.hadoop.mapred.Child.main(Child.java:249)
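
As for where the "-1" in the message comes from: Bytes.readByteArray decodes a vint length and rejects negative values explicitly, so the exception carries the decoded length rather than coming from the JVM's array allocation. A simplified sketch of that pattern (an approximation, not a quote of the 0.94 source):

import java.io.DataInput;
import java.io.IOException;
import org.apache.hadoop.io.WritableUtils;

public final class ReadByteArraySketch {

  // Decode a vint length, reject negatives, then allocate and fill.
  // With mismatched jars the vint decodes to -1, producing the
  // "java.lang.NegativeArraySizeException: -1" seen in the trace above.
  public static byte[] readByteArray(DataInput in) throws IOException {
    int len = WritableUtils.readVInt(in);
    if (len < 0) {
      throw new NegativeArraySizeException(Integer.toString(len));
    }
    byte[] result = new byte[len];
    in.readFully(result, 0, len);
    return result;
  }
}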