HBase, mail # user - Lease does not exist exceptions


Re: Lease does not exist exceptions
Eran Kutner 2011-10-18, 13:39
Hi Stack,
Yep, reducing the number of map tasks did resolve the problem. However, the only
way I found to do it is by changing the setting in mapred-site.xml, which means it
affects all my jobs. Do you know if there is a way to limit the number of
concurrent map tasks a specific job may run? I know it was possible with the old
JobConf class from the mapred namespace, but the new Job class doesn't have the
setNumMapTasks() method.
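
(A rough sketch for reference, assuming 0.20-era Hadoop APIs: per-job overrides can
be passed on the Job's own Configuration rather than in mapred-site.xml, so other
jobs are left alone. Whether a given property is honored per job is another matter:
TaskTracker slot limits such as mapred.tasktracker.map.tasks.maximum are read once
at TaskTracker startup and stay cluster-wide, and per-job concurrency caps usually
come from the scheduler, e.g. Fair Scheduler pools. The class name and values below
are illustrative only.)

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.mapreduce.Job;

    public class PerJobConfig {
      public static Job create() throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Hint only, roughly what the old JobConf.setNumMapTasks() gave you; with
        // TableInputFormat the split count still follows the number of regions.
        conf.setInt("mapred.map.tasks", 50);
        return new Job(conf, "my-scan-job");
      }
    }
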
Is it possible to extend the lease timeout? I'm not even sure what the lease is
on (HDFS blocks?). What is it by default?
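
(On the lease itself, as far as I can tell it is a scanner lease held by the region
server, not an HDFS lease. In 0.90-era HBase the period should be controlled by the
hbase.regionserver.lease.period property, which I believe defaults to 60000 ms;
raising it would mean setting that property in hbase-site.xml on the region servers
and restarting them. A small sketch to print the value the client classpath sees,
with the property name and default being assumptions:)

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class LeasePeriodCheck {
      public static void main(String[] args) {
        // Scanner leases live on the region server, not in HDFS; the property name
        // and the 60s default below are assumptions for 0.90-era HBase.
        Configuration conf = HBaseConfiguration.create();
        long leaseMs = conf.getLong("hbase.regionserver.lease.period", 60000L);
        System.out.println("hbase.regionserver.lease.period = " + leaseMs + " ms");
      }
    }
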

As for setBatch, what would be a good value? I didn't set it before and
setting it didn't seem to change anything.
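
(For what it's worth, setBatch() caps the number of columns per Result and mostly
matters for very wide rows; the per-RPC row prefetch that can eat into the lease is
setCaching(). A rough sketch of wiring a Scan into a table mapper job follows; the
table name, mapper and values are illustrative, not recommendations:)

    import java.io.IOException;

    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
    import org.apache.hadoop.hbase.mapreduce.TableMapper;
    import org.apache.hadoop.mapreduce.Job;

    public class ScanSetup {
      // Placeholder mapper that just passes rows through.
      public static class MyMapper extends TableMapper<ImmutableBytesWritable, Result> {
        @Override
        protected void map(ImmutableBytesWritable key, Result value, Context context)
            throws IOException, InterruptedException {
          context.write(key, value);
        }
      }

      public static void configure(Job job) throws IOException {
        Scan scan = new Scan();
        scan.setCaching(100);       // rows fetched per next() RPC; lower it if per-row map work is slow
        scan.setBatch(1000);        // columns per Result; only relevant for very wide rows
        scan.setCacheBlocks(false); // don't churn the region server block cache from an MR scan
        TableMapReduceUtil.initTableMapperJob("my_table", scan, MyMapper.class,
            ImmutableBytesWritable.class, Result.class, job);
      }
    }
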

Finally, to answer your question regarding the intensity of the job: yes, it is
pretty intense, driving CPU and disk I/O utilization to ~90%.

Thanks a million!

-eran

On Tue, Oct 18, 2011 at 13:06, Stack <[EMAIL PROTECTED]> wrote:

> Look back in the mailing list, Eran, for more detailed answers, but in
> essence the below usually means that the client has been away from
> the server too long.  This can happen for a few reasons.  If you fetch
> lots of rows per next on a scanner, processing the batch client-side
> may be taking you longer than the lease timeout.  Lower the
> prefetch size and see if that helps (I'm talking about this:
> http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html#setBatch(int)
> ).
> Throw in a GC on the client side or over on the server side and it might
> put you over your lease timeout.  Are your mapreduce jobs heavy-duty,
> robbing resources from the running regionservers or datanodes?  Try
> having them run half the mappers and see if that makes it more likely
> your job will complete.
>
> St.Ack
> P.S IIRC, J-D tripped over a cause recently but I can't find it at the mo.
>
> On Tue, Oct 18, 2011 at 10:28 AM, Eran Kutner <[EMAIL PROTECTED]> wrote:
> > Hi,
> > I'm having a problem when running map/reduce on a table with about 500
> > regions. The MR job shows this kind of exception:
> > 11/10/18 06:03:39 INFO mapred.JobClient: Task Id : attempt_201110030100_0086_m_000062_0, Status : FAILED
> > org.apache.hadoop.hbase.regionserver.LeaseException: org.apache.hadoop.hbase.regionserver.LeaseException: lease '-334679770697295011' does not exist
> >        at org.apache.hadoop.hbase.regionserver.Leases.removeLease(Leases.java:230)
> >        at org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:1845)
> >        at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
> >        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >        at java.lang.reflect.Method.invoke(Method.java:597)
> >        at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:570)
> >        at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1039)
> >
> >        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> >        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> >        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> >        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> >        at org.apache.hadoop.hbase.RemoteExceptionHandler.decodeRemoteException(RemoteExceptionHandler.java:96)
> >        at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:83)
> >        at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:1)
> >        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionServerWithRetries(HConnectionManager.java:1019)
> >        at org.apache.hadoop.hbase.client.HTable$ClientScanner.next(HTable.java:1151)
> >        at org.apache.hadoop.hbase.mapreduce.TableRecordReaderImpl.nextKeyValue(TableRecordReaderImpl.java:149)