Re: HBase Export MR - Some mappers getting Stuck
Can you take a look at the region server log when this happens and see if there is some clue?

jstack on the region server side would also help.
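
(Roughly the same information can be pulled from inside a JVM via java.lang.management; the following is only a minimal, illustrative sketch of what such a thread dump contains, shown here as a Java alternative to running the jstack command:)

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Illustrative only: print a jstack-like dump (thread states, held locks,
// truncated stack frames) for the current JVM.
public class ThreadDumpSketch {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        for (ThreadInfo info : threads.dumpAllThreads(true, true)) {
            // ThreadInfo.toString() shows the state, lock owners and frames.
            System.out.print(info);
        }
    }
}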

Cheers

On Mon, Apr 29, 2013 at 10:42 PM, Ashwanth Kumar <[EMAIL PROTECTED]> wrote:

> Hey,
>
> I have this issue wherein some mappers get stuck mid-way while running an
> HBase Export.
>
> jstack on the Task gives me this --
>
> "main" prio=10 tid=0x00007f535800a800 nid=0x1c72 in Object.wait()
> [0x00007f535f5e4000]
>    java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> - waiting on <0x00000000eb1d6598> (a
> org.apache.hadoop.hbase.ipc.HBaseClient$Call)
>  at java.lang.Object.wait(Object.java:503)
> at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:904)
>  - locked <0x00000000eb1d6598> (a
> org.apache.hadoop.hbase.ipc.HBaseClient$Call)
> at
>
> org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>  at $Proxy7.next(Unknown Source)
> at
>
> org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:80)
>  at
>
> org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:39)
> at
>
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionServerWithRetries(HConnectionManager.java:1325)
>  at
> org.apache.hadoop.hbase.client.HTable$ClientScanner.next(HTable.java:1293)
> at
>
> org.apache.hadoop.hbase.mapreduce.TableRecordReaderImpl.nextKeyValue(TableRecordReaderImpl.java:133)
>  at
>
> org.apache.hadoop.hbase.mapreduce.TableRecordReader.nextKeyValue(TableRecordReader.java:142)
> at
>
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:532)
>  at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
>  at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
>  at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
> at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:415)
> at
>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>  at org.apache.hadoop.mapred.Child.main(Child.java:249)
>
> If I fail/kill the attempt once, the task gets completed without any
> issues.
>
> I am not able to reproduce the issue every time, but it does happen at
> frequent intervals.
>
> HBase Version - 0.94.2
> Hadoop Version - 1.0.4
> Client Scanner Caching - 500
>
> Also, when I check the counters, I see some multiple of 500 records as Map
> Input Records.
>
> Any insight into why this could be happening?
>
>
> --
>
> Ashwanth Kumar / ashwanthkumar.in
>
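
Regarding the "multiple of 500" observation above: the client scanner pulls rows from the region server in batches of the configured caching size, and the hang in the stack trace sits exactly where the next batch is requested (ScannerCallable.call into HBaseClient.call), so a counter that stops at a multiple of 500 is consistent with scanner caching = 500. A minimal sketch of the client-side knobs involved, using the HBase 0.94 API; the values simply mirror the mail and are illustrative, not a recommendation:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Scan;

// Illustrative sketch only (HBase 0.94 client API): the settings that shape
// the behaviour visible in the jstack output above.
public class ExportScanSketch {

    // Scan as typically configured for a full-table MapReduce export.
    public static Scan buildScan() {
        Scan scan = new Scan();
        // Rows fetched per scanner next() RPC; with 500, the map task only
        // blocks on a new RPC after consuming a full batch, which is why
        // Map Input Records stops at a multiple of 500.
        scan.setCaching(500);
        // The block cache is usually bypassed for one-off full-table scans.
        scan.setCacheBlocks(false);
        return scan;
    }

    // Job-wide equivalents of the same knobs.
    public static Configuration buildConf() {
        Configuration conf = HBaseConfiguration.create();
        conf.setInt("hbase.client.scanner.caching", 500);
        // Standard client-side RPC timeout key (default 60000 ms); listed
        // only for context on where a stuck next() call could be bounded.
        conf.setInt("hbase.rpc.timeout", 60000);
        return conf;
    }
}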