HBase, mail # user - Handlers being blocked during reads
Re: Handlers being blocked during reads
Elliott Clark 2013-07-30, 05:16
Yeah, I've seen this a couple of times lately.  CopyOnWrite does its
additions under a lock, and each add copies the backing array, so the time
spent registering observers grows non-linearly as the number of items
increases.  It can be mitigated by making sure to close scanners, e.g.:

ResultScanner res = null;
try {
  // Open and read from the scanner ('table' and 'scan' are the caller's own).
  res = table.getScanner(scan);
  for (Result r : res) {
    // process each row here
  }
} finally {
  // Closing the scanner lets the region server unregister its observer.
  if (res != null) res.close();
}
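
For what it's worth, on Java 7+ the same cleanup can be written with
try-with-resources, since ResultScanner is Closeable ('table' and 'scan'
are again just placeholders for however the read is set up):

try (ResultScanner res = table.getScanner(scan)) {
  // The scanner is closed automatically, even if an exception is thrown.
  for (Result r : res) {
    // process each row here
  }
}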
On Mon, Jul 29, 2013 at 9:08 PM, lars hofhansl <[EMAIL PROTECTED]> wrote:
> CopyOnWriteArraySet seems a curious choice here. Bad when modified frequently. ConcurrentHashMap seems like a better choice.
> Mind filing a jira, Pablo? Then we can discuss the issue there.
>
> Thanks.
>
> -- Lars
>
>
>
> ----- Original Message -----
> From: Pablo Medina <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Cc:
> Sent: Monday, July 29, 2013 7:33 PM
> Subject: Handlers being blocked during reads
>
> Hi all,
>
> I'm seeing a lot of handlers (roughly 90 - 300) being blocked when reading
> rows. They are blocked during changedReaderObserver registration.
> Has anybody else run into the same issue?
>
> Stack trace:
>
> "IPC Server handler 99 on 60020" daemon prio=10 tid=0x0000000041c84000
> nid=0x2244 waiting on condition [0x00007ff51fefd000]
>    java.lang.Thread.State: WAITING (parking)
>         at sun.misc.Unsafe.park(Native Method)
>         - parking to wait for  <0x00000000c5c13ae8> (a
> java.util.concurrent.locks.ReentrantLock$NonfairSync)
>         at java.util.concurrent.locks.LockSupport.park(LockSupport.java:156)
>         at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:811)
>         at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:842)
>         at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1178)
>         at
> java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:186)
>         at
> java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:262)
>         at
> java.util.concurrent.CopyOnWriteArrayList.addIfAbsent(CopyOnWriteArrayList.java:553)
>         at
> java.util.concurrent.CopyOnWriteArraySet.add(CopyOnWriteArraySet.java:221)
>         at
> org.apache.hadoop.hbase.regionserver.Store.addChangedReaderObserver(Store.java:1085)
>         at
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:138)
>         at
> org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:2077)
>         at
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:3755)
>         at
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1804)
>         at
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1796)
>         at
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1771)
>         at
> org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4776)
>         at
> org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4750)
>         at
> org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2152)
>         at
> org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3700)
>         at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>         at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at
> org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:320)
>         at
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1426)
>
> Thanks,
> Pablo.
>
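
Regarding the CopyOnWriteArraySet vs. ConcurrentHashMap point above: a minimal
sketch of what a ConcurrentHashMap-backed observer registry could look like
(class, field and method names here are illustrative, not the actual Store.java
code):

import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for the per-Store observer set. A set backed by
// ConcurrentHashMap adds and removes in O(1) expected time, without copying
// an array while holding a single ReentrantLock the way
// CopyOnWriteArraySet.add() does.
class ObserverRegistry<T> {
  private final Set<T> observers =
      Collections.newSetFromMap(new ConcurrentHashMap<T, Boolean>());

  void addChangedReaderObserver(T o) {
    observers.add(o);      // no global lock, no array copy
  }

  void deleteChangedReaderObserver(T o) {
    observers.remove(o);
  }
}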