
HBase >> mail # user >> Performance test results

Re: Performance test results
Hi J-D,
I can't paste the entire file because it's 126K. Trying to attach it
now as a zip; let's see if that has more luck.
As far as I can tell most of the threads are blocked either like this:
"RMI TCP Connection(idle)" daemon prio=10 tid=0x00002aaad011d000
nid=0x269c waiting on condition [0x0000000041e4d000]
   java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x000000045c687200> (a java.util.concurrent.SynchronousQueue$TransferStack)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:198)
at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:424)
at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:323)
at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:874)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:945)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
at java.lang.Thread.run(Thread.java:662)

   Locked ownable synchronizers:
- None

or like this:
"ResponseProcessor for block blk_2435887137905447383_11770" daemon
prio=10 tid=0x000000004f08e000 nid=0x2cb9 runnable
   java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
- locked <0x000000045dbe20b0> (a sun.nio.ch.Util$2)
- locked <0x000000045dbe2098> (a java.util.Collections$UnmodifiableSet)
- locked <0x00000004fa1a2510> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:332)
at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
at java.io.DataInputStream.readFully(DataInputStream.java:178)
at java.io.DataInputStream.readLong(DataInputStream.java:399)
at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PipelineAck.readFields(DataTransferProtocol.java:120)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$ResponseProcessor.run(DFSClient.java:2622)

   Locked ownable synchronizers:
- None
I didn't pre-split, and I guess that explains the behavior I saw:
write performance started at 300 inserts/sec and then increased to
3000 per server once the region split and spread across two servers.
It doesn't explain why the rate actually dropped as more splits
happened and more servers were added to the table, until it eventually
stabilized at around 2000 inserts/sec per server.
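For illustration, here is a minimal sketch (in Java, since my client code
is C#) of how split points could be computed for pre-splitting at table
creation time. It assumes row keys are roughly uniformly distributed over
the first key byte; the resulting byte[] list is the kind of thing you'd
hand to HBaseAdmin.createTable(desc, splits) so the table starts with N
regions instead of one:

```java
import java.util.ArrayList;
import java.util.List;

public class SplitKeys {
    // Compute N-1 evenly spaced split points over the first key byte
    // (0x00..0xFF). This is a simplifying assumption; real split keys
    // should follow the actual key distribution.
    public static List<byte[]> evenSplits(int numRegions) {
        List<byte[]> splits = new ArrayList<>();
        for (int i = 1; i < numRegions; i++) {
            int boundary = i * 256 / numRegions;   // first-byte boundary
            splits.add(new byte[] { (byte) boundary });
        }
        return splits;
    }

    public static void main(String[] args) {
        for (byte[] s : evenSplits(4)) {
            System.out.printf("split at 0x%02x%n", s[0] & 0xff);
        }
    }
}
```

With 4 target regions this yields boundaries at first bytes 0x40, 0x80,
and 0xc0, so writes spread over all region servers from the first insert.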

I have 1 thrift server per slave. I'm using C# to access the thrift
servers. My C# library manages its own connection pool: it round-robins
between the servers and re-uses open connections, so not every call
opens a new connection. After a few seconds of running the test all the
connections are re-used and no new connections are being opened.
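The round-robin selection I described is nothing fancy; a Java
equivalent of what the C# pool does would look roughly like this
(server addresses are placeholders):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class RoundRobin {
    private final List<String> servers;
    private final AtomicInteger next = new AtomicInteger();

    public RoundRobin(List<String> servers) {
        this.servers = servers;
    }

    // Pick the next thrift server in rotation; floorMod keeps the index
    // non-negative even after the counter wraps past Integer.MAX_VALUE.
    public String pick() {
        return servers.get(Math.floorMod(next.getAndIncrement(), servers.size()));
    }

    public static void main(String[] args) {
        RoundRobin rr = new RoundRobin(
                List.of("slave1:9090", "slave2:9090", "slave3:9090"));
        for (int i = 0; i < 4; i++) {
            System.out.println(rr.pick());   // cycles slave1, slave2, slave3, slave1
        }
    }
}
```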

I'm inserting the rows one by one because that represents the kind of
OLTP load that I have in mind for this system. Batching multiple rows
is, I believe, more suitable for analytical processing.
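If I did want batching, a small client-side buffer would be enough to
turn one-by-one inserts into periodic bulk flushes; this generic sketch
(the flusher callback stands in for something like table.put(List&lt;Put&gt;),
which is roughly what HTable's own write buffer does with
setAutoFlush(false)) is only an illustration, not my actual client:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class BatchBuffer<T> {
    private final int batchSize;
    private final Consumer<List<T>> flusher;  // e.g. sends a batch to the server
    private final List<T> pending = new ArrayList<>();

    public BatchBuffer(int batchSize, Consumer<List<T>> flusher) {
        this.batchSize = batchSize;
        this.flusher = flusher;
    }

    // Buffer one row; flush automatically once batchSize rows accumulate.
    public void add(T row) {
        pending.add(row);
        if (pending.size() >= batchSize) {
            flush();
        }
    }

    // Send whatever is buffered, if anything.
    public void flush() {
        if (!pending.isEmpty()) {
            flusher.accept(new ArrayList<>(pending));
            pending.clear();
        }
    }

    public static void main(String[] args) {
        BatchBuffer<String> buf = new BatchBuffer<>(100,
                batch -> System.out.println("flushing " + batch.size() + " rows"));
        for (int i = 0; i < 250; i++) {
            buf.add("row-" + i);
        }
        buf.flush();  // flush the remainder (50 rows)
    }
}
```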

The second client was using the same key space, but I also tried a
single client with several thread configurations, from 1 to 100, where
each thread used a different key space. I didn't really see any
difference between 50 threads and 100 threads, so I don't think it's a
key-space distribution issue.

I agree that network latency could be causing the problem, but then I
would expect to see more overall reads/writes as the client thread
count increases; as I said, above 40-50 threads there was no
improvement.

On Tue, Mar 29, 2011 at 19:54, Jean-Daniel Cryans <[EMAIL PROTECTED]> wrote: