HBase, mail # user - Performance test results


Re: Performance test results
Eran Kutner 2011-03-31, 16:33
I assume the block cache tuning key you're talking about is
"hfile.block.cache.size", right? If it is only 20% by default, then
what is the rest of the heap used for? Since there are no fancy
operations like joins, and since I'm not using in-memory tables, the
only other consumer I can think of is the memstore, right? What is the
recommended value for the block cache?
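For reference, this key is a fraction of the RegionServer heap and is set in hbase-site.xml; a minimal sketch follows (the 0.4 value is purely illustrative, not a recommendation):

```xml
<!-- hbase-site.xml: fraction of the RegionServer heap reserved for the
     HFile block cache. 0.2 (20%) is the default being discussed; 0.4
     below is only an illustrative value, not a recommendation. -->
<property>
  <name>hfile.block.cache.size</name>
  <value>0.4</value>
</property>
```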

As for the regions layout, right now the table in discussion has 264
regions more or less evenly distributed among the 5 region servers.
Let me know what other information I can provide.

The key space is as follows: I launch n threads, and each thread
writes keys of the form "stream<i>_<c>", where "i" is the thread index
(1-n) and "c" is a counter that increases from 1 until I stop the
test. I understand that each thread only writes to the tail of its own
key space, so only "n" regions can be written to; however, if that
were the limitation, then adding more threads, each with its own key
space, should have increased the throughput.
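To make the write pattern concrete, here is a minimal sketch of the key generation described above (Python, with a hypothetical helper name; the actual test client is in C#):

```python
def make_keys(thread_index, count):
    """Generate the row keys one writer thread produces: a fixed
    per-thread prefix followed by a monotonically increasing counter,
    so each thread always appends to the tail of its own key range."""
    return ["stream%d_%d" % (thread_index, c) for c in range(1, count + 1)]

# Thread 3 writes stream3_1, stream3_2, stream3_3, ...
```

Because each thread's counter only grows, every thread hammers the single region currently holding the tail of its prefix, which is the hotspot pattern being discussed.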
-eran

On Wed, Mar 30, 2011 at 00:25, Jean-Daniel Cryans <[EMAIL PROTECTED]> wrote:
>
> Inline.
>
> J-D
>
> > Hi J-D,
> > I can't paste the entire file because it's 126K. Trying to attach it
> > now as a zip; let's see if that has more luck.
>
> In the jstack you posted, all the Gets were hitting HDFS, which is
> probably why it's slow. Until you can get something like HDFS-347 into
> your Hadoop, you'll have to make sure the block cache can hold most of
> what you're going to read. You can tune the size of the block cache,
> since by default it's only 20% of the whole heap.
>
> >
> > I didn't pre-split, and I guess that explains the behavior I saw,
> > in which the write performance started at 300 inserts/sec and then
> > increased to 3000 per server once the region was split and spread
> > across two servers. It doesn't explain why the rate actually dropped
> > as more splits happened and more servers were added to the table,
> > until it eventually stabilized at around 2000 inserts/sec per server.
>
> Yeah, that doesn't explain it, but for that part of the loading we
> basically have zero information about the regions' layout on the
> cluster and how the regions were used. The 3k might just have been a
> short-lived spike that, for all I know, shouldn't be read into. Was
> the 2k/sec done by just one machine, or were they all participating
> equally? How many regions did you end up with at the end?
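With a key scheme like "stream<i>_<c>", a table can be pre-split on the per-thread prefixes so that writes are spread across region servers from the start. A sketch of computing the split points (hypothetical helper, Python):

```python
def presplit_keys(num_threads):
    """Compute split points for a table keyed by "stream<i>_<c>":
    one region boundary per writer-thread prefix, so each thread's
    key range lands in its own region from the start.
    Note: splits compare lexicographically, so for more than 9
    threads the index would need zero-padding ("stream02_", ...)."""
    return ["stream%d_" % i for i in range(2, num_threads + 1)]

# 4 writer threads -> 3 split points -> 4 regions
```

In the HBase shell this corresponds to passing a SPLITS list when creating the table.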
>
> >
> > I have 1 thrift server per slave. I'm using C# to access the thrift
> > servers. My C# library manages its own connection pool; it does
> > round-robin between the servers and reuses open connections, so not
> > every call opens a new connection. After a few seconds of running
> > the test, all the connections are reused and no new connections are
> > being opened.
>
> Sounds good.
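The pooling behaviour described can be sketched roughly like this (Python, hypothetical names, standing in for the C# library):

```python
import itertools

class RoundRobinPool:
    """Rotate across a fixed list of Thrift server addresses and reuse
    one open connection per server (a sketch of the pooling behaviour
    described above, not the actual C# client)."""
    def __init__(self, servers, connect):
        self.connect = connect            # callable: address -> connection
        self.cycle = itertools.cycle(servers)
        self.connections = {}             # address -> open connection

    def get(self):
        addr = next(self.cycle)
        if addr not in self.connections:  # open once, then reuse
            self.connections[addr] = self.connect(addr)
        return self.connections[addr]
```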
>
> >
> > I'm inserting the rows one by one because that represents the kind
> > of OLTP load that I have in mind for this system. Batching multiple
> > rows, I believe, is more suitable for analytical processing.
>
> Makes sense.
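For contrast, batching amortizes the per-call network overhead that dominates single-row inserts; a minimal client-side buffer sketch (hypothetical, not the C# library discussed above):

```python
class BatchWriter:
    """Buffer rows client-side and flush them to the server in groups,
    trading a little latency per row for fewer round trips
    (hypothetical sketch, not the actual client discussed above)."""
    def __init__(self, send, batch_size=100):
        self.send = send                  # callable that writes a list of rows
        self.batch_size = batch_size
        self.buffer = []

    def put(self, row):
        self.buffer.append(row)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.buffer:
            self.send(self.buffer)
            self.buffer = []
```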
>
> >
> > The second client was using the same key space, but I also tried a
> > single client with a few thread configurations, from 1 to 100,
> > where each thread used a different key space. I didn't really see
> > any difference between 50 threads and 100 threads, so I don't think
> > it's a key-space distribution issue.
>
> That part doesn't make sense at all; there must be something you're
> not seeing that would explain it, like the number of regions and
> their layout. Also, maybe your assumptions about the key spaces are
> wrong (from experience, I always assume the user is wrong, sorry).
>
> >
> > I agree that network latency could be causing the problem, but then
> > I would expect to see more overall reads/writes as the client thread
> > count increases; as I said, above 40-50 threads there was no
> > improvement.
>
> Indeed, something is off and we're not seeing it.