
HBase >> mail # user >> Bulkload Problem

Re: Bulkload Problem
Hi John,

Is your table pre-split?

To me, it sounds like your RegionServer (RS) is too busy doing other jobs to
reply back to the client.

A couple of options:
1) It's due to a long garbage collection. Can you monitor GC on your servers?
2) The table is not pre-split, and the server is spending time splitting
regions while you load.
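For option 1, a common way to check for long GC pauses is to turn on GC logging on the RegionServers via HBASE_OPTS in hbase-env.sh (the log path here is just an example; pick one that exists on your nodes):

```shell
# hbase-env.sh on each RegionServer: log every GC with timestamps
# so long stop-the-world pauses show up in the log.
export HBASE_OPTS="$HBASE_OPTS -verbose:gc -XX:+PrintGCDetails \
  -XX:+PrintGCDateStamps -Xloggc:/var/log/hbase/gc-regionserver.log"
```

A pause of tens of seconds in that log around the time of the client error would point to GC as the cause.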

How many servers do you have for this test?
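On the pre-split idea: the point is to create the table with region boundaries up front, so the bulk load is spread across servers instead of hammering one region while HBase splits on the fly. A minimal sketch of computing evenly spaced split keys for a numeric row-key space (the key range, padding, and split count are illustrative assumptions, not taken from John's schema):

```java
import java.util.ArrayList;
import java.util.List;

public class SplitKeys {
    // Compute `count` evenly spaced split points over [0, max) as
    // zero-padded decimal strings, the shape many numeric row keys take.
    static List<String> evenSplits(long max, int count) {
        List<String> splits = new ArrayList<>();
        long step = max / (count + 1); // count splits -> count + 1 regions
        for (int i = 1; i <= count; i++) {
            splits.add(String.format("%010d", step * i));
        }
        return splits;
    }

    public static void main(String[] args) {
        // e.g. 4 splits -> 5 regions over a 6-billion-row key space
        for (String s : evenSplits(6_000_000_000L, 4)) {
            System.out.println(s);
        }
    }
}
```

In the 0.94 client API, these keys would be converted with Bytes.toBytes and passed as the byte[][] splits argument to HBaseAdmin.createTable(descriptor, splits); the shell's SPLITS => [...] option on create does the same thing.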

2013/10/20 John <[EMAIL PROTECTED]>

> Hi,
> I am trying to load a large amount of data into an HBase cluster. I've
> imported up to 3000 million datasets (KV pairs) successfully, but if I try
> to import 6000 million I get this error after 60-95% of the import:
> http://pastebin.com/CCp6kS3m ...
> The System is not crashing or anything like this, All nodes are still up.
> It seems to me that one node is temporarily not available. Maybe it is
> possible to increase the retry count? (I think the default is 10.) What
> value do I have to change for that?
> I'm using Cloudera 4.4.0-1 and HBase version 0.94.6-cdh4.4.0.
> regards,
> john
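For reference, the retry count John asks about in the quoted mail is controlled by hbase.client.retries.number (its default is 10 in 0.94), set in hbase-site.xml on the client side; hbase.client.pause sets the base sleep between retries. The values below are only examples of raising them to ride out a busy RegionServer, not recommendations:

```xml
<!-- hbase-site.xml on the client running the bulk load -->
<property>
  <name>hbase.client.retries.number</name>
  <value>30</value> <!-- default is 10 -->
</property>
<property>
  <!-- base sleep between retries, in ms; retries back off from this -->
  <name>hbase.client.pause</name>
  <value>1000</value>
</property>
```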