Re: HBase Region Server crash if column size becomes too big
Hi John,

Just to be sure: what exactly becomes too big? The size of a single
column value within that row, or the number of columns?

If it's the number of columns, you can lower the scan batch size to fetch
fewer columns in a single call. Can you share the relevant piece of code
doing the call?
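
If it helps, here is a minimal sketch of capping the number of columns
returned per RPC with Scan.setBatch(). The table name and row key below
are placeholders, not anything from your setup:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class WideRowScan {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "mytable"); // placeholder table name
    try {
      // Scan exactly one row: the stop row is the start row plus a 0x00 byte.
      byte[] row = Bytes.toBytes("wide-row"); // placeholder row key
      Scan scan = new Scan(row, Bytes.add(row, new byte[] { 0 }));
      // Cap each Result (and therefore each RPC response) at 1000 cells,
      // so one very wide row comes back in chunks instead of a single
      // huge response that trips the responseTooLarge warning.
      scan.setBatch(1000);
      ResultScanner scanner = table.getScanner(scan);
      try {
        for (Result chunk : scanner) {
          System.out.println("got " + chunk.size() + " cells of row "
              + Bytes.toString(chunk.getRow()));
        }
      } finally {
        scanner.close();
      }
    } finally {
      table.close();
    }
  }
}

(With the Pig HBaseStorage loader you don't build the Scan yourself, which
is why the piece of your own code doing the get/scan is the interesting
part.)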

2013/9/11 John <[EMAIL PROTECTED]>

> Hi,
> I store a lot of columns under one row key, and if the size becomes too
> big, the relevant Region Server crashes when I try to get or scan that
> row. For example, if I try to get the relevant row I get this error:
> 2013-09-11 12:46:43,696 WARN org.apache.hadoop.ipc.HBaseServer:
> (operationTooLarge): {"processingtimems":3091,"client":"","ti$
> If I try to load the relevant row via Apache Pig and the HBaseStorage
> loader (which uses the scan operation), I get this message and after that
> the Region Server crashes:
> 2013-09-11 10:30:23,542 WARN org.apache.hadoop.ipc.HBaseServer:
> (responseTooLarge): {"processingtimems":1851,"call":"next(-588368116791418695, 1), rpc version=1, client version=29,$
> I'm using CDH 4.4.0 with HBase 0.94.6-cdh4.4.0.
> Any clues?
> regards