Re: HBase Region Server crash if column size becomes too big
Bryan Beaudreault 2013-09-11, 16:15
@John, I think you're going to want to limit your batch, as opposed to
raising it.  How much memory does the RegionServer get?  Are you sure the
row is only 70MB?  You could check HDFS directly by ls'ing the region
directory, or use the HFile tool.
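
For what it's worth, here's a rough sketch of what a batch-limited scan
could look like against a 0.94-style client API (the table name and the
batch/caching values are placeholders, not taken from your setup):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class BatchedScanSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "mytable"); // placeholder table name

    Scan scan = new Scan();
    // Return at most 1000 columns of a row per Result, so one very wide
    // row comes back as many small pieces instead of one huge RPC response.
    scan.setBatch(1000);
    // Keep the per-RPC row count small as well.
    scan.setCaching(1);

    ResultScanner scanner = table.getScanner(scan);
    try {
      for (Result r : scanner) {
        // With setBatch, consecutive Results can belong to the same row;
        // compare r.getRow() if you need to stitch the pieces back together.
        System.out.println(r.size() + " cells from row "
            + Bytes.toString(r.getRow()));
      }
    } finally {
      scanner.close();
      table.close();
    }
  }
}

The point is that setBatch bounds the size of each individual response;
a setBatch(10000000) effectively asks for the whole row in one shot, which
is the opposite of what you want here.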

The "errors" you have been posting are simply WARNs.  There are arbitrary
limits defined for responseTooLarge, responseTooSlow, operationTooLarge,
just to give you the ability to debug bad client calls.  They don't mean
the RS will necessarily have an issue returning the result.
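
(If memory serves, those warning thresholds are tunable in hbase-site.xml;
I'm writing the property names and defaults from memory, so verify them
against your version:)

<property>
  <name>hbase.ipc.warn.response.time</name>
  <value>10000</value> <!-- ms; slower calls get logged as responseTooSlow -->
</property>
<property>
  <name>hbase.ipc.warn.response.size</name>
  <value>104857600</value> <!-- bytes; bigger responses get logged as responseTooLarge -->
</property>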

Are there literally no other logs at all after that WARN?  Have you checked
the system logs for OOM killer invocations?
On Wed, Sep 11, 2013 at 11:58 AM, John <[EMAIL PROTECTED]> wrote:

> @Kevin, I'm using Apache Pig to execute my program. I wrote my own HBase
> Load UDF and have now added scan.setBatch(10000000), but it is still crashing.
>
> @Dhaval: I'm using Cloudera 4.4.0. It's nearly the default installation from
> Cloudera Manager. I have no idea why there is no log file.
>
> Has anyone tested the Java program and executed a get in the HBase shell?
>
> 2013/9/11 Dhaval Shah <[EMAIL PROTECTED]>
>
> > @Mike, rows can't span multiple regions, but that does not cause crashes.
> > It simply won't allow the region to split, and the region continues to
> > function like one huge region. We had a similar situation a while back
> > (when we were on 256MB region sizes) and it worked (it just didn't split
> > the region).
> >
> > Sent from Yahoo! Mail on Android