HBase >> mail # user >> HBase Region Server crash if column size becomes too big


Re: HBase Region Server crash if column size becomes too big
@Kevin: I changed hbase.client.keyvalue.maxsize from 10 MB to 500 MB, but
the region server still crashes. How can I change the batch size in the
HBase shell? What is an OOME?

@Dhaval: There is only a *.out file in /var/log/hbase. Is the .log file
located in another directory?
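
As an aside, an OOME is a JVM java.lang.OutOfMemoryError; because it is written to stdout/stderr rather than through the logging framework, it typically lands in the *.out file instead of the *.log file. One way to check, assuming the default packaged-install log locations:

```shell
# Search the region server stdout/stderr dumps and the system log for a
# JVM OutOfMemoryError. Paths vary by distribution; these are the defaults
# for a packaged install.
grep -i "OutOfMemoryError" /var/log/hbase/*.out /var/log/messages
```
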
2013/9/11 Kevin O'dell <[EMAIL PROTECTED]>

> You can also check the messages file in /var/log.  The OOME may show up
> there as well.  I would be willing to bet this is a batching issue.
>
>
> On Wed, Sep 11, 2013 at 11:15 AM, Dhaval Shah
> <[EMAIL PROTECTED]>wrote:
>
> > John, can you check the .out file as well? We used to have a similar
> > issue, and it turned out that querying such a large row ran the region
> > server out of memory, causing the crash. The OOME does not show up in
> > the .log files but rather in the .out files.
> >
> > In such a situation, setBatch for scans or a ColumnPaginationFilter for
> > gets can help your case.
> >
> >
>
>
> --
> Kevin O'Dell
> Systems Engineer, Cloudera
>
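The batching and pagination Dhaval suggests can also be tried from the HBase shell before touching client code; a rough sketch, with hypothetical table and row names:

```
# Scan: return at most 100 cells per Result instead of the whole wide row.
hbase> scan 'mytable', {BATCH => 100}

# Get: fetch only the first 100 columns of a wide row via a filter
# (limit = 100, offset = 0); advance the offset to page through the rest.
hbase> get 'mytable', 'wide-row', {FILTER => "ColumnPaginationFilter(100, 0)"}
```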