Re: HBase Region Server crash if column size becomes too big
Hey John,

  You can try upping hbase.client.keyvalue.maxsize from 10MB to 500MB,
BUT it is there for a reason :) The response coming back is 169MB; have you
tried changing the batch size that JM referred to earlier?
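
A minimal sketch of those two knobs (not from the thread itself), assuming
the 0.94-era client API and the "mytestTable"/"mycf"/"sampleRowKey" names
taken from the log further down in the thread; the batched Scan over the
single row stands in for the batch-size tuning JM suggested:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class WideRowScan {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Raise the 10MB client-side KeyValue limit, as suggested above (use with care).
    conf.setInt("hbase.client.keyvalue.maxsize", 500 * 1024 * 1024);

    HTable table = new HTable(conf, "mytestTable");
    // Scan just the one wide row instead of issuing a single huge Get.
    Scan scan = new Scan(Bytes.toBytes("sampleRowKey"), Bytes.toBytes("sampleRowKey\0"));
    scan.addFamily(Bytes.toBytes("mycf"));
    // Return at most 10000 columns per Result so each RPC stays small.
    scan.setBatch(10000);

    ResultScanner scanner = table.getScanner(scan);
    int columns = 0;
    for (Result r : scanner) {
      columns += r.size();
    }
    scanner.close();
    table.close();
    System.out.println("columns read: " + columns);
  }
}
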
On Wed, Sep 11, 2013 at 11:08 AM, John <[EMAIL PROTECTED]> wrote:

> @michael: What do you mean by wide? The size of one column? The size of
> one row is roughly ~200 characters. What is the region size?
>
> @kevin: which option do I have to change?
>
> Finally, I was able to write a little Java program to reproduce the
> error. It creates a lot of columns for one rowkey. You can find the
> program here: http://pastebin.com/TFJRtCEg
>
> After I created 600000 columns and executed this command in the hbase
> shell:
>
> get 'mytestTable', 'sampleRowKey'
>
> The RegionServer crashes again with the same error:
>
> 2013-09-11 16:58:26,546 WARN org.apache.hadoop.ipc.HBaseServer:
> (operationTooLarge): {"processingtimems":2650,"client":"192.168.0.1:34944","timeRange":[0,9223372036854775807],"starttimems":1378911503836,"responsesize":177600006,"class":"HRegionServer","table":"mytestTable","cacheBlocks":true,"families":{"mycf":["ALL"]},"row":"sampleRowKey","queuetimems":0,"method":"get","totalColumns":1,"maxVersions":1}
>
> Maybe someone can test it?
>
> thanks
>
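
The pastebin above is not reproduced here, so the following is only a rough
sketch of a program along those lines, assuming the 0.94-era client API and
the "mytestTable"/"mycf"/"sampleRowKey" names from the log above:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateWideRow {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "mytestTable");
    byte[] row = Bytes.toBytes("sampleRowKey");
    byte[] family = Bytes.toBytes("mycf");

    // Write 600000 columns under one row key, 10000 columns per Put.
    for (int start = 0; start < 600000; start += 10000) {
      Put put = new Put(row);
      for (int i = start; i < start + 10000; i++) {
        put.add(family, Bytes.toBytes("col" + i), Bytes.toBytes("value" + i));
      }
      table.put(put);
    }
    table.close();
  }
}
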
> 2013/9/11 Kevin O'dell <[EMAIL PROTECTED]>
>
> > I have not seen the exact error, but if I recall correctly jobs will
> > fail if the column is larger than 10MB and we have not raised the
> > default setting (which I don't have in front of me)?
> >
> >
> > On Wed, Sep 11, 2013 at 10:53 AM, Michael Segel
> > <[EMAIL PROTECTED]> wrote:
> >
> > > Just out of curiosity...
> > >
> > > How wide are the columns?
> > >
> > > What's the region size?
> > >
> > > Does anyone know the error message you'll get if your row is wider
> > > than a region?
> > >
> > >
> > > On Sep 11, 2013, at 9:47 AM, John <[EMAIL PROTECTED]> wrote:
> > >
> > > > sorry, I mean 570000 columns, not rows
> > > >
> > > >
> > > > 2013/9/11 John <[EMAIL PROTECTED]>
> > > >
> > > >> thanks for all the answers! The only entry I got in the
> > > >> "hbase-cmf-hbase1-REGIONSERVER-mydomain.org.log.out" log file after
> > > >> I executed the get command in the hbase shell is this:
> > > >>
> > > >> 2013-09-11 16:38:56,175 WARN org.apache.hadoop.ipc.HBaseServer:
> > > >> (operationTooLarge): {"processingtimems":3196,"client":"192.168.0.1:50629","timeRange":[0,9223372036854775807],"starttimems":1378910332920,"responsesize":108211303,"class":"HRegionServer","table":"P_SO","cacheBlocks":true,"families":{"myCf":["ALL"]},"row":"myRow","queuetimems":0,"method":"get","totalColumns":1,"maxVersions":1}
> > > >>
> > > >> After this the RegionServer is down, nothing more. BTW I found out
> > > >> that the row should have ~570000 rows. The size should be around
> > > >> ~70 MB
> > > >>
> > > >> Thanks
> > > >>
> > > >>
> > > >>
> > > >> 2013/9/11 Bing Jiang <[EMAIL PROTECTED]>
> > > >>
> > > >>> Hi John,
> > > >>> I think it is a fresh question. Could you post the log from the
> > > >>> regionserver that crashed?
> > > >>> On Sep 11, 2013 8:38 PM, "John" <[EMAIL PROTECTED]> wrote:
> > > >>>
> > > >>>> Okay, I will take a look at the ColumnPaginationFilter.
> > > >>>>
> > > >>>> I tried to reproduce the error. I created a new table and added
> > > >>>> one new row with 250 000 columns, but everything works fine if I
> > > >>>> execute a get against the table. The only difference to my original
> > > >>>> program was that I added the data directly through the HBase Java
> > > >>>> API and not with the MapReduce bulk load. Maybe that can be the
> > > >>>> reason?
> > > >>>>
> > > >>>> I wonder a little bit about the HDFS structure if I compare both
> > > >>>> methods (HBase API / bulk load). If I add the data through the
> > > >>>> HBase API there is no

Kevin O'Dell
Systems Engineer, Cloudera