Re: Row get very slow
On 13/11/2011 16:13, Arvind Jayaprakash wrote:
> A common confusion is between MAX_FILESIZE and BLOCKSIZE. Given that
> MAX_FILESIZE is not listed on :60010/master.jsp, one tends to assume
> BLOCKSIZE represents that value.
>
> On Nov 10, lars hofhansl wrote:
>> "BLOCKSIZE =>  '536870912'"
>>
>>
>> You set your blocksize to 512MB? The default is 64KB (65536); try setting it to something lower.
Hello,

Thank you for the answer. I have just altered my table and launched a
major_compact to make the change effective.
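
For the archive, a minimal HBase shell sketch of that step; 'mytable' and
'cf' are placeholder names for the actual table and column family, and
65536 is the 64 KB default mentioned above:

    # set the column family block size back to the 64 KB default
    hbase> alter 'mytable', {NAME => 'cf', BLOCKSIZE => '65536'}
    # rewrite the store files so the new block size takes effect
    hbase> major_compact 'mytable'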

I thought that increasing the MAX_FILESIZE of HBase somehow implied
changes to the BLOCKSIZE of my tables, so to avoid unbalanced parameters
I increased it too ... #FAIL.

The question is: for what kind of application should BLOCKSIZE be changed
(increased or decreased)?

Thank you.

--
Damien