Re: Row get very slow
Hi,

It definitely sped it up :)

hbase(main):002:0> get 'logs',
'_f:squid_t:20111114110759_b:squid_s:204-taDiFMcQaPzN13dDOZ99PA=='
COLUMN                                                CELL
  body:body                                            
timestamp=1321265279234, value=Nov 14 11:00:24 haproxy[15470]: ...
[haproxy syslogs] ...

1 row(s) in 0.0170 seconds

Thank you again for help and explanations.

Regards,

--
Damien
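
For reference, the fix Damien describes further down in the thread (bringing the 'body' family's BLOCKSIZE back to the 64 KB default and then major-compacting) would look roughly like this in the HBase shell. The family name comes from the get output above; the exact alter syntax varies between HBase versions, and 0.90-era releases require the table to be disabled first, so read this as a sketch rather than the exact commands used:

hbase(main):001:0> disable 'logs'                                      # 0.90-era shells cannot alter an enabled table
hbase(main):002:0> alter 'logs', NAME => 'body', BLOCKSIZE => '65536'  # back to the 64 KB default Lars mentions
hbase(main):003:0> enable 'logs'
hbase(main):004:0> major_compact 'logs'                                # rewrite existing HFiles so the new block size takes effect
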
On 14/11/2011 20:24, lars hofhansl wrote:
> Did it speed up your queries? As you can see from the followup discussions here, there is some general confusion around this.
>
> Generally there are 2 sizes involved:
> 1. HBase Filesize
> 2. HBase Blocksize
>
> #1 sets the maximum size of a region before it is split. The default used to be 512 MB; it's now 1 GB (but usually it should be even larger).
>
> #2 is the size of the blocks inside the HFiles. Smaller blocks mean better random access, but larger block indexes. I would only increase that if you have large cells.
>
> -- Lars
> ________________________________
>
> From: Damien Hardy<[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Sent: Monday, November 14, 2011 12:51 AM
> Subject: Re: Row get very slow
>
> On 13/11/2011 16:13, Arvind Jayaprakash wrote:
>> A common confusion is between MAX_FILESIZE and BLOCKSIZE. Given that
>> MAX_FILESIZE is not listed on :60010/master.jsp, one tends to assume
>> BLOCKSIZE represents that value.
>>
>> On Nov 10, lars hofhansl wrote:
>>> "BLOCKSIZE =>   '536870912'"
>>>
>>>
>>> You set your blocksize to 512mb? The default is 64k (65536), try to set it to something lower.
>
> Hello,
>
> Thank you for the answer. I have just altered my table and launched a major_compact to make it effective.
>
> I thought that increasing the FILESIZE of HBase somehow implied changes to the BLOCKSIZE of my tables, and to keep the parameters balanced I increased it too ... #FAIL.
>
> The question is: for what kind of application should BLOCKSIZE be changed (increased or decreased)?
>
> Thank you.
>
> -- Damien
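
To make the distinction above concrete: MAX_FILESIZE is a table-level attribute (the size a region may reach before it is split), while BLOCKSIZE is a per-column-family attribute (the size of the blocks inside the HFiles). A rough HBase shell sketch, assuming the 0.90-era syntax where table attributes are set via METHOD => 'table_att' (newer shells accept the attribute directly):

hbase(main):001:0> describe 'logs'   # shows per-family attributes such as BLOCKSIZE, and table attributes such as MAX_FILESIZE when set
hbase(main):002:0> alter 'logs', METHOD => 'table_att', MAX_FILESIZE => '1073741824'   # table level: ~1 GB before a region splits
hbase(main):003:0> alter 'logs', NAME => 'body', BLOCKSIZE => '65536'                  # family level: 64 KB HFile block size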