HBase user mailing list: Enabling compression


Re: Enabling compression
You also need to install Snappy - the Shared Object. I've done it using
"yum install snappy" on Fedora Core.

Sent from my iPad
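A quick sanity check that the shared object actually landed (library paths vary by distro; /usr/lib64 is typical for a 64-bit yum install):

ls /usr/lib64/libsnappy.so*
ldconfig -p | grep snappy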

On 25 Jul 2012, at 04:40, Dhaval Shah <[EMAIL PROTECTED]> wrote:

Yes, you need to add the Snappy libraries to the HBase path (I think the
variable to set is called HBASE_LIBRARY_PATH).
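A minimal sketch of what that looks like in conf/hbase-env.sh, assuming the native libraries live under the usual Hadoop location (the directory is a guess; adjust it to your install):

# Directory that contains libsnappy.so and the Hadoop native libs
export HBASE_LIBRARY_PATH=/usr/lib/hadoop/lib/native/Linux-amd64-64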

------------------------------
On Wed 25 Jul, 2012 3:46 AM IST Mohit Anchlia wrote:

On Tue, Jul 24, 2012 at 2:04 PM, Dhaval Shah <[EMAIL PROTECTED]> wrote:

I bet that your compression libraries are not available to HBase. Run the
compression test utility and see if it can find LZO.

That seems to be the case for SNAPPY. However, I do have Snappy installed
and it works with Hadoop just fine, and HBase is running on the same
cluster. Is there something special I need to do for HBase?
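For reference, the compression test utility mentioned above can be run straight from the command line; the file path here is just a scratch location, and you pass whichever codec you want to verify:

hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/compression-test snappy
hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/compression-test lzo

If the codec cannot be loaded, this should fail with an exception rather than report SUCCESS, which is exactly the symptom to look for here.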

Regards,
Dhaval

----- Original Message -----
From: Mohit Anchlia <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Cc:
Sent: Tuesday, 24 July 2012 4:39 PM
Subject: Re: Enabling compression
Thanks! I was trying it out and I see this message when I use COMPRESSION,
but it works when I don't use it. Am I doing something wrong?

hbase(main):012:0> create 't2', {NAME => 'f1', VERSIONS => 1, COMPRESSION => 'LZO'}
ERROR: org.apache.hadoop.hbase.client.RegionOfflineException: Only 0 of 1 regions are online; retries exhausted.

hbase(main):014:0> create 't3', {NAME => 'f1', VERSIONS => 1}
0 row(s) in 1.1260 seconds
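As an aside, one way to catch a missing codec before it shows up as offline regions is the hbase.regionserver.codecs property in hbase-site.xml: a region server that cannot load one of the listed codecs will refuse to start. A sketch; list only the codecs you actually deploy:

<property>
  <name>hbase.regionserver.codecs</name>
  <value>snappy,lzo</value>
</property>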

On Tue, Jul 24, 2012 at 1:37 PM, Jean-Daniel Cryans <[EMAIL PROTECTED]> wrote:

On Tue, Jul 24, 2012 at 1:34 PM, Jean-Marc Spaggiari
<[EMAIL PROTECTED]> wrote:

Also, if I understand it correctly, this will enable compression for
new puts but will not compress the cells already stored, right? For
that, we need to run a major compaction of the table, which will
rewrite all the cells and so compact them?

Yeah, although you may not want to recompact everything all at once in
a live system. You can just let it happen naturally through cycles of
flushes and compactions; it's all fine.

J-D
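Putting the thread together, a rough sketch of the whole sequence for an existing table, from the HBase shell ('t1' and 'f1' are placeholder names; on 0.92-era HBase the table typically has to be disabled before the alter):

disable 't1'
alter 't1', {NAME => 'f1', COMPRESSION => 'SNAPPY'}
enable 't1'
# Optional: rewrite existing cells now rather than waiting for natural compaction cycles
major_compact 't1'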