HBase >> mail # user >> Switching existing table to Snappy possible?


Re: Switching existing table to Snappy possible?
From what I understand, the online schema update feature (0.92.x onwards)
would allow you to do this without disabling tables. It's experimental in
0.92.
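
If that feature is enabled (via hbase.online.schema.update.enable in hbase-site.xml, in those versions), the alter could in principle be issued against the enabled table; a sketch, with table and family names as placeholders:

```
alter 't1', {NAME => 'f1', COMPRESSION => 'SNAPPY'}
```

As noted above, the feature is experimental in 0.92, so it would be wise to try it on a non-production cluster first.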

On Thu, May 10, 2012 at 9:02 AM, Jeff Whiting <[EMAIL PROTECTED]> wrote:

> We really need to be able to do this type of thing online.  Taking your
> table down just so you can change the compression/bloom/whatever isn't very
> cool for a production cluster.
>
> My $0.02
>
> ~Jeff
>
>
> On 5/9/2012 10:10 PM, Harsh J wrote:
>
>> Jiajun,
>>
>> Expanding on Jean's guideline (and perhaps the following can be used
>> for the manual as well):
>>
>> 1. Ensure snappy codec works properly, following the test listed on
>> http://hbase.apache.org/book/snappy.compression.html (on all RSes, to be
>> sure)
>>
>> 2. Disable the table to prepare for alteration.
>>
>>  disable 't'
>>>
>> 3. Change the CF properties to use Snappy. Use the alter command,
>> for each of the column families:
>>
>>  alter 'table', {NAME=>'f1', COMPRESSION=>'SNAPPY'}, {NAME=>'f2',
>>> COMPRESSION=>'SNAPPY'} …
>>>
>> 4. Re-enable the table.
>>
>>  enable 't'
>>>
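
The codec check in step 1 above comes down to running HBase's CompressionTest utility on each regionserver; a sketch (the file path is just a scratch location):

```
hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/snappy-test snappy
```

If the native Snappy libraries are wired up correctly, it should report success; if not, it will fail before the table is ever touched.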
>> (Make sure to test the table. And do not remove the old codec
>> immediately. You need to wait until all of the table's regions have
>> been major compacted, leaving no store files written with the old
>> codec. Correct me if I am wrong there!)
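
To move that along rather than waiting for compactions to happen naturally, a major compaction can be requested from the shell, and the schema change confirmed with describe; a sketch, table name again a placeholder:

```
major_compact 't1'
describe 't1'
```

Note that major_compact only queues the compaction; the store files are rewritten with the new codec as it runs across the regions.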
>>
>> On Thu, May 10, 2012 at 7:26 AM, Jiajun Chen <[EMAIL PROTECTED]> wrote:
>>
>>> How to convert an existing table that used LZO compression to Snappy?
>>>
>>> On 10 May 2012 05:27, Doug Meil <[EMAIL PROTECTED]> wrote:
>>>
>>>> I'll update the RefGuide with this. This is a good thing for
>>>> everybody to know.
>>>>
>>>>
>>>> On 5/9/12 5:08 PM, "Jean-Daniel Cryans" <[EMAIL PROTECTED]> wrote:
>>>>
>>>>  Just alter the families, the old store files will get converted during
>>>>> compaction later on.
>>>>>
>>>>> J-D
>>>>>
>>>>> On Wed, May 9, 2012 at 2:06 PM, Otis Gospodnetic
>>>>> <[EMAIL PROTECTED]>  wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> Based on the example on
>>>>>> http://hbase.apache.org/book/snappy.compression.html and some
>>>>>> search-hadoop.com searches, I'm guessing it's not possible to switch
>>>>>> an existing HBase table to use Snappy, i.e. a new table with Snappy
>>>>>> needs to be created and the old data imported into it.
>>>>>>
>>>>>> Is this correct?
>>>>>>
>>>>>> Thanks,
>>>>>> Otis
>>>>>> ----
>>>>>> Performance Monitoring for Solr / ElasticSearch / HBase -
>>>>>> http://sematext.com/spm
>>>>>>
>>>>>
>>>>
>>>>
>>
>>
> --
> Jeff Whiting
> Qualtrics Senior Software Engineer
> [EMAIL PROTECTED]
>
>