

Re: Compaction problem
Hi Asaf,

What kind of results should we expect from the test you are suggesting?

I mean, how many MB/sec should we see on a healthy cluster?

Thanks,

JM

2013/3/26 Asaf Mesika <[EMAIL PROTECTED]>:
> The first thing I would do to find the bottleneck is to benchmark HDFS
> performance on its own.
> Create a 16 GB file (using dd), which is 2x your memory, and run
> "time hadoop fs -copyFromLocal yourFile.txt /tmp/a.txt".
> Tell us what the speed of this file copy is in MB/sec.
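>
> A minimal version of this benchmark might look like the following (the
> dd flags and paths are only placeholder suggestions):
>
>     # create a 16 GB file of zeros (16384 blocks of 1 MB each)
>     dd if=/dev/zero of=yourFile.txt bs=1M count=16384
>     # time the upload into HDFS; MB/sec = 16384 / elapsed seconds
>     time hadoop fs -copyFromLocal yourFile.txt /tmp/a.txt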
>
>
> On Mar 22, 2013, at 4:44 PM, tarang dawer <[EMAIL PROTECTED]> wrote:
>
>> Hi
>> As per my use case, I have to write around 100 GB of data, with an
>> ingestion speed of around 200 mbps. While writing, I am getting a
>> performance hit from compaction, which adds to the delay.
>> I am using an 8-core machine with 16 GB of RAM and a 2 TB 7200 RPM HDD.
>> I got some ideas from the archives, tried pre-splitting the regions (see
>> the shell sketch after the configuration below), and configured HBase
>> with the following parameters (configured in haste, so please guide me
>> if anything's out of order):
>>
>>
>>        <!-- block updates when a memstore reaches 4x the flush size -->
>>        <property>
>>                <name>hbase.hregion.memstore.block.multiplier</name>
>>                <value>4</value>
>>        </property>
>>        <!-- flush a memstore to disk once it reaches 1 GB -->
>>        <property>
>>                <name>hbase.hregion.memstore.flush.size</name>
>>                <value>1073741824</value>
>>        </property>
>>        <!-- split a region once a store file grows past 1 GB -->
>>        <property>
>>                <name>hbase.hregion.max.filesize</name>
>>                <value>1073741824</value>
>>        </property>
>>        <!-- number of store files that triggers a minor compaction -->
>>        <property>
>>                <name>hbase.hstore.compactionThreshold</name>
>>                <value>5</value>
>>        </property>
>>        <!-- 0 disables time-based major compactions -->
>>        <property>
>>                <name>hbase.hregion.majorcompaction</name>
>>                <value>0</value>
>>        </property>
>>        <!-- max time (ms) writes stay blocked waiting for a compaction -->
>>        <property>
>>                <name>hbase.hstore.blockingWaitTime</name>
>>                <value>30000</value>
>>        </property>
>>        <!-- block updates once a store exceeds this many store files -->
>>        <property>
>>                <name>hbase.hstore.blockingStoreFiles</name>
>>                <value>200</value>
>>        </property>
>>        <!-- client lease period, e.g. scanner timeout (ms) -->
>>        <property>
>>                <name>hbase.regionserver.lease.period</name>
>>                <value>3000000</value>
>>        </property>
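>>
>> For context, a table can be pre-split from the HBase shell along these
>> lines (the table name, column family, and split keys here are only
>> illustrative placeholders):
>>
>>     create 'mytable', 'cf', SPLITS => ['row100', 'row200', 'row300']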
>>
>>
>> But I am still not able to achieve the optimal rate; I am getting around
>> 110 mbps. Could you please suggest some optimizations?
>>
>> Thanks
>> Tarang Dawer
>>
>>
>>
>>
>>
>> On Fri, Mar 22, 2013 at 6:05 PM, Jean-Marc Spaggiari <
>> [EMAIL PROTECTED]> wrote:
>>
>>> Hi Tarang,
>>>
>>> I would recommend that you take a look at the list archives first to
>>> see all the discussions related to compaction. You will find many
>>> interesting hints and tips.
>>>
>>>
>>> http://search-hadoop.com/?q=compactions&fc_project=HBase&fc_type=mail+_hash_+user
>>>
>>> After that, you will need to provide more details regarding how you
>>> are using HBase and how the compaction is impacting you.
>>>
>>> JM
>>>
>>> 2013/3/22 tarang dawer <[EMAIL PROTECTED]>:
>>>> Hi
>>>> I am currently using HBase 0.94.2. Its write performance is being
>>>> affected by compaction.
>>>> Could you please suggest some quick tips on how to deal with it?
>>>>
>>>> Thanks
>>>> Tarang Dawer
>>>
>