Re: HBase Random Read latency > 100ms
Hi Bharath,

I am a little confused about the metrics displayed by Cloudera. Even when
there are no operations, the gc_time metric shows a constant 2s in the
graph. Is this the CMS gc_time (in which case there is no JVM pause) or the
actual GC pause time?

The GC timings reported earlier are the average of the gc_time metric
across all region servers.
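
To separate concurrent CMS work from actual stop-the-world pauses, one
option is to turn on GC logging on one region server. A minimal sketch for
hbase-env.sh, assuming CMS is in use (the log path is only an example):

  export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS \
    -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
    -XX:+PrintGCApplicationStoppedTime \
    -Xloggc:/tmp/gc-regionserver.log"

The -XX:+PrintGCApplicationStoppedTime lines show real pause durations,
independent of whatever the Cloudera graph is aggregating.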

Regards,
Ramu
On Mon, Oct 7, 2013 at 9:10 PM, Ramu M S <[EMAIL PROTECTED]> wrote:

> Jean,
>
> Yes. It is 2 drives.
>
> - Ramu
>
>
> On Mon, Oct 7, 2013 at 8:45 PM, Jean-Marc Spaggiari <
> [EMAIL PROTECTED]> wrote:
>
>> Quick question on the disk side.
>>
>> When you say:
>> 800 GB SATA (7200 RPM) Disk
>> Is it 1x800GB? If it's RAID 1, it might be 2 drives? What's the
>> configuration?
>>
>> JM
>>
>>
>> 2013/10/7 Ramu M S <[EMAIL PROTECTED]>
>>
>> > Lars, Bharath,
>> >
>> > Compression is disabled for the table. This was not intentional for
>> > the evaluation; I forgot to specify it during table creation. I will
>> > enable Snappy and run a major compaction again.
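>> >
>> > A minimal sketch of what I plan to run in the HBase shell (the table
>> > and family names here are placeholders, not our actual ones):
>> >
>> >   disable 'usertable'
>> >   alter 'usertable', {NAME => 'cf', COMPRESSION => 'SNAPPY'}
>> >   enable 'usertable'
>> >   major_compact 'usertable'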
>> >
>> > Please suggest other options to try out, as well as any answers to
>> > the previous questions.
>> >
>> > Thanks,
>> > Ramu
>> >
>> >
>> > On Mon, Oct 7, 2013 at 6:35 PM, Ramu M S <[EMAIL PROTECTED]> wrote:
>> >
>> > > Bharath,
>> > >
>> > > I was about to report this. Yes, indeed there is too much GC time.
>> > > I just verified the GC time using the Cloudera Manager statistics
>> > > (updated every minute).
>> > >
>> > > For each Region Server,
>> > >  - During Read: Graph shows a constant 2s.
>> > >  - During Compaction: Graph starts at 7s and goes as high as 20s
>> > >    toward the end.
>> > >
>> > > A few more questions:
>> > > 1. For the current evaluation, since the reads are completely random
>> > > and I don't expect to read the same data again, can I set the heap
>> > > to the default 1 GB?
>> > >
>> > > 2. Can I completely turn off the BLOCK CACHE for this table?
>> > >    http://hbase.apache.org/book/regionserver.arch.html recommends
>> > >    that for random reads. (A sketch follows these questions.)
>> > >
>> > > 3. In the next phase of the evaluation, we are interested in using
>> > > HBase as an in-memory KV DB by keeping the latest data in RAM (to
>> > > the tune of around 128 GB in each RS; we are setting up a 50-100
>> > > node cluster). I am very curious to hear any suggestions in this
>> > > regard.
>> > >
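>> > > For question 2, a minimal HBase shell sketch (the table and family
>> > > names are placeholders). The commented line is only an idea for
>> > > question 3, using the in-memory section of the block cache:
>> > >
>> > >   disable 'usertable'
>> > >   alter 'usertable', {NAME => 'cf', BLOCKCACHE => 'false'}
>> > >   # alter 'usertable', {NAME => 'cf', IN_MEMORY => 'true'}
>> > >   enable 'usertable'
>> > >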
>> > > Regards,
>> > > Ramu
>> > >
>> > >
>> > > On Mon, Oct 7, 2013 at 5:50 PM, Bharath Vissapragada <
>> > > [EMAIL PROTECTED]> wrote:
>> > >
>> > >> Hi Ramu,
>> > >>
>> > >> Thanks for reporting the results back. Just curious if you are
>> > >> hitting any big GC pauses due to block cache churn on such a large
>> > >> heap. Do you see it?
>> > >>
>> > >> - Bharath
>> > >>
>> > >>
>> > >> On Mon, Oct 7, 2013 at 1:42 PM, Ramu M S <[EMAIL PROTECTED]> wrote:
>> > >>
>> > >> > Lars,
>> > >> >
>> > >> > After changing the BLOCKSIZE to 16KB, the latency has reduced a
>> > >> > little. The average is now around 75ms.
>> > >> > Overall throughput (I am using 40 clients to fetch records) is
>> > >> > around 1K OPS.
>> > >> >
>> > >> > After compaction, hdfsBlocksLocalityIndex is 91, 88, 78, 90, 99,
>> > >> > 82, 94, 97 in my 8 RS respectively.
>> > >> >
>> > >> > Thanks,
>> > >> > Ramu
>> > >> >
>> > >> >
>> > >> > On Mon, Oct 7, 2013 at 3:51 PM, Ramu M S <[EMAIL PROTECTED]> wrote:
>> > >> >
>> > >> > > Thanks Lars.
>> > >> > >
>> > >> > > I have changed the BLOCKSIZE to 16KB and triggered a major
>> > >> > > compaction. I will report my results once it is done.
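>> > >> > >
>> > >> > > For reference, roughly the shell commands I used (the table and
>> > >> > > family names are placeholders; 16384 bytes = 16KB):
>> > >> > >
>> > >> > >   disable 'usertable'
>> > >> > >   alter 'usertable', {NAME => 'cf', BLOCKSIZE => '16384'}
>> > >> > >   enable 'usertable'
>> > >> > >   major_compact 'usertable'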
>> > >> > >
>> > >> > > - Ramu
>> > >> > >
>> > >> > >
>> > >> > > On Mon, Oct 7, 2013 at 3:21 PM, lars hofhansl <[EMAIL PROTECTED]> wrote:
>> > >> > >
>> > >> > >> First off: 128GB heap per RegionServer. Wow. I'd be interested
>> > >> > >> to hear your experience with such a large heap for your RS.
>> > >> > >> It's definitely big enough.
>> > >> > >>
>> > >> > >>
>> > >> > >> It's interesting that 100GB does fit into the aggregate cache (of