HBase >> mail # user >> Uneven write request to regions


Asaf Mesika 2013-11-14, 08:59
Jia Wang 2013-11-14, 10:06

Asaf Mesika 2013-11-14:
Re: Uneven write request to regions
It's from the same table.
The thing is that some <customerId>s simply have less data saved in HBase,
while others have up to 50x as much.
I'm trying to check how people have designed their rowkey around this, or
found another out-of-the-box solution for it.
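One common way to absorb this kind of per-customer skew is to derive the bucket component of the key from a hash of the record rather than the customer, so a heavy customer's writes fan out across several key ranges (and hence regions) instead of one. A minimal sketch, assuming a `<customerId><bucket><timestamp><uniqueId>` layout; the bucket count, field encoding, and names here are illustrative assumptions, not from the thread:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class SaltedRowKey {
    // Hypothetical bucket count; in practice sized so one hot customer
    // spreads over several regions (assumption, not from the thread).
    static final int BUCKETS = 16;

    // Builds <customerId><bucket><timestamp><uniqueId>. The bucket is
    // derived from uniqueId, so writes for a single heavy customer are
    // distributed over BUCKETS distinct key prefixes.
    static byte[] rowKey(String customerId, long timestamp, String uniqueId) {
        byte bucket = (byte) ((uniqueId.hashCode() & 0x7fffffff) % BUCKETS);
        byte[] cust = customerId.getBytes(StandardCharsets.UTF_8);
        byte[] uid = uniqueId.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocate(cust.length + 1 + 8 + uid.length);
        buf.put(cust).put(bucket).putLong(timestamp).put(uid);
        return buf.array();
    }
}
```

The trade-off is on the read side: scanning one customer's data then means issuing BUCKETS parallel scans, one per bucket prefix, and merging the results.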

On Thu, Nov 14, 2013 at 12:06 PM, Jia Wang <[EMAIL PROTECTED]> wrote:

> Hi
>
> Are the regions from the same table? If so, check your row key design:
> you can find the start and end row keys for each region, from which you
> can see why a request with a specific row key doesn't hit a given
> region.
>
> If the regions are from different tables, you may consider combining
> some cold regions for some tables.
>
> Thanks
> Ramon
>
>
> On Thu, Nov 14, 2013 at 4:59 PM, Asaf Mesika <[EMAIL PROTECTED]>
> wrote:
>
> > Hi,
> >
> > Has anyone run into a case where a Region Server is hosting regions
> > in which some regions get lots of write requests, while the rest get
> > maybe 1/1000 of that write rate?
> >
> > This leads to a situation where the HLog queue reaches its maxlogs
> > limit, since the HLogs containing the puts from slow-write regions are
> > "stuck" until those regions flush. Since those regions barely reach
> > their 256MB flush limit (our configuration), they won't flush. The
> > HLog queue keeps growing due to the fast-write regions, until it
> > reaches the stress mode of "We have too many logs".
> > This in turn flushes out lots of regions, many of them (about 100)
> > ultra small (10k - 3mb). After 3 rounds like this, the compaction
> > queue gets very big... In the end the region server drops dead, and
> > this load somehow is moved to another RS, ...
> >
> > We are running 0.94.7 with 30 RS.
> >
> > I was wondering how people have handled a mix of slow-write-rate and
> > high-write-rate regions on one RS. I was thinking of writing a custom
> > load balancer, which keeps tabs on the write request count and
> > memstore size, and moves all the slow-write regions to a dedicated 20%
> > of the cluster's RSs, thus leaving the fast-write regions free to work.
> >
> > Since this issue is hammering our production, we're about to try
> > shutting down the WAL, and risk losing some information in those
> > slow-write regions until we can come up with a better solution.
> >
> > Any advice would be highly appreciated.
> >
> > Oh - our rowkey is quite normal:
> > <customerId><bucket><Timestamp><uniqueId>
> >
> > Thanks!
> >
>
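The quoted suggestion — inspecting each region's start and end keys to see where a given row lands — can be done from the 0.94 client with `HTable#getStartEndKeys()`; which region receives a row is then a lexicographic search over the sorted start keys. A self-contained sketch of that lookup, assuming the start keys came from that call (the unsigned byte comparison mirrors how HBase orders row keys; `regionFor` is a hypothetical helper name):

```java
public class RegionLocator {
    // Unsigned lexicographic compare, the order HBase uses for row keys.
    static int compare(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    // Given the table's region start keys in sorted order (the first
    // region's start key is the empty byte array), returns the index of
    // the region that would receive `row`: the last region whose start
    // key is <= row.
    static int regionFor(byte[][] startKeys, byte[] row) {
        int idx = 0;
        for (int i = 1; i < startKeys.length; i++) {
            if (compare(startKeys[i], row) <= 0) idx = i;
            else break;
        }
        return idx;
    }
}
```

Tallying writes per region index this way shows directly whether a handful of key ranges are absorbing most of the traffic.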
Further replies in this thread:
Jia Wang 2013-11-15, 01:51
Bharath Vissapragada 2013-11-15, 03:39
Asaf Mesika 2013-11-15, 21:28
Ted Yu 2013-11-15, 21:34
Asaf Mesika 2013-11-16, 05:56
Ted Yu 2013-11-16, 06:16
Asaf Mesika 2013-11-16, 18:41
Mike Axiak 2013-11-16, 17:25
Asaf Mesika 2013-11-16, 19:16
Himanshu Vashishtha 2013-11-20, 01:05
Asaf Mesika 2013-11-20, 06:00
Otis Gospodnetic 2013-11-20, 15:43
Tom Brown 2013-11-20, 17:04
Asaf Mesika 2013-11-20, 17:14
Ted Yu 2013-11-20, 17:17
Asaf Mesika 2013-11-20, 17:01
Tom Brown 2013-11-16, 05:18
Asaf Mesika 2013-11-16, 05:59