HBase user mailing list: Table in Inconsistent State; Perpetually pending region server transitions while loading lots of data into HBase via MR


Re: Table in Inconsistent State; Perpetually pending region server transitions while loading lots of data into HBase via MR
A couple of thoughts (it is still early here, so bear with me):

Did you presplit your table?
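
A minimal sketch, assuming the 0.92-era Java client, of pre-splitting a table at
creation time; the table name, column family, and split points below are
hypothetical and should be chosen to match the real row-key distribution:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.client.HBaseAdmin;
    import org.apache.hadoop.hbase.util.Bytes;

    public class PreSplitExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);

        HTableDescriptor desc = new HTableDescriptor("userTable1"); // hypothetical name
        desc.addFamily(new HColumnDescriptor("cf"));                // hypothetical CF

        // Nine split points -> ten regions spread across the region servers,
        // so the MR load does not funnel every put into one region at the start.
        byte[][] splits = new byte[][] {
            Bytes.toBytes("1"), Bytes.toBytes("2"), Bytes.toBytes("3"),
            Bytes.toBytes("4"), Bytes.toBytes("5"), Bytes.toBytes("6"),
            Bytes.toBytes("7"), Bytes.toBytes("8"), Bytes.toBytes("9")
        };
        admin.createTable(desc, splits);
      }
    }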

You are on 0.92, so you might as well take advantage of HFile v2 and use
10 GB region sizes.
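
A hedged sketch of one way to get those 10 GB regions: either raise
hbase.hregion.max.filesize in hbase-site.xml, or set it per table from the
Java client as below (table/CF names are again hypothetical):

    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;

    public class TenGigRegionDescriptor {
      // Builds a descriptor with a per-table region size override; pass it to
      // HBaseAdmin.createTable(...) as in the pre-split sketch above.
      static HTableDescriptor create() {
        HTableDescriptor desc = new HTableDescriptor("userTable1");  // hypothetical
        desc.addFamily(new HColumnDescriptor("cf"));                 // hypothetical
        // Per-table equivalent of hbase.hregion.max.filesize. With HFile v2 in
        // 0.92, 10 GB regions keep the region count manageable for a ~1.5 TB load.
        desc.setMaxFileSize(10L * 1024 * 1024 * 1024);               // 10 GB
        return desc;
      }
    }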

Loading over MR, I am assuming with puts? Did you tune your memstore and
HLog sizes?
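
A minimal sketch, assuming a 0.92-era map-only job that writes puts directly
through HTable (as the stack trace further down suggests), of the client-side
half of that tuning: buffer puts instead of flushing them one at a time. The
class, table, and column names are invented; the server-side memstore/HLog
knobs (e.g. hbase.hregion.memstore.flush.size, hbase.regionserver.maxlogs)
live in hbase-site.xml and are only referenced in the comments.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Hypothetical mapper, loosely modeled on the LoadUserCacheInHbase job in
    // the stack trace below; names and column layout are invented.
    public class BufferedPutMapper extends Mapper<LongWritable, Text, Text, Text> {
      private HTable table;

      @Override
      protected void setup(Context context) throws IOException {
        Configuration conf = HBaseConfiguration.create(context.getConfiguration());
        table = new HTable(conf, "userTable1");
        table.setAutoFlush(false);                 // buffer puts client-side
        table.setWriteBufferSize(8 * 1024 * 1024); // flush in ~8 MB batches
        // Server side (hbase-site.xml), hbase.hregion.memstore.flush.size and
        // hbase.regionserver.maxlogs are the memstore/HLog knobs referred to above.
      }

      @Override
      protected void map(LongWritable key, Text value, Context context)
          throws IOException {
        String[] fields = value.toString().split("\t");   // hypothetical input format
        Put put = new Put(Bytes.toBytes(fields[0]));
        put.add(Bytes.toBytes("cf"), Bytes.toBytes("v"), Bytes.toBytes(fields[1]));
        table.put(put);                            // goes into the write buffer
      }

      @Override
      protected void cleanup(Context context) throws IOException {
        table.flushCommits();                      // push any remaining buffered puts
        table.close();
      }
    }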

You aren't using a different client version or something strange like that,
are you?

The "can't close hlog" messages seem to indicate an inability to talk to
HDFS. Did you have connection issues there?

On Thu, Nov 1, 2012 at 5:20 AM, ramkrishna vasudevan <[EMAIL PROTECTED]> wrote:

> Can you try restarting the cluster, i.e. the master and the RSs?
> Also, if this persists, try clearing the ZK data and restarting.
>
> Regards
> Ram
>
> On Thu, Nov 1, 2012 at 2:46 PM, Cheng Su <[EMAIL PROTECTED]> wrote:
>
> > Sorry, my mistake. Please ignore the part about the "max store size of a
> > single CF".
> >
> > m(_ _)m
> >
> > On Thu, Nov 1, 2012 at 4:43 PM, Ameya Kantikar <[EMAIL PROTECTED]> wrote:
> > > Thanks Cheng. I'll try increasing my max region size limit.
> > >
> > > However, I am not clear on this math:
> > >
> > > "Since you set the max file size to 2G, you can only store 2XN G data
> > > into a single CF."
> > >
> > > Why is that? My assumption is that even though a single region can only
> > > be 2 GB, I can still have hundreds of regions, and hence can store
> > > 200 GB+ of data in a single CF on my 10-machine cluster.
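
(For rough context, the arithmetic behind that assumption: at 2 GB per region,
~1.5 TB works out to on the order of 750 regions, i.e. roughly 75 regions per
server on a 10-machine cluster, so the per-region cap alone does not limit the
total size of a CF.)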
> > >
> > > Ameya
> > >
> > >
> > > On Thu, Nov 1, 2012 at 1:19 AM, Cheng Su <[EMAIL PROTECTED]> wrote:
> > >
> > >> I ran into the same problem recently.
> > >> I'm not sure the error log is exactly the same, but I do have the
> > >> same exception:
> > >>
> > >> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException:
> > >> Failed 1 action: NotServingRegionException: 1 time, servers with
> > >> issues: smartdeals-hbase8-snc1.snc1:60020,
> > >>
> > >> and the table is also neither enabled nor disabled, thus I can't drop it.
> > >>
> > >> I guess the problem is the total store size.
> > >> How many region servers do you have?
> > >> Since you set the max file size to 2 GB, you can only store 2×N GB of
> > >> data into a single CF.
> > >> (N is the number of your region servers.)
> > >>
> > >> You might want to increase the max file size or add region servers.
> > >>
> > >> On Thu, Nov 1, 2012 at 3:29 PM, Ameya Kantikar <[EMAIL PROTECTED]> wrote:
> > >> > One more thing: the HBase table in question is neither enabled nor
> > >> > disabled:
> > >> >
> > >> > hbase(main):006:0> is_disabled 'userTable1'
> > >> > false
> > >> >
> > >> > 0 row(s) in 0.0040 seconds
> > >> >
> > >> > hbase(main):007:0> is_enabled 'userTable1'
> > >> > false
> > >> >
> > >> > 0 row(s) in 0.0040 seconds
> > >> >
> > >> > Ameya
> > >> >
> > >> > On Thu, Nov 1, 2012 at 12:02 AM, Ameya Kantikar <[EMAIL PROTECTED]> wrote:
> > >> >
> > >> >> Hi,
> > >> >>
> > >> >> I am trying to load a lot of data (around 1.5 TB) into a single
> > >> >> HBase table. I have set the region size to 2 GB. I also
> > >> >> set hbase.regionserver.handler.count to 30.
> > >> >>
> > >> >> When I start loading data via MR, after a while, tasks start failing
> > >> >> with the following error:
> > >> >>
> > >> >> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException:
> > >> >> Failed 1 action: NotServingRegionException: 1 time, servers with
> > >> >> issues: smartdeals-hbase8-snc1.snc1:60020,
> > >> >>       at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1641)
> > >> >>       at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1409)
> > >> >>       at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:943)
> > >> >>       at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:820)
> > >> >>       at org.apache.hadoop.hbase.client.HTable.put(HTable.java:795)
> > >> >>       at com..mr.hbase.LoadUserCacheInHbase$TokenizerMapper.map(LoadUserCacheInHbase.java:83)
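
A hedged aside on the exception above: RetriesExhaustedWithDetailsException
means the client gave up after its configured number of retries while the
region was unavailable. One illustrative, client-side mitigation while regions
are splitting or moving during a heavy load is to give the job's client more
retry headroom; the values below are examples only:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class ClientRetryTuning {
      // Returns a client Configuration with more retry headroom; pass it to the
      // MR job / HTable so transient NotServingRegionExceptions during splits
      // are retried longer before the task fails.
      static Configuration create() {
        Configuration conf = HBaseConfiguration.create();
        conf.setInt("hbase.client.retries.number", 20); // example value (default of 10 in this era)
        conf.setLong("hbase.client.pause", 2000);       // example: 2 s between retries
        return conf;
      }
    }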

Kevin O'Dell
Customer Operations Engineer, Cloudera