HBase, mail # user - Table in Inconsistent State; Perpetually pending region server transitions while loading lot of data into Hbase via MR


Earlier messages in this thread (collapsed):
- Ameya Kantikar 2012-11-01, 07:02
- Ameya Kantikar 2012-11-01, 07:29
- Cheng Su 2012-11-01, 08:19
- Ameya Kantikar 2012-11-01, 08:43
- Cheng Su 2012-11-01, 09:16
Re: Table in Inconsistent State; Perpetually pending region server transitions while loading lot of data into Hbase via MR
ramkrishna vasudevan 2012-11-01, 10:20
Can you try restarting the cluster, i.e. the master and the region servers?
If the problem persists, also try clearing the ZooKeeper data and restarting.

Regards
Ram
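A minimal sketch of the restart-and-clear-ZooKeeper suggestion above, assuming an HBase 0.9x-era layout. The script names and the `/hbase` parent znode path are assumptions — verify them against your install, and note that removing the znodes discards in-flight region-transition state, which is the point here.

```shell
# Hedged sketch of the restart + ZK-clear procedure suggested above.
# Script names and the /hbase parent znode are assumptions for 0.9x.
bin/stop-hbase.sh            # stop the master and all region servers
bin/hbase zkcli rmr /hbase   # remove HBase's (rebuildable) znodes
bin/start-hbase.sh           # restart; the master recreates the znodes
```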

On Thu, Nov 1, 2012 at 2:46 PM, Cheng Su <[EMAIL PROTECTED]> wrote:

> Sorry, my mistake. Please ignore the part about the "max store size of a single CF".
>
> m(_ _)m
>
> On Thu, Nov 1, 2012 at 4:43 PM, Ameya Kantikar <[EMAIL PROTECTED]> wrote:
> > Thanks Cheng. I'll try increasing my max region size limit.
> >
> > However I am not clear with this math:
> >
> > "Since you set the max file size to 2G, you can only store 2XN G data
> > into a single CF."
> >
> > Why is that? My assumption is that even though a single region can only be
> > 2 GB, I can still have hundreds of regions, and hence can store 200GB+ of
> > data in a single CF on my 10-machine cluster.
> >
> > Ameya
> >
> >
> > On Thu, Nov 1, 2012 at 1:19 AM, Cheng Su <[EMAIL PROTECTED]> wrote:
> >
> >> I ran into the same problem recently.
> >> I'm not sure the error log is exactly the same, but I do get the
> >> same exception
> >>
> >> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException:
> >> Failed 1 action: NotServingRegionException: 1 time, servers with
> >> issues: smartdeals-hbase8-snc1.snc1:60020,
> >>
> >> and the table is also neither enabled nor disabled, thus I can't drop it.
> >>
> >> I guess the problem is the total store size.
> >> How many region servers do you have?
> >> Since you set the max file size to 2G, you can only store 2XN G data
> >> into a single CF.
> >> (N is the number of your region servers)
> >>
> >> You might want to increase the max file size or the number of region servers.
> >>
> >> On Thu, Nov 1, 2012 at 3:29 PM, Ameya Kantikar <[EMAIL PROTECTED]>
> wrote:
> >> > One more thing, the Hbase table in question is neither enabled, nor
> >> > disabled:
> >> >
> >> > hbase(main):006:0> is_disabled 'userTable1'
> >> > false
> >> >
> >> > 0 row(s) in 0.0040 seconds
> >> >
> >> > hbase(main):007:0> is_enabled 'userTable1'
> >> > false
> >> >
> >> > 0 row(s) in 0.0040 seconds
> >> >
> >> > Ameya
> >> >
> >> > On Thu, Nov 1, 2012 at 12:02 AM, Ameya Kantikar <[EMAIL PROTECTED]>
> >> wrote:
> >> >
> >> >> Hi,
> >> >>
> >> >> I am trying to load a lot of data (around 1.5 TB) into a single HBase
> >> >> table. I have set the region size to 2 GB. I also
> >> >> set hbase.regionserver.handler.count to 30.
> >> >>
> >> >> When I start loading data via MR, after a while, tasks start failing
> >> >> with the following error:
> >> >>
> >> >> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException:
> >> >> Failed 1 action: NotServingRegionException: 1 time, servers with issues:
> >> >> smartdeals-hbase8-snc1.snc1:60020,
> >> >>       at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1641)
> >> >>       at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1409)
> >> >>       at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:943)
> >> >>       at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:820)
> >> >>       at org.apache.hadoop.hbase.client.HTable.put(HTable.java:795)
> >> >>       at com..mr.hbase.LoadUserCacheInHbase$TokenizerMapper.map(LoadUserCacheInHbase.java:83)
> >> >>       at com..mr.hbase.LoadUserCacheInHbase$TokenizerMapper.map(LoadUserCacheInHbase.java:33)
> >> >>       at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:140)
> >> >>       at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:645)
> >> >>       at org.apache.hadoop.mapred.MapTask.run(MapTask.j
> >> >>
> >> >> On the hbase8 machine I see the following in the logs:
> >> >>
> >> >> ERROR org.apache.hadoop.hbase.regionserver.wal.HLog: Error while
> >> >> syncing, requesting close of hlog
> >> >> java.io.IOException: Reflection
> >> >>         at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:230)
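Ameya's capacity reasoning in the quoted message above can be checked with quick shell arithmetic. The 1.5 TB, 2 GB region size, and 10-machine figures come from the thread; the rest is back-of-the-envelope, and it shows why total capacity is bounded by region count, not by one region per server.

```shell
# Back-of-the-envelope check of the capacity discussion above.
# 1.5 TB, 2 GB regions, and 10 servers are figures from the thread.
total_gb=1536                        # ~1.5 TB of data to load
region_gb=2                          # max region size from the thread
servers=10                           # region servers in the cluster

regions=$(( total_gb / region_gb ))  # regions needed to hold the data
echo "regions needed: $regions"                       # 768
echo "regions per server: $(( regions / servers ))"   # 76
```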
Later replies in this thread (collapsed):
- Kevin Odell 2012-11-01, 13:35
- Ameya Kantikar 2012-11-01, 19:44
- Kevin Odell 2012-11-01, 19:55
- Ameya Kantikar 2012-11-01, 23:56
- Ameya Kantikar 2012-11-03, 00:10
- Michael Segel 2012-11-01, 14:50
- Kevin Odell 2012-11-01, 15:35
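For the symptom earlier in the thread — a table that reports false for both `is_enabled` and `is_disabled` — a common diagnostic in this era of HBase is `hbck`. This is an assumption on the editor's part (the collapsed replies may or may not suggest it); check `hbase hbck -h` for the flags your version supports before running anything with side effects.

```shell
# Hedged sketch: inspect, then repair, table/region inconsistencies.
# -fixAssignments is an assumption for 0.90.7+/0.92-era hbck.
bin/hbase hbck                    # report-only: list inconsistencies
bin/hbase hbck -fixAssignments    # reassign regions stuck in transition
```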