HBase, mail # user - Table in Inconsistent State; Perpetually pending region server transitions while loading lot of data into Hbase via MR


Re: Table in Inconsistent State; Perpetually pending region server transitions while loading lot of data into Hbase via MR
Kevin O'Dell 2012-11-01, 19:55
Ameya,

 If your new table load goes well (did you presplit this time?), then here is
what we can do for the old one:

rm /hbase/tablename
hbck -fixMeta -fixAssignments
restart HBase if it is still present
All should be well.
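
Spelled out as commands, and hedged: the `rm` above means removing the table's directory from HDFS (destructive), and the flag names are hbck's as of the 0.92 era. A sketch only; verify paths against your own cluster before running:

```
# 1. Remove the stale table directory from HDFS (destructive: table data is gone)
hadoop fs -rmr /hbase/tablename

# 2. Have hbck rebuild META entries and region assignments
hbase hbck -fixMeta -fixAssignments

# 3. If the table still shows up stuck, restart HBase
```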

Please let us know how it goes.

On Thu, Nov 1, 2012 at 2:44 PM, Ameya Kantikar <[EMAIL PROTECTED]> wrote:

> Thanks Kevin & Ram. Please find my answers below:
>
> Did you presplit your table? - NO
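
For context: presplitting means creating the table with split points up front, so the initial MR load spreads over many regions instead of hammering the single first region. Below is a hypothetical helper (not from this thread) that computes evenly spaced split points, assuming hex-string-prefixed row keys; the resulting keys would be fed to the HBase shell's `create ... SPLITS => [...]` option or to `HBaseAdmin.createTable(desc, splits)`:

```python
# Hypothetical helper: evenly spaced split points for hex-prefixed row keys.
def hex_split_keys(n_regions):
    # n_regions regions need n_regions - 1 boundaries over the 0x00-0xff space
    step = 256 // n_regions
    return [format(i * step, "02x") for i in range(1, n_regions)]

print(hex_split_keys(4))  # ['40', '80', 'c0']
```

With row keys that are not uniformly distributed, you would instead sample existing data to pick the boundaries.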
>
> You are on .92, might as well take advantage of HFilev2 and use 10GB region
> sizes -
>
>  - I have now set my region size to 10GB and am running another load into a
> separate table, but my existing table is still in bad shape.
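
The region size knob being discussed is, assuming 0.92-era property names, `hbase.hregion.max.filesize` in hbase-site.xml; a 10GB setting would look like (value in bytes, illustrative only):

```
<property>
  <name>hbase.hregion.max.filesize</name>
  <value>10737418240</value> <!-- 10 * 1024^3 bytes = 10GB -->
</property>
```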
>
> Loading over MR, I am assuming puts?
> -Yes
>
> Did you tune your memstore and Hlog
> size?
> -Not yet. I am running with the defaults.
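
For anyone following along, the memstore and HLog knobs in question live in hbase-site.xml. Property names are the 0.92-era ones; the values below are illustrative, not recommendations:

```
<!-- hbase-site.xml: illustrative values only -->
<property>
  <name>hbase.hregion.memstore.flush.size</name>
  <value>134217728</value> <!-- flush an individual memstore at 128 MB -->
</property>
<property>
  <name>hbase.regionserver.global.memstore.upperLimit</name>
  <value>0.4</value> <!-- all memstores together may use 40% of RS heap -->
</property>
<property>
  <name>hbase.regionserver.maxlogs</name>
  <value>32</value> <!-- force flushes once this many HLogs accumulate -->
</property>
```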
>
> You aren't using a different client version or something strange like that,
> are you? - Nope. It's the same jar everywhere.
>
> The "can't close hlog" messages seem to indicate an inability to talk to
> HDFS. Did you have connection issues there?
> - I did find a log on one data node with some HDFS issue, but that was only
> one data node. All other data nodes looked good.
> Note, I also ran another big distcp job on the same cluster and did not
> find any issues.
>
> I also restarted the cluster (all nodes, including hadoop); hbase hbck is
> not showing inconsistencies, but my table is still neither enabled nor
> disabled.
> I ran the MR job to load data, but it continued to throw the same errors as
> earlier.
>
> Now I am running a separate job loading data into a brand new table, with
> max region size at 10 GB. I'll get back to you with results on that one.
> But the existing table is still not reachable.
>
> Thanks for your help.
>
> Ameya
>
>
>
>
>
> On Thu, Nov 1, 2012 at 6:35 AM, Kevin O'dell <[EMAIL PROTECTED]
> >wrote:
>
> > Couple of thoughts (it is still early here, so bear with me):
> >
> > Did you presplit your table?
> >
> > You are on .92, might as well take advantage of HFilev2 and use 10GB
> > region sizes
> >
> > Loading over MR, I am assuming puts?  Did you tune your memstore and Hlog
> > size?
> >
> > You aren't using a different client version or something strange like
> > that, are you?
> >
> > The "can't close hlog" messages seem to indicate an inability to talk to
> > HDFS. Did you have connection issues there?
> >
> >
> >
> > On Thu, Nov 1, 2012 at 5:20 AM, ramkrishna vasudevan <
> > [EMAIL PROTECTED]> wrote:
> >
> > > Can you try restarting the cluster, I mean the master and RS?
> > > Also, if this persists, try to clear the ZK data and restart.
> > >
> > > Regards
> > > Ram
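
"Clear the zk data" here usually means deleting HBase's parent znode while the cluster is stopped, so the master rebuilds its state on restart. A sketch, assuming the default `/hbase` parent znode (`zookeeper.znode.parent`) and the stock ZooKeeper CLI; adjust host and path for your setup:

```
# With HBase fully stopped:
zkCli.sh -server zkhost:2181
# then, inside the ZooKeeper shell:
rmr /hbase
# restart the HBase master and region servers afterwards
```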
> > >
> > > On Thu, Nov 1, 2012 at 2:46 PM, Cheng Su <[EMAIL PROTECTED]> wrote:
> > >
> > > > Sorry, my mistake. Ignore the "max store size of a single CF" part,
> > > > please.
> > > >
> > > > m(_ _)m
> > > >
> > > > On Thu, Nov 1, 2012 at 4:43 PM, Ameya Kantikar <[EMAIL PROTECTED]>
> > > wrote:
> > > > > Thanks Cheng. I'll try increasing my max region size limit.
> > > > >
> > > > > However, I am not clear on this math:
> > > > >
> > > > > "Since you set the max file size to 2G, you can only store 2XN G
> > > > > data into a single CF."
> > > > >
> > > > > Why is that? My assumption is, even though a single region can only
> > > > > be 2 GB, I can still have hundreds of regions, and hence can store
> > > > > 200GB+ data in a single CF on my 10-machine cluster.
> > > > >
> > > > > Ameya
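
Ameya's capacity reasoning checks out as plain arithmetic: a per-region cap limits region size, not table size, because a CF spans many regions. The numbers below are the thread's own hypotheticals:

```python
# Capacity of one column family = number of regions x max region size.
max_region_size_gb = 2   # the 2 GB hbase.hregion.max.filesize cap
regions = 100            # "hundreds of regions" across a 10-node cluster
capacity_gb = regions * max_region_size_gb
print(capacity_gb)       # 200 -> storing 200GB+ in one CF is plausible
```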
> > > > >
> > > > >
> > > > > On Thu, Nov 1, 2012 at 1:19 AM, Cheng Su <[EMAIL PROTECTED]>
> > wrote:
> > > > >
> > > > >> I met the same problem these days.
> > > > >> I'm not very sure the error log is exactly the same, but I do have
> > > > >> the same exception:
> > > > >>
> > > > >> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException:
> > > > >> Failed 1 action: NotServingRegionException: 1 time, servers with
> > > > >> issues: smartdeals-hbase8-snc1.snc1:60020,
> > > > >>
> > > > >> and the table is also neither enabled nor disabled, thus I can't

Kevin O'Dell
Customer Operations Engineer, Cloudera