Re: hbase-0.94.6.1 balancer issue
I have just created 50 tables and they got distributed across the 8 different
nodes at creation time.

I ran the balancer manually and they are still correctly distributed all
over the cluster.
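
(As a rough sketch of that kind of check from the hbase shell; the table and
column family names below are placeholders:)

  $ hbase shell
  hbase> # create a couple of single-region test tables
  hbase> create 'test_table_1', 'cf'
  hbase> create 'test_table_2', 'cf'
  hbase> # ask the master to run the balancer once; it prints true if it ran
  hbase> balancer
  true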

But Samir tried with only 2 nodes. I don't know if this might change the
results or not.

JM.

2013/4/12 Jean-Daniel Cryans <[EMAIL PROTECTED]>

> Samir,
>
> When you say "And at what point will the balancer start redistributing
> regions to the second server", do you mean that when you look at the
> master's web UI you see that one region server has 0 regions? That would be
> a problem. Otherwise, the line you posted in your original message should be
> repeated for each table, and globally the regions should all be correctly
> distributed... unless there's an edge case where, when you have only tables
> with 1 region, it puts them all on the same server :)
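
(As an aside, besides the master web UI, per-server region counts can also be
checked from the hbase shell; the hostnames below are placeholders and the
output is abridged and may vary by version:)

  hbase> status 'simple'
  2 live servers
      rs1.example.com:60020 ... numberOfOnlineRegions=25, ...
      rs2.example.com:60020 ... numberOfOnlineRegions=23, ...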
>
> Thx,
>
> J-D
>
>
> On Fri, Apr 12, 2013 at 12:37 PM, Samir Ahmic <[EMAIL PROTECTED]>
> wrote:
>
> > Thanks for explaining Jean-Marc,
> >
> > We have been using 0.90.4 for a very long time, and balancing was based on
> > the total number of regions. That is why I was surprised by the balancer
> > log on 0.94. Well, I'm more of an ops guy than a dev; I handle what others
> > develop :)
> >
> > Regards
> >
> >
> > On Fri, Apr 12, 2013 at 6:24 PM, Jean-Marc Spaggiari <
> > [EMAIL PROTECTED]> wrote:
> >
> > > Hi Samir,
> > >
> > > Since regions are balanced per table, as soon as you have more than one
> > > region in your table, the balancer will start to balance the regions
> > > over the servers.
> > >
> > > You can split some of those tables and you will start to see HBase
> > > balance them. This is normal behavior for 0.94; I don't know about
> > > versions before that.
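
(A minimal sketch of that, assuming a table named 'mytable' and a split key
chosen somewhere in the middle of its row space; both names are placeholders:)

  hbase> # force a split on one table; the split key here is a placeholder
  hbase> split 'mytable', 'row_midpoint'
  hbase> # once a table has more than one region, the balancer can spread them
  hbase> balancer
  true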
> > >
> > > Also, are you sure you need 48 tables, and not fewer tables with more
> > > CFs?
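
(For example, instead of many single-CF tables, one table could carry several
column families; the names below are placeholders, and the usual advice is
still to keep the number of CFs small:)

  hbase> create 'mytable', 'cf1', 'cf2', 'cf3'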
> > >
> > > JM
> > >
> > > 2013/4/12 Samir Ahmic <[EMAIL PROTECTED]>
> > >
> > > > Hi, JM
> > > >
> > > > I have 48 tables and, as you said, it is 1 region per table since I
> > > > have not reached the splitting limit yet. So this is normal behavior
> > > > in the 0.94.6.1 version? And at what point will the balancer start
> > > > redistributing regions to the second server?
> > > >
> > > > Thanks
> > > > Samir
> > > >
> > > >
> > > > On Fri, Apr 12, 2013 at 6:06 PM, Jean-Marc Spaggiari <
> > > > [EMAIL PROTECTED]> wrote:
> > > >
> > > > > Hi Samir,
> > > > >
> > > > > Regions are balanced per table.
> > > > >
> > > > > So if you have 48 regions within the same table, they should be
> > > > > split roughly 24 on each server.
> > > > >
> > > > > But if you have 48 tables with 1 region each, then for each table
> > > > > the balancer will see only 1 region and will display the message you
> > > > > saw.
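
(To spell out the arithmetic behind the "Skipping load balancing" line quoted
further down, under the assumption that the balancer looks at each table in
isolation:)

  per table: regions = 1, servers = 2
  average    = 1 / 2 = 0.5
  ceiling    = 1, floor = 0
  mostloaded = 1 <= ceiling and leastloaded = 0 >= floor,
  so the table already counts as balanced and the run is skipped.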
> > > > >
> > > > > Have you looked at the UI? What do you have in it? Can you please
> > > > > confirm whether you have 48 tables or 1 table?
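
(One quick way to confirm the table count from the hbase shell; the output
below is abridged and the table names are placeholders:)

  hbase> list
  TABLE
  table_001
  table_002
  ...
  48 row(s)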
> > > > >
> > > > > Thanks,
> > > > >
> > > > > JM
> > > > >
> > > > >
> > > > > 2013/4/12 Samir Ahmic <[EMAIL PROTECTED]>
> > > > >
> > > > > > Hi, all
> > > > > >
> > > > > > I'm evaluating hbase-0.94.6.1 and I have 48 regions on a 2-node
> > > > > > cluster. I was restarting one of the RSs and after that tried to
> > > > > > balance the cluster by running the balancer from the shell. After
> > > > > > running the command, the regions were not distributed to the
> > > > > > second RS, and I found this line in the master log:
> > > > > >
> > > > > > 2013-04-12 16:45:15,589 INFO org.apache.hadoop.hbase.master.LoadBalancer:
> > > > > > Skipping load balancing because balanced cluster; servers=2 regions=1
> > > > > > average=0.5 mostloaded=1 leastloaded=0
> > > > > >
> > > > > > This looks to me like the wrong number of regions is being
> > > > > > reported by the balancer, and that is the cause of the skipped
> > > > > > load balancing. In the hbase shell I see all 48 tables that I
> > > > > > have, and everything else looks fine.
> > > > > >
> > > > > > Did someone else see this type of behavior? Did something