Re: flushing + compactions after config change
Hey Viral,
Which HBase version are you using?

On Thu, Jun 27, 2013 at 5:03 PM, Anoop John <[EMAIL PROTECTED]> wrote:

> The config "hbase.regionserver.maxlogs" specifies the maximum number of
> log files and defaults to 32.  But remember that if there are that many
> log files to replay, the MTTR will be longer (in the case where an RS goes down).
>
> -Anoop-
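A rough way to see why the default of 32 ends up forcing flushes on a write-heavy region server is to compare the total WAL capacity (maxlogs times the size at which a log is rolled) with the global memstore capacity. The numbers below are only assumptions for illustration (an 8 GB heap, a 64 MB roll size, and the 0.4 default for hbase.regionserver.global.memstore.upperLimit), not an official formula:

public class WalCapacityEstimate {
  public static void main(String[] args) {
    // All of these values are assumptions for the example, not measured settings.
    long heapBytes = 8L << 30;            // assume an 8 GB region server heap
    double memstoreUpperLimit = 0.4;      // hbase.regionserver.global.memstore.upperLimit default
    long walRollSize = 64L << 20;         // roughly hbase.regionserver.hlog.blocksize, often 64-128 MB

    long memstoreCapacity = (long) (heapBytes * memstoreUpperLimit);
    long walFilesNeeded = (memstoreCapacity + walRollSize - 1) / walRollSize;

    // If this number is well above maxlogs (32 by default), flushes will be
    // driven by "Too many hlogs" rather than by the memstores filling up.
    System.out.println("WAL files needed to cover the memstore capacity ~= " + walFilesNeeded);
  }
}

With those example numbers the answer is around 52, i.e. above the default maxlogs of 32, so the "Too many hlogs" flushes kick in first; raising maxlogs trades that off against the longer replay Anoop mentions.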
> On Thu, Jun 27, 2013 at 1:59 PM, Viral Bajaria <[EMAIL PROTECTED]> wrote:
>
> > Thanks Liang!
> >
> > Found the logs. I had gone overboard with my greps and missed the "Too
> > many hlogs" line for the regions that I was trying to debug.
> >
> > A few sample log lines:
> >
> > 2013-06-27 07:42:49,602 INFO org.apache.hadoop.hbase.regionserver.wal.HLog: Too many hlogs: logs=33, maxlogs=32; forcing flush of 2 regions(s): 0e940167482d42f1999b29a023c7c18a, 3f486a879418257f053aa75ba5b69b14
> > 2013-06-27 08:10:29,996 INFO org.apache.hadoop.hbase.regionserver.wal.HLog: Too many hlogs: logs=33, maxlogs=32; forcing flush of 1 regions(s): 0e940167482d42f1999b29a023c7c18a
> > 2013-06-27 08:17:44,719 INFO org.apache.hadoop.hbase.regionserver.wal.HLog: Too many hlogs: logs=33, maxlogs=32; forcing flush of 2 regions(s): 0e940167482d42f1999b29a023c7c18a, e380fd8a7174d34feb903baa97564e08
> > 2013-06-27 08:23:45,357 INFO org.apache.hadoop.hbase.regionserver.wal.HLog: Too many hlogs: logs=33, maxlogs=32; forcing flush of 3 regions(s): 0e940167482d42f1999b29a023c7c18a, 3f486a879418257f053aa75ba5b69b14, e380fd8a7174d34feb903baa97564e08
> >
> > Any pointers on the best practice for avoiding this scenario?
> >
> > Thanks,
> > Viral
> >
> > On Thu, Jun 27, 2013 at 1:21 AM, 谢良 <[EMAIL PROTECTED]> wrote:
> >
> > > If the global memstore upper limit has been reached, you'll find "Blocking
> > > updates on" in your log files (see MemStoreFlusher.reclaimMemStoreMemory);
> > > if it's caused by too many log files, you'll find "Too many hlogs:
> > > logs=" (see HLog.cleanOldLogs).
> > > Hope it's helpful for you :)
> > >
> > > Best,
> > > Liang
> > >
> >
>
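Following up on Liang's pointers: a quick way to check which of the two limits a region server is actually running with is to dump the relevant settings. A minimal sketch, assuming a 0.94-era HBase client jar on the classpath (the property names and the fallback defaults below are the ones from that era):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Prints the settings behind the two flush triggers Liang mentions:
// the global memstore limit ("Blocking updates on ...") and the
// WAL count limit ("Too many hlogs: logs=...").
public class ShowFlushSettings {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create(); // reads hbase-site.xml from the classpath
    System.out.println("hbase.regionserver.maxlogs = "
        + conf.getInt("hbase.regionserver.maxlogs", 32));
    System.out.println("hbase.regionserver.global.memstore.upperLimit = "
        + conf.getFloat("hbase.regionserver.global.memstore.upperLimit", 0.4f));
    System.out.println("hbase.regionserver.global.memstore.lowerLimit = "
        + conf.getFloat("hbase.regionserver.global.memstore.lowerLimit", 0.35f));
  }
}

Run it with the same hbase-site.xml the region servers use; if maxlogs is small relative to the write volume, you'll keep seeing the "Too many hlogs" flushes from the log excerpts above.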