HBase, mail # dev - [VOTE] The 1st hbase-0.96.0 release candidate is available for download


Re: [VOTE] The 1st hbase-0.96.0 release candidate is available for download
Jean-Marc Spaggiari 2013-09-03, 13:57
There was a typo in my log4j.properties :(

So with the typo fixed, it's working fine.

The only INFO logs I still see are these:
2013-09-03 09:53:07,313 INFO  [M:0;t430s:45176] mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2013-09-03 09:53:07,350 INFO  [M:0;t430s:45176] mortbay.log: jetty-6.1.26
But there are only a few of them.
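Those come from the embedded Jetty logger (org.mortbay.log, visible in the %c{2} field above). If we want those gone too, I think the usual per-logger override in log4j.properties should do it (untested on my side, just standard log4j configuration):

# Demote the embedded Jetty (org.mortbay) logger so its startup INFO lines go away too
log4j.logger.org.mortbay.log=WARN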

Performance-wise, here are the numbers (the higher, the better; rows per
second, except for scans where it's rows/min). As you will see, 0.96 is
slower only for RandomSeekScanTest (way slower) and RandomScanWithRange10,
but is faster for everything else. I ran the tests with the default
settings. So I think we should look at RandomSeekScanTest, but except for
this one, everything else is pretty good.

Also, I have been able to reproduce this exception:
2013-09-03 09:55:36,718 WARN  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxn: caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x140e4191edb0009, likely client has closed socket
    at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:220)
    at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
    at java.lang.Thread.run(Thread.java:662)

I just had to run PE and kill it in the middle.
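For anyone who wants to reproduce it, it was basically something like this (exact command and flags approximate, from memory, against a standalone instance started from the 0.96.0RC0 tarball; any PE command should trigger it):

# Start a PerformanceEvaluation client in the background, then kill it mid-run.
# The client's ZooKeeper session is left half-open, which is when the
# EndOfStreamException above shows up on the server side.
bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --nomapred randomWrite 1 &
sleep 30
kill %1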

JM

All tests are org.apache.hadoop.hbase.PerformanceEvaluation inner classes;
the last column is 0.96.0RC0 relative to 0.94.11.

Test                         0.96.0RC0   0.94.11 0.96/0.94
FilteredScanTest                 10.28     10.17   101.12%
RandomReadTest                  966.01    810.58   119.18%
RandomSeekScanTest               98.50    255.71    38.52%
RandomWriteTest               39251.17  25682.11   152.83%
RandomScanWithRange10Test     25844.88  28715.29    90.00%
RandomScanWithRange100Test    20029.48  18022.39   111.14%
RandomScanWithRange1000Test    2692.16   2346.85   114.71%
SequentialReadTest             3002.18   2875.83   104.39%
SequentialWriteTest           38995.50  26693.23   146.09%
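The rows above come from the standard PE commands run through the same entry point as in the reproduction above, one client each with the default number of rows; command names are approximate, e.g.:

bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --nomapred randomRead 1
bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --nomapred randomSeekScan 1
bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --nomapred scanRange10 1
bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --nomapred sequentialWrite 1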
2013/9/3 Stack <[EMAIL PROTECTED]>

> On Mon, Sep 2, 2013 at 10:51 AM, Jean-Marc Spaggiari <
> [EMAIL PROTECTED]> wrote:
>
> > I have created:
> >  HBASE-9412
> >  HBASE-9413
> >  HBASE-9414
> >
> > I have not been able yet to reproduce the ZK error. I'm trying.
> >
> >
> Is it when you have a shell connection and then kill it?
>
>
>
> > Last, I tried, with no success, to set the log level to WARN to remove all
> > DEBUG and INFO logs. Setting it to WARN removes the DEBUG lines, but I
> > keep getting the INFO ones. Seems that something is setting the log level
> > somewhere else, or it's not being read.
> >
> > Here is my log4j.properties file. I removed all the custom log levels
> > to set WARN for org.apache. And it's still showing INFO...
> >
> >
>
> You did it by editing log4j and restarting or in the UI?  I think the UI
> log level setting is broke.... (new issue!)
>
> Thanks for trying it out JMS,
>
> So everything is slower in 0.96?
> St.Ack
>
>
>
> > JM
> >
> >
> > # Define some default values that can be overridden by system properties
> > hbase.root.logger=WARN,console
> > hbase.security.logger=WARN,console
> > hbase.log.dir=.
> > hbase.log.file=hbase.log
> >
> > # Define the root logger to the system property "hbase.root.logger".
> > log4j.rootLogger=${hbase.root.logger}
> >
> > # Logging Threshold
> > log4j.threshold=ALL
> >
> > #
> > # Daily Rolling File Appender
> > #
> > log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
> > log4j.appender.DRFA.File=${hbase.log.dir}/${hbase.log.file}
> >
> > # Rollover at midnight
> > log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
> >
> > # 30-day backup
> > #log4j.appender.DRFA.MaxBackupIndex=30
> > log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
> >
> > # Pattern format: Date LogLevel LoggerName LogMessage
> > log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: