Re: HBase log grows very fast after stopped hadoop (due to connection exception)
I will try the same scenario on another cluster tomorrow, and will create a JIRA if I can consistently reproduce it.
Demai on the run
On Feb 4, 2014, at 4:38 PM, Stack <[EMAIL PROTECTED]> wrote:
> Logging at the rate you report is obnoxious. We should recognize HDFS is
> gone and backoff some.
> On Tue, Feb 4, 2014 at 2:09 PM, Demai Ni <[EMAIL PROTECTED]> wrote:
>> We are using HBase 0.96.0 (we also saw the same issue on 0.94.x) on a
>> single-node cluster. At some point we stopped Hadoop but kept HBase
>> running. As expected, HBase began to throw the following errors:
>> 2014-02-04 11:05:12,820 ERROR
>> org.apache.hadoop.hbase.executor.EventHandler: Caught throwable while
>> processing event M_META_SERVER_SHUTDOWN
>> Caused by: java.net.ConnectException: Call From
>> bdvm311.svl.ibm.com/188.8.131.52 to bdvm311.svl.ibm.com:9000 failed on
>> connection exception: java.net.ConnectException: Connection refused; For
>> more details see: http://wiki.apache.org/hadoop/ConnectionRefused
>> the whole exception log is pasted here: http://pastebin.com/9jfvfSmA
>> While every error/exception is valid, HBase keeps retrying, and the
>> log files grew to 25 GB within 3 hours.
>> So I am wondering whether there is a configuration to reduce the retry
>> frequency, or maybe we can tweak the code to back off automatically?
>> Thanks for your suggestions.
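
For anyone hitting the same problem in the meantime, one stopgap (a sketch, not a verified fix for the retry storm itself) is to cap on-disk log growth via log4j's RollingFileAppender, which HBase's stock log4j.properties already uses under the appender name RFA; the sizes below are illustrative, and you may also want to look at the client retry settings `hbase.client.retries.number` and `hbase.client.pause`, though whether the server-side shutdown handler honors them in 0.96 is not something I have verified:

```
# log4j.properties sketch: cap HBase log growth at roughly 2.5 GB
# (10 backups x 256 MB). The appender name "RFA" matches the default
# HBase log4j configuration; adjust it if your setup differs.
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hbase.log.dir}/${hbase.log.file}
log4j.appender.RFA.MaxFileSize=256MB
log4j.appender.RFA.MaxBackupIndex=10
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n
```

This only bounds disk usage; it does not reduce the retry frequency, so the underlying backoff behavior still needs a fix in HBase itself.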