Re: Constant error when putting large data into HBase
Hi Ed,

You need to be more precise, I am afraid. First of all, what does "some node always dies" mean? Is the process gone? Which process is gone?
And the "error" you pasted is a WARN-level log message that *might* indicate some trouble, but is *not* the reason the "node has died". Please elaborate.

Also consider posting the last few hundred lines of the process logs to pastebin so that someone can take a look at them.

Thanks,
Lars
On Dec 1, 2011, at 9:48 AM, edward choi wrote:

> Hi,
> I've had a problem that has been killing me for some days now.
> I am using the CDH3 update 2 versions of Hadoop and HBase.
> When I do a large amount of bulk loading into HBase, some node always dies.
> It's not just one particular node; eventually, one of the many nodes fails
> to serve.
>
> I set 4 GB of heap space for the master and the regionservers. I monitored
> the processes, and whenever a node fails, it has not yet used up all of its
> heap. So it is not a heap space problem.
>
> Below is what I get when I perform a bulk put using MapReduce.
>
> ------------------------------------------------------------------------
> 11/12/01 17:17:20 INFO mapred.JobClient:  map 100% reduce 100%
> 11/12/01 17:18:31 INFO mapred.JobClient: Task Id : attempt_201111302113_0034_r_000013_0, Status : FAILED
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: servers with issues: lp171.etri.re.kr:60020,
>        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1239)
>        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchOfPuts(HConnectionManager.java:1253)
>        at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:828)
>        at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:684)
>        at org.apache.hadoop.hbase.client.HTable.put(HTable.java:669)
>        at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:127)
>        at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:82)
>        at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:514)
>        at org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
>        at etri.qa.mapreduce.PostProcess$PostPro
> attempt_201111302113_0034_r_000013_0: 20111122
> 11/12/01 17:18:36 INFO mapred.JobClient: Task Id : attempt_201111302113_0034_r_000013_1, Status : FAILED
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: servers with issues: lp171.etri.re.kr:60020,
>        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1239)
>        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchOfPuts(HConnectionManager.java:1253)
>        at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:828)
>        at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:684)
>        at org.apache.hadoop.hbase.client.HTable.put(HTable.java:669)
>        at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:127)
>        at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:82)
>        at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:514)
>        at org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
>        at etri.qa.mapreduce.PostProcess$PostPro
> attempt_201111302113_0034_r_000013_1: 20111122
> 11/12/01 17:18:37 INFO mapred.JobClient:  map 100% reduce 95%
> 11/12/01 17:18:44 INFO mapred.JobClient:  map 100% reduce 96%
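
The stack trace above shows the standard reduce-side write path: TableOutputFormat$TableRecordWriter.write() hands each Put to HTable.put(), which buffers client-side, and HTable.flushCommits() throws RetriesExhaustedWithDetailsException once all retries against the listed region server (lp171.etri.re.kr:60020) are exhausted. Below is a minimal sketch of a job writing through that path, assuming the CDH3u2-era (HBase 0.90) API; the class, table, and column names are illustrative, not taken from the original job.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableOutputFormat;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Reducer;

public class BulkPutSketch {

  // Each reduce() call emits one Put. TableOutputFormat's record writer
  // passes it to HTable.put(), which buffers client-side and flushes via
  // flushCommits() -- the frame where RetriesExhaustedWithDetailsException
  // surfaces when a region server stops responding.
  public static class PutReducer
      extends Reducer<Text, Text, ImmutableBytesWritable, Put> {
    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
        throws IOException, InterruptedException {
      Put put = new Put(Bytes.toBytes(key.toString()));
      for (Text value : values) {
        // "cf" and "q" are placeholder column family/qualifier names.
        put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"),
            Bytes.toBytes(value.toString()));
      }
      context.write(new ImmutableBytesWritable(put.getRow()), put);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = new Job(conf, "bulk-put-sketch");
    job.setJarByClass(BulkPutSketch.class);
    job.setReducerClass(PutReducer.class);
    // Direct reduce output at an HBase table ("mytable" is a placeholder);
    // TableMapReduceUtil.initTableReducerJob() is the usual shortcut for this.
    job.getConfiguration().set(TableOutputFormat.OUTPUT_TABLE, "mytable");
    job.setOutputFormatClass(TableOutputFormat.class);
    job.setOutputKeyClass(ImmutableBytesWritable.class);
    job.setOutputValueClass(Put.class);
    // Input format, mapper, and input paths omitted.
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}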