HBase user mailing list: regionserver died when using Put to insert data


Re: regionserver died when using Put to insert data
Hi Jia,
If you want to load 77 GB of data, you can consider the approach below:
1. Create the table with pre-split regions beforehand.
2. Write an MR program to generate HFiles according to the table's region
boundaries (HFileOutputFormat; refer to the bulk-load import tools); a
sketch follows below.
3. Incrementally load the HFiles into the regions.
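
To make steps 2 and 3 concrete, here is a minimal sketch, assuming a
0.94-era HBase client API: the table name ("my_table"), the column
family/qualifier ("f"/"q"), the CSV-style input parsing, and the paths are
placeholders, not details from this thread.

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat;
    import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class BulkLoadSketch {

      // Hypothetical mapper: parses "rowkey,value" text lines into Puts.
      static class CsvToPutMapper
          extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {
        @Override
        protected void map(LongWritable offset, Text line, Context ctx)
            throws IOException, InterruptedException {
          String[] parts = line.toString().split(",", 2);
          byte[] row = Bytes.toBytes(parts[0]);
          Put put = new Put(row);
          put.add(Bytes.toBytes("f"), Bytes.toBytes("q"),
              Bytes.toBytes(parts[1]));
          ctx.write(new ImmutableBytesWritable(row), put);
        }
      }

      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = new Job(conf, "hfile-bulkload");
        job.setJarByClass(BulkLoadSketch.class);
        job.setMapperClass(CsvToPutMapper.class);
        job.setMapOutputKeyClass(ImmutableBytesWritable.class);
        job.setMapOutputValueClass(Put.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));

        HTable table = new HTable(conf, "my_table"); // placeholder name
        // Step 2: write HFiles that line up with the table's current region
        // boundaries; this also wires in the reducer and the
        // total-order partitioner.
        HFileOutputFormat.configureIncrementalLoad(job, table);
        Path hfiles = new Path(args[1]);             // staging directory
        FileOutputFormat.setOutputPath(job, hfiles);

        if (job.waitForCompletion(true)) {
          // Step 3: move the generated HFiles into the serving regions.
          new LoadIncrementalHFiles(conf).doBulkLoad(hfiles, table);
        }
        table.close();
      }
    }

Because the HFiles are moved into place rather than written through the
normal Put path, this sidesteps the flush-and-split pressure discussed in
the rest of the thread.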
On Aug 14, 2013 6:50 PM, "Jean-Marc Spaggiari" <[EMAIL PROTECTED]>
wrote:

> Hi Jia,
>
> That's just how HBase works ;)
>
> When a region grows bigger than the configured maximum, HBase will split it.
> The default is 10 GB, but you can configure that per table.
>
> So with 77 GB you should have at least 8 regions.  For performance, don't
> forget to pre-split before you load; see the sketch just below...
>
> JM
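
A minimal sketch of the pre-split-at-create-time advice above, again
assuming the 0.94-era client API; the table name, column family, key range,
and region count are placeholders. setMaxFileSize is the per-table
counterpart of the cluster-wide hbase.hregion.max.filesize default JM
mentions.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.client.HBaseAdmin;
    import org.apache.hadoop.hbase.util.Bytes;

    public class PreSplitSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);

        HTableDescriptor desc = new HTableDescriptor("my_table"); // placeholder
        desc.addFamily(new HColumnDescriptor("f"));               // placeholder
        // Per-table split threshold in bytes (here 10 GB, the default).
        desc.setMaxFileSize(10L * 1024 * 1024 * 1024);

        // Create the table already split into 8 regions, partitioning the
        // key space evenly between the two boundaries.
        admin.createTable(desc,
            Bytes.toBytes("00000000"),  // lower split boundary
            Bytes.toBytes("ffffffff"),  // upper split boundary
            8);
        admin.close();
      }
    }

Picking split points from a sample of real row keys, rather than an even
range, avoids hotspotting a single region during the load.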
> On 2013-08-13 22:16, <[EMAIL PROTECTED]> wrote:
>
> > Hi Jean-Marc,
> >
> > HDFS is running all the time; I guess HBase performs splits while the
> > large Put load is in progress, and the original HFile gets split into
> > new HFiles?
> >
> > Is that possible?
> >
> >
> > Hi Jia,
> >
> > How is your HDFS running?
> >
> > "Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException):
> > File
> > /apps/hbase/data/lbc_zte_1_imei_index/4469e6b0500bf3f5ed0ac1247d249537/.tmp/e7bb489662344b26bc6de1e72c122eec
> > could only be replicated to 0 nodes instead of minReplication (=1).
> > There are 3 datanode(s) running and no node(s) are excluded in this
> > operation."
> >
> > Sounds like there are some issues on the datanodes. Have you checked
> > their logs?
> >
>
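
For context on the error quoted above: the HDFS client throws "could only
be replicated to 0 nodes" when no datanode can accept the block even though
datanodes are registered. Common causes are full datanode disks, datanodes
unreachable from the client, or an exhausted dfs.datanode.max.xcievers
limit. Running "hadoop dfsadmin -report" on the cluster shows per-datanode
liveness and remaining capacity, and is a reasonable first check alongside
the datanode logs.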