HBase user mailing list: about HBase export


Ally Gladstone 2013-03-04, 10:48
Re: about HBase export
Can you check the region server logs to see whether there were any errors during the export or import? (A quick way to compare row counts is sketched below.)

Thanks
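
One way to narrow it down is to compare row counts at each stage. HBase ships a RowCounter MapReduce job in the same jar as Export; this is a sketch assuming the bundled driver registers the rowcounter program in your version, with <tablename> and <import_tablename> as placeholders:

  # Count rows in the source table before the export
  $HADOOP_HOME/bin/hadoop jar $HBASE_HOME/hbase-0.90.3-cdh3u1.jar rowcounter <tablename>

  # Count rows in the destination table after the import
  $HADOOP_HOME/bin/hadoop jar $HBASE_HOME/hbase-0.90.3-cdh3u1.jar rowcounter <import_tablename>

Comparing these counts with the Export job's "Map input records" counter should show whether rows go missing during the export or during the import.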

On Mar 4, 2013, at 2:48 AM, Ally Gladstone <[EMAIL PROTECTED]> wrote:

> Hi,
>
> I'm running a Hadoop and HBase cluster on several machines:
>  Hadoop 0.20.2 (specifically, hadoop-0.20.2-cdh3u1)
>  HBase 0.90.3 (specifically, hbase-0.90.3-cdh3u1)
>
> I tried to export an HBase table with the following command.
> The job seemed to finish successfully, but the table was not exported completely.
>
> command:
> $HADOOP_HOME/bin/hadoop jar $HBASE_HOME/hbase-0.90.3-cdh3u1.jar export \
> <tablename> <hdfs_outputdir>
>
> After the export, I imported the exported data into an empty table, but
> many rows were missing.
>
>
> When the table was small (the exported data was a few GB), the export
> completed and all rows could be imported, but when the table became large
> (the exported data was around 10 GB or more), the export seemed to be
> incomplete and not all rows could be imported.
>
> Is there an upper size limit for export, any required settings, or any
> known bugs? (A usage sketch follows this message.)
>
> Thanks.
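
For reference, Export in this HBase version copies only the newest version of each cell unless a versions argument is given, and Import reads the resulting sequence files back into an existing table. A sketch of the round trip, assuming the driver accepts Hadoop generic options such as -D (the scanner caching value is illustrative):

  # Optional trailing args: <versions> (defaults to 1), <starttime>, <endtime>
  $HADOOP_HOME/bin/hadoop jar $HBASE_HOME/hbase-0.90.3-cdh3u1.jar export \
    -D hbase.client.scanner.caching=100 \
    <tablename> <hdfs_outputdir> [<versions> [<starttime> [<endtime>]]]

  # Import the exported sequence files into a (pre-created) table
  $HADOOP_HOME/bin/hadoop jar $HBASE_HOME/hbase-0.90.3-cdh3u1.jar import <tablename> <hdfs_inputdir>

If long scans on large regions are timing out, hbase.regionserver.lease.period on the region servers is a commonly suggested setting to check, though lease expirations normally show up as errors in the region server logs rather than as silent row loss.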