Sqoop user mailing list: Issue in Sqoop Export from HDFS (HBase data) to MySQL


Vaibhav V Nirkhe 2013-10-09, 12:49
Re: Issue in Sqoop Export from HDFS(Hbase data) to MySql
User,

Exporting directly from HBase is not currently supported in Sqoop.

What you can do is extract the HBase data into plain files on HDFS first,
then use Sqoop to transfer those files into MySQL.

-Abe
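[Editor's note] The two-step workaround described above might look like the following sketch. The Hive table name (cnt_report_hbase) and the text dump path are hypothetical placeholders, not from the thread; the key point is that Sqoop export reads delimited text files, so the HBase data has to be dumped as text (for example via Hive's HBase storage handler), not as the SequenceFiles produced by HBase's Export tool.

```shell
# Hypothetical sketch of the two-step flow. DRY_RUN defaults to "echo" so
# the commands are printed rather than executed; unset it on a real cluster.
DRY_RUN=${DRY_RUN:-echo}

# Step 1 (assumption: a Hive external table named cnt_report_hbase has been
# mapped onto the HBase table): dump the rows as delimited text on HDFS.
# Sqoop export cannot read the SequenceFiles written by HBase's Export tool.
$DRY_RUN hive -e "INSERT OVERWRITE DIRECTORY '/user/hduser/esr_text' SELECT customer_id, month FROM cnt_report_hbase"

# Step 2: export the plain-text dump into MySQL, mirroring the command from
# the thread but pointing --export-dir at the text directory.
$DRY_RUN sqoop export \
  --connect jdbc:mysql://localhost:3306/OMS --username root -P \
  --table CNT_REPORT_DATA --columns CUSTOMER_ID,MONTH \
  --export-dir /user/hduser/esr_text -m 1
```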
On Wed, Oct 9, 2013 at 5:49 AM, Vaibhav V Nirkhe <
[EMAIL PROTECTED]> wrote:

>  Hi,
>         I am using Sqoop 1.4.3 on Hadoop 1.2.1 and trying to export HBase
> data placed in HDFS to MySQL; however, I am getting the following
> ClassCastException:
>
> I am using the following command:
>
> sqoop export --connect jdbc:mysql://localhost:3306/OMS --username root -P
> --table CNT_REPORT_DATA --columns CUSTOMER_ID,MONTH  --export-dir
> /user/hduser/esr_data --verbose -m 1
>
> I guess Sqoop is trying to fetch each record by its key and is not able to
> cast the key:
>
> java.lang.ClassCastException:
> org.apache.hadoop.hbase.io.ImmutableBytesWritable cannot be cast to
> org.apache.hadoop.io.LongWritable
>     at
> org.apache.sqoop.mapreduce.CombineShimRecordReader.getCurrentKey(CombineShimRecordReader.java:95)
>     at
> org.apache.sqoop.mapreduce.CombineShimRecordReader.getCurrentKey(CombineShimRecordReader.java:38)
>     at
> org.apache.sqoop.mapreduce.CombineFileRecordReader.getCurrentKey(CombineFileRecordReader.java:79)
>     at
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.getCurrentKey(MapTask.java:503)
>     at
> org.apache.hadoop.mapreduce.MapContext.getCurrentKey(MapContext.java:57)
>     at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
>     at
> org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
>     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
>     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:364)
>     at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
>     at org.apache.hadoop.mapred.Child.main(Child.java:249)
>
>
> I don't understand why the key is always expected to be LongWritable
> here. Please advise.
>
>
>
> Thanks in advance,
>
>
>
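[Editor's note] On the question of why the key is expected to be LongWritable: Sqoop's export job reads its input splits with Hadoop's text input format, which keys every record by its byte offset in the file as a LongWritable, whereas the files here appear to be SequenceFiles keyed by HBase's ImmutableBytesWritable row keys, hence the failed cast. A minimal Python illustration of the offset-keyed contract (not Sqoop's actual code, just a sketch of the analogous behavior):

```python
def text_input_records(path):
    """Yield (byte_offset, line) pairs, mimicking the (LongWritable, Text)
    key/value contract of Hadoop's TextInputFormat, which is what Sqoop's
    export mapper expects its input reader to produce. A SequenceFile from
    HBase Export is keyed by row keys instead, hence the ClassCastException."""
    offset = 0
    with open(path, "rb") as f:
        for line in f:
            # The key is where the record *starts* in the file, not its content.
            yield offset, line.rstrip(b"\n").decode()
            offset += len(line)
```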
Vaibhav V Nirkhe 2013-10-09, 17:12
Abraham Elmahrek 2013-10-09, 17:29