Sqoop >> mail # user >> Re: Export in free form And Mixed update/insert


Re: Export in free form And Mixed update/insert
Hi YouPeng,
Sqoop's Oracle connector does support upsert mode [1]. Would you mind sharing with us the entire Sqoop command line and the log generated with the --verbose parameter?

Jarcec

Links:
1: https://github.com/apache/sqoop/blob/trunk/src/java/org/apache/sqoop/manager/OracleManager.java#L401
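[Editor's note: a hedged sketch of what such an upsert export might look like with Sqoop 1's generic syntax; the connection string, credentials, table name, and HDFS path below are placeholders, not values from this thread.]

```shell
# Hypothetical upsert export: rows whose ID matches an existing row are
# updated, all others are inserted (--update-mode allowinsert).
# Connection string, credentials, table, and path are placeholders.
sqoop export \
  --connect jdbc:oracle:thin:@dbhost:1521:ORCL \
  --username scott \
  --password tiger \
  --table MYTABLE \
  --export-dir /user/hadoop/mydata \
  --update-key ID \
  --update-mode allowinsert \
  --verbose
```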

On Wed, May 22, 2013 at 06:20:06PM +0800, YouPeng Yang wrote:
> Hi Jarek Jarcec Cecho
>
>   I also have found the tip. Thank you.
>
>   Here comes another question.
>
>   I find that Sqoop 1.4 supports updating rows if they already exist in
> the database, or inserting them if they do not, by using
> --update-key <col-name> --update-mode <mode>.
>
>   However, I got the error when I tried it:
>   ERROR tool.ExportTool: Error during export: Mixed update/insert is not
> supported against the target database yet
>
>  Note: the database is Oracle.
>
>  1. Does it support only MySQL? I found nothing in the docs that hints
> at this.
>
>  2. Is there any solution that fulfills my need to update rows if they
> already exist in the database, or insert them if they do not?
>
>
>
> Thank you.
>
>
> Regards.
>
>
> 2013/5/22 Jarek Jarcec Cecho <[EMAIL PROTECTED]>
>
> > Hi YouPeng,
> > Sqoop 1 does not support a custom insert query when exporting data from
> > HDFS. I think that in your use case you can use the --columns parameter
> > to specify which columns are present on HDFS and in what order, for example:
> >
> >   sqoop ... --columns ID,TIMEID,COLA,COLB
> >
> > Jarcec
> >
> > On Wed, May 22, 2013 at 02:49:04PM +0800, YouPeng Yang wrote:
> > > Hi
> > >   I want to export data from HDFS to an Oracle database with
> > > sqoop-1.4 (sqoop-1.4.1-cdh4.1.2). However, the columns in HDFS and in
> > > the Oracle table do not exactly match.
> > >
> > >  For example, data on HDFS:
> > > ----------------------------------------
> > > | ID | TIMEID       | COLA | COLB |
> > > ----------------------------------------
> > > | 6  | 201305221335 | 0    | 20   |
> > > ----------------------------------------
> > >
> > > the Oracle table:
> > > -----------------------------------------------
> > > | ID | TIMEID       | COLC | COLB | COLA |
> > > -----------------------------------------------
> > > | 7  | 201305221335 | kk   | 20   | 1    |
> > > -----------------------------------------------
> > > Note: Oracle has an additional column COLC, and the column order differs.
> > >
> > >
> > > I noticed the sqoop export options:
> > >   --export-dir   HDFS source path for the export
> > >   --table        Table to populate.
> > > Neither seems to allow exporting the data to Oracle in free form, the
> > > way Free-form Query Imports do with the --query argument.
> > >
> > > Can I achieve that goal?
> > >
> > > Thanks very much
> > >
> > > Regards
> >
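[Editor's note: a hedged sketch combining the advice in this thread for the schema above. The --columns list names the HDFS fields in order, so COLC is simply omitted; an omitted column must be nullable or have a default in Oracle. Connection string, credentials, table name, and path are placeholders.]

```shell
# Hypothetical export for the thread's schema: HDFS holds ID,TIMEID,COLA,COLB
# while the Oracle table also has COLC. --columns maps each HDFS field to the
# named table column, in order; COLC is left untouched by the export.
sqoop export \
  --connect jdbc:oracle:thin:@dbhost:1521:ORCL \
  --username scott \
  --password tiger \
  --table MYTABLE \
  --export-dir /user/hadoop/mydata \
  --columns ID,TIMEID,COLA,COLB
```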