Sqoop user mailing list: Fwd: sqoop incremental import fails - Violation of unique constraint SQOOP_SESSIONS_UNQ


Re: Fwd: sqoop incremental import fails - Violation of unique constraint SQOOP_SESSIONS_UNQ
Hi Suhas,
it seems that you've shared logs from multiple Sqoop executions. Would you mind providing the log from a single Sqoop execution, run with the parameter --verbose, in which this issue occurs?
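
For example, one way to capture that is to re-run the saved job once with --verbose appended and tee the output to a file (a sketch only; the log path is an arbitrary placeholder, and arguments after the bare -- are passed through to the underlying import tool):

    # Re-run the saved job with verbose logging and capture all output
    sqoop job --exec signup_log -- --verbose 2>&1 | tee /tmp/sqoop-verbose.log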

Jarcec

On Tue, Oct 15, 2013 at 11:38:39AM -0700, Suhas Satish wrote:
> Attached.
>
> Cheers,
> Suhas.
>
>
> On Tue, Oct 15, 2013 at 10:08 AM, Jarek Jarcec Cecho <[EMAIL PROTECTED]> wrote:
>
> > Would you mind also sharing with us the entire Sqoop log generated with
> > the parameter --verbose?
> >
> > Jarcec
> >
> > On Mon, Oct 14, 2013 at 09:04:39AM -0700, Suhas Satish wrote:
> > > This is the concrete issue in Sqoop 1.4.3 - the following command fails
> > > to do an incremental import and tries to do a full import instead.
> > >
> > > sqoop job --create signup_log --import --connect
> > > jdbc:mysql://mydb/u1 --table signup_log --username u1
> > > --password <password> --hive-import --hive-table signup_log
> > > --incremental append --check-column sid --last-value 3276 --direct
> > >
> > > Cheers,
> > > Suhas.
> > >
> > >
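
One detail worth checking against the Sqoop 1.4.3 user guide: the documented saved-job syntax separates the job definition from the tool name with a bare --, i.e. "-- import" rather than "--import" (a sketch of the command above rewritten that way; all values are unchanged from the thread):

    sqoop job --create signup_log -- import --connect jdbc:mysql://mydb/u1 \
      --table signup_log --username u1 --password <password> \
      --hive-import --hive-table signup_log \
      --incremental append --check-column sid --last-value 3276 --direct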
> > > On Mon, Oct 14, 2013 at 8:04 AM, Jarek Jarcec Cecho <[EMAIL PROTECTED]> wrote:
> > >
> > > > Hi Suhas,
> > > > to the best of my knowledge, direct import from MySQL to Hive should be
> > > > working in incremental mode. Is there any concrete issue that you are facing?
> > > >
> > > > Jarcec
> > > >
> > > > On Fri, Oct 11, 2013 at 02:46:10PM -0700, Suhas Satish wrote:
> > > > > I found the following bugs related to using incremental import and
> > > > > direct mode together:
> > > > >
> > > > >   * [SQOOP-1078] - incremental import from database in direct mode
> > > > >   * [SQOOP-976] - Incorrect SQL when incremental criteria is text column
> > > > >
> > > > > Can you share the correct syntax and sequence of switches for using
> > > > > incremental import into Hive with direct-mode import from MySQL
> > > > > databases?
> > > > >
> > > > > Thanks,
> > > > > Suhas.
> > > > >
> > > > >
> > > > > ---------- Forwarded message ----------
> > > > > From: Suhas Satish <[EMAIL PROTECTED]>
> > > > > Date: Thu, Oct 10, 2013 at 1:15 PM
> > > > > Subject: Re: sqoop incremental import fails - Violation of unique constraint SQOOP_SESSIONS_UNQ
> > > > > To: user <[EMAIL PROTECTED]>
> > > > >
> > > > >
> > > > > Sqoop 1.4.3
> > > > >
> > > > > sqoop job --create signup_log --import --connect
> > > > > jdbc:mysql://mydb/u1 --table signup_log --username u1
> > > > > --password <password> --hive-import --hive-table signup_log
> > > > > --incremental append --check-column sid --last-value 3276 --direct
> > > > >
> > > > > What I notice is that Sqoop is not doing an incremental import but is
> > > > > trying to do a full import from the beginning, which then fails because
> > > > > the MapReduce output directory already exists on the Hadoop file system.
> > > > > Is there a bug in Sqoop's command parsing for incremental import when the
> > > > > above parameters are used together?
> > > > >
> > > > > 13/10/04 00:10:36 ERROR tool.ImportTool: Encountered IOException running
> > > > > import job: org.apache.hadoop.mapred.FileAlreadyExistsException: Output
> > > > > directory signup_log already exists
> > > > >     at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:134)
> > > > >     at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:926)
> > > > >     at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:885)
> > > > >     at java.security.AccessController.doPrivileged(Native Method)
> > > > >     at javax.security.auth.Subject.doAs(Subject.java:415)
> > > > >     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
> > > > >     at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:885)
> > > > >     at org.apache.hadoop.mapreduce.Job.submit(Job.java:573)
> > > > >     at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java