Re: Import data to HDFS using Sqoop2
Hi Yanting,

It seems like you did not specify the 'partition column' for the job.
Generally, the primary key of the table is a good choice for the
'partition column'.

You could use 'update job -jid 3' to update the job configuration.
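For reference, 'update job' walks through the job's configuration
interactively. A minimal sketch of such a session follows; the prompts shown
are the generic JDBC connector's database-configuration fields as they appear
in the Sqoop 1.99.x shell, and the table and column names are placeholders,
not values taken from this thread:

  sqoop:000> update job -jid 3
  ...
  Database configuration

  Schema name: public
  Table name: your_table            (placeholder)
  Table SQL statement:
  Table column names:
  Partition column name: id         (placeholder; use the table's primary key)
  Nulls in partition column:
  Boundary query:
  ...
  Job was successfully updated with status FINE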

Mengwei
On Wed, Sep 4, 2013 at 9:37 PM, Yanting Chen <[EMAIL PROTECTED]> wrote:

> Abraham,
>
> Thanks for your answer. I reviewed my database.
> I think the database name is invoice and the schema name is public, just
> like the picture below.
> http://imgur.com/ns0iNLi
>
> So, I changed the schema name to a new value, "public".
> Then I ran the job and got a different error.
>
> Status: FAILURE_ON_SUBMIT
> Creation date: 2013-09-05 09:30:44 CST
> Last update date: 2013-09-05 09:30:44 CST
> Exception: org.apache.sqoop.common.SqoopException: GENERIC_JDBC_CONNECTOR_0011:The type is not supported - 12
> Stack trace: org.apache.sqoop.common.SqoopException: GENERIC_JDBC_CONNECTOR_0011:The type is not supported - 12
>   at org.apache.sqoop.connector.jdbc.GenericJdbcImportPartitioner.getPartitions(GenericJdbcImportPartitioner.java:87)
>   at org.apache.sqoop.connector.jdbc.GenericJdbcImportPartitioner.getPartitions(GenericJdbcImportPartitioner.java:32)
>   at org.apache.sqoop.job.mr.SqoopInputFormat.getSplits(SqoopInputFormat.java:71)
>   at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:452)
>   at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:469)
>   at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:366)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1269)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1266)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1266)
>   at org.apache.sqoop.submission.mapreduce.MapreduceSubmissionEngine.submit(MapreduceSubmissionEngine.java:265)
>   at org.apache.sqoop.framework.FrameworkManager.submit(FrameworkManager.java:480)
>   at org.apache.sqoop.handler.SubmissionRequestHandler.submissionSubmit(SubmissionRequestHandler.java:112)
>   at org.apache.sqoop.handler.SubmissionRequestHandler.handleActionEvent(SubmissionRequestHandler.java:98)
>   at org.apache.sqoop.handler.SubmissionRequestHandler.handleEvent(SubmissionRequestHandler.java:68)
>   at org.apache.sqoop.server.v1.SubmissionServlet.handlePostRequest(SubmissionServlet.java:44)
>   at org.apache.sqoop.server.SqoopProtocolServlet.doPost(SqoopProtocolServlet.java:63)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:637)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
>   at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
>   at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
>   at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
>   at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
>   at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
>   at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
>   at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
>   at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
>   at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859)
>   at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:602)
>   at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
>   at java.lang.Thread.run(Thread.java:724)
>
> Also, I tried removing the schema name, but I got the same error as above.
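
A note on the error code: the trailing "12" in GENERIC_JDBC_CONNECTOR_0011 is
a java.sql.Types constant, and 12 is java.sql.Types.VARCHAR, which suggests
the partition column Sqoop ended up with is a string column; the generic JDBC
connector's partitioner in these 1.99.x releases appears to support only
numeric and date/time partition columns. A quick way to check the column
types, as a sketch assuming the database is PostgreSQL and using a placeholder
table name:

  invoice=# SELECT column_name, data_type
  invoice-#   FROM information_schema.columns
  invoice-#  WHERE table_schema = 'public'
  invoice-#    AND table_name = 'your_table';  -- 'your_table' is a placeholder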
>
> On Thu, Sep 5, 2013 at 6:11 AM, Abraham Elmahrek <[EMAIL PROTECTED]> wrote:
>
> Yanting,
>>
>> I'm sorry, I'm a bit confused. The database you are using here is called