sqoop 1.99.2 import error


Κωνσταντίνος Αρετάκης    2013-07-25, 16:58
Mengwei Ding             2013-07-25, 17:06
Κωνσταντίνος Αρετάκης    2013-07-25, 17:13
Mengwei Ding             2013-07-25, 17:20
Mengwei Ding             2013-07-25, 17:09
Κωνσταντίνος Αρετάκης    2013-07-25, 17:21
Mengwei Ding             2013-07-25, 17:40
Re: sqoop 1.99.2 import error
Yes it is text.

K.A

On 25 Jul 2013, at 20:40, Mengwei Ding <[EMAIL PROTECTED]> wrote:

You are welcome, Sir.

By the way, for this new exception, could you provide more detail about the
column type? It's 'text', not a large 'varchar', right? Thanks.

Best,
Mengwei
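
A note on the 'text' vs. 'varchar' question above: what matters to the connector is the java.sql.Types code the JDBC driver reports for the column, and many drivers report TEXT columns as LONGVARCHAR (-1) rather than VARCHAR (12). A minimal sketch for checking what your driver reports; the JDBC URL, credentials, table, and column names are hypothetical:

    import java.sql.Connection;
    import java.sql.DatabaseMetaData;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Types;

    public class PartitionColumnTypeCheck {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details; substitute your own.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://dbhost/mydb", "user", "secret")) {
                DatabaseMetaData meta = conn.getMetaData();
                // Ask the driver what it reports for table "mytable", column "mycolumn".
                try (ResultSet rs = meta.getColumns(null, null, "mytable", "mycolumn")) {
                    while (rs.next()) {
                        int sqlType = rs.getInt("DATA_TYPE");    // java.sql.Types constant
                        String name = rs.getString("TYPE_NAME"); // driver-specific type name
                        System.out.printf("%s reported as java.sql.Types code %d%n",
                                name, sqlType);
                        if (sqlType == Types.LONGVARCHAR) {      // LONGVARCHAR == -1
                            System.out.println(
                                "TEXT-like column: not usable as a partition column here.");
                        }
                    }
                }
            }
        }
    }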
On Thu, Jul 25, 2013 at 10:21 AM, Κωνσταντίνος Αρετάκης <[EMAIL PROTECTED]> wrote:

> Thanks a lot.
>
> I did provide a partition column at first, but I got the following error.
> That column was of type text.
>
> I then provided another column of type int and it worked fine,
> so my guess is that I specified a partition column of an unsupported type.
> Thanks again!!!
>
>
>
> Exception: org.apache.sqoop.common.SqoopException: GENERIC_JDBC_CONNECTOR_0011:The type is not supported - -1
> Stack trace: org.apache.sqoop.common.SqoopException: GENERIC_JDBC_CONNECTOR_0011:The type is not supported - -1
>   at org.apache.sqoop.connector.jdbc.GenericJdbcImportPartitioner.getPartitions(GenericJdbcImportPartitioner.java:87)
>   at org.apache.sqoop.connector.jdbc.GenericJdbcImportPartitioner.getPartitions(GenericJdbcImportPartitioner.java:32)
>   at org.apache.sqoop.job.mr.SqoopInputFormat.getSplits(SqoopInputFormat.java:71)
>   at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1024)
>   at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1041)
>   at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
>   at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:959)
>   at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:912)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
>   at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:912)
>   at org.apache.hadoop.mapreduce.Job.submit(Job.java:500)
>   at org.apache.sqoop.submission.mapreduce.MapreduceSubmissionEngine.submit(MapreduceSubmissionEngine.java:265)
>   at org.apache.sqoop.framework.FrameworkManager.submit(FrameworkManager.java:467)
>   at org.apache.sqoop.handler.SubmissionRequestHandler.submissionSubmit(SubmissionRequestHandler.java:112)
>   at org.apache.sqoop.handler.SubmissionRequestHandler.handleActionEvent(SubmissionRequestHandler.java:98)
>   at org.apache.sqoop.handler.SubmissionRequestHandler.handleEvent(SubmissionRequestHandler.java:68)
>   at org.apache.sqoop.server.v1.SubmissionServlet.handlePostRequest(SubmissionServlet.java:44)
>   at org.apache.sqoop.server.SqoopProtocolServlet.doPost(SqoopProtocolServlet.java:63)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:637)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
>   at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
>   at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
>   at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
>   at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
>   at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
>   at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
>   at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
>   at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
>   at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:861)
>   at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:606)
>   at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
>   at java.lang.Thread.run(Thread.java:724)
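
The "-1" in the message above is the java.sql.Types code the driver reported for the partition column; java.sql.Types.LONGVARCHAR is -1, which is how TEXT columns are commonly surfaced. A minimal sketch of the kind of type gate that GenericJdbcImportPartitioner.getPartitions applies; this is an illustration under assumptions, not the actual Sqoop source, and the exact set of accepted types shown is a guess:

    import java.sql.Types;

    public class PartitionTypeGate {
        // Illustrative only, not the actual GenericJdbcImportPartitioner code.
        // The partitioner must compute evenly spaced boundaries over the column's
        // MIN/MAX range, so only orderable, arithmetic-friendly types pass.
        static void checkPartitionColumnType(int sqlType) {
            switch (sqlType) {
                case Types.TINYINT:
                case Types.SMALLINT:
                case Types.INTEGER:
                case Types.BIGINT:
                case Types.NUMERIC:
                case Types.DECIMAL:
                case Types.REAL:
                case Types.FLOAT:
                case Types.DOUBLE:
                case Types.DATE:
                case Types.TIME:
                case Types.TIMESTAMP:
                    return; // supported: the range can be split numerically
                default:
                    // Reproduces the shape of the message above; -1 is Types.LONGVARCHAR.
                    throw new RuntimeException("The type is not supported - " + sqlType);
            }
        }

        public static void main(String[] args) {
            checkPartitionColumnType(Types.INTEGER);     // passes (code 4)
            checkPartitionColumnType(Types.LONGVARCHAR); // throws with "- -1"
        }
    }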
>
>
>
> On Thu, Jul 25, 2013 at 8:09 PM, Mengwei Ding <[EMAIL PROTECTED]> wrote:
>
>> Hi Konstantinos,
>>
>> Basically, from the exception itself, I would guess that you did not
>> specify a partition column when creating the job. But, still, providing more
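
As background on why the int column succeeds where text fails: the generic JDBC partitioner splits the partition column's MIN/MAX range into one sub-range per mapper, which requires a type it can do arithmetic on. A minimal sketch of that range-splitting idea, with a hypothetical column name and bounds; this is not the Sqoop implementation:

    public class IntegerRangeSplitter {
        public static void main(String[] args) {
            // Hypothetical bounds, as if from: SELECT MIN(id), MAX(id) FROM mytable
            long min = 1, max = 1000000;
            int partitions = 4; // one predicate per map task
            long step = (max - min) / partitions;
            long lower = min;
            for (int i = 0; i < partitions; i++) {
                boolean last = (i == partitions - 1);
                long upper = last ? max : lower + step;
                // Each mapper gets a disjoint slice of the id range.
                System.out.printf("id >= %d AND id %s %d%n",
                        lower, last ? "<=" : "<", upper);
                lower = upper;
            }
        }
    }

A text column offers no such numeric range to step through, which is why the partitioner rejects it up front.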