Sqoop >> mail # user >> Sqoop Export to Teradata (BatchUpdateException)


Dipesh Kumar Singh 2013-12-02, 18:17
Abraham Elmahrek 2013-12-07, 19:11
Dipesh Kumar Singh 2013-12-11, 07:41
Re: Sqoop Export to Teradata (BatchUpdateException)
Thanks for reporting back!
On Tue, Dec 10, 2013 at 11:41 PM, Dipesh Kumar Singh
<[EMAIL PROTECTED]> wrote:

> Hi Abraham,
>
> I resolved this exception. The root cause was a restriction on the
> length of the table name: my original table name (say,
> dd_TXYZ_ABCDEFG_PL_QWER) was exceeding the 24-character limit. [In the
> post above, I have masked it in the stack trace.]
>
> I tried this with Teradata connector.
>
> Thanks for your help!
>
> --
> D/
>
>
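The root cause Dipesh describes, a table name longer than the connector's limit, is easy to check before launching an export. A minimal sketch, assuming the 24-character limit reported in this thread (the exact limit may differ by connector and Teradata version):

```python
# Pre-flight check for the table-name length limit reported in this
# thread. The 24-character figure is taken from Dipesh's reply; verify
# the actual limit for your connector/Teradata version.
MAX_TABLE_NAME_LEN = 24  # assumption: limit reported above

def table_name_ok(qualified_name: str, limit: int = MAX_TABLE_NAME_LEN) -> bool:
    """Return True if the bare table name fits within the limit.

    Accepts either a bare name or a "DB"."TABLE" qualified name.
    """
    bare = qualified_name.split(".")[-1].strip('"')  # drop the DB qualifier
    return len(bare) <= limit

print(table_name_ok('"DW1_DS_WORK"."dd_TP_BAL_REPT4"'))     # fits (15 chars)
print(table_name_ok("dd_a_very_long_masked_table_name_x"))  # too long
```

Running this over a list of export targets before kicking off the Sqoop jobs would have surfaced the failure without a stack trace.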
> On Sun, Dec 8, 2013 at 12:41 AM, Abraham Elmahrek <[EMAIL PROTECTED]> wrote:
>
>> Hey Dipesh,
>>
>> A few questions for you... what version of Sqoop are you using? It looks
>> like you're using the Cloudera Teradata connector. What version are you
>> using? Could you provide your Sqoop command?
>>
>> If you're having difficulty with the Teradata connector, the generic JDBC
>> connector can be used instead. You should be able to tell Sqoop to use the
>> generic JDBC driver by appending the "driver" option in conjunction with
>> the "batch" option (Teradata needs the batch option) to the end of your
>> command, i.e. "--driver com.teradata.jdbc.TeraDriver --batch". NOTE: The
>> generic JDBC connector will not be as fast as the Teradata connector.
>>
>> Hope this helps,
>> -Abe
>>
>>
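The fallback Abraham describes can be sketched as a command line. All connection details below (host, database, credentials, export directory) are hypothetical placeholders, not values from this thread:

```shell
# Sketch of a Sqoop export forced onto the generic JDBC connector, per
# Abraham's suggestion. Host, user, and paths are made-up placeholders;
# --batch is required when going through the generic JDBC path to Teradata.
sqoop export \
  --connect jdbc:teradata://teradata-host/DATABASE=DW1_DS_WORK \
  --username etl_user -P \
  --table dd_TP_BAL_REPT4 \
  --export-dir /user/etl/bal_rept4 \
  --driver com.teradata.jdbc.TeraDriver \
  --batch
```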
>> On Mon, Dec 2, 2013 at 10:17 AM, Dipesh Kumar Singh <
>> [EMAIL PROTECTED]> wrote:
>>
>>> Hello Users,
>>>
>>>
>>> It is not evident to me what might have gone wrong, leading to this
>>> exception:
>>>
>>> "[Error 1154] [SQLState HY000] A failure occurred while inserting the
>>> batch of rows destined for database table "DW1_DS_WORK"."dd_TP_BAL_REPT4".
>>> Details of the failure can be found in the exception chain that is
>>> accessible with getNextException."
>>>
>>> Can anyone help me resolve this exception? Below is the complete
>>> stack trace.
>>>
>>>
>>> inserting the batch of rows destined for database table
>>> "DW1_DS_WORK"."dd_TP_BAL_REPT1". Details of the failure can be found in the
>>> exception chain that is accessible with getNextException.
>>>         at
>>> com.cloudera.sqoop.teradata.exports.TeradataRecordWriter.write(TeradataRecordWriter.java:133)
>>>         at
>>> com.cloudera.sqoop.teradata.exports.TeradataRecordWriter.write(TeradataRecordWriter.java:27)
>>>         at
>>> org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:531)
>>>         at
>>> org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
>>>         at
>>> com.cloudera.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:82)
>>>         at
>>> com.cloudera.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:40)
>>>         at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
>>>         at
>>> com.cloudera.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper
>>> 13/12/02 09:34:25 INFO mapred.JobClient: Task Id :
>>> attempt_201311141752_15941_m_000003_0, Status : FAILED
>>> java.io.IOException: java.sql.BatchUpdateException: [Teradata JDBC
>>> Driver] [TeraJDBC 13.10.00.35] [Error 1154] [SQLState HY000] A failure
>>> occurred while inserting the batch of rows destined for database table
>>> "DW1_DS_WORK"."dd_TP_BAL_REPT3". Details of the failure can be found in the
>>> exception chain that is accessible with getNextException.
>>>         at
>>> com.cloudera.sqoop.teradata.exports.TeradataRecordWriter.write(TeradataRecordWriter.java:133)
>>>         at
>>> com.cloudera.sqoop.teradata.exports.TeradataRecordWriter.write(TeradataRecordWriter.java:27)
>>>         at
>>> org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:531)
>>>         at
>>> org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
>>>         at
>>> com.cloudera.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:82)
>>>         at
>>> com.cloudera.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:40)
>>>         at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
>>>         at