Sqoop, mail # user - Sqoop2 : The type is not supported - BigDecimal


Re: Sqoop2 : The type is not supported - BigDecimal
Amit 2013-05-21, 11:24
sqoop:000> show job --jid 8
1 job(s) to show:
Job with id 8 and name First job (Created 5/6/13 3:10 PM, Updated 5/6/13 8:21 PM)
Using Connection id 5 and Connector id 1
  Database configuration
    Schema name:
    Table name: business_entities
    Table SQL statement:
    Table column names: Company
    Partition column name: ID
    Boundary query:
  Output configuration
    Storage type: HDFS
    Output format: TEXT_FILE
    Output directory: /landscape/MySQL
  Throttling resources
    Extractors: 1
    Loaders: 1
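
[Editor's note: the job above leaves "Table SQL statement" empty. With the generic JDBC connector, a common workaround for an unsupported column type is to supply a free-form query that casts the DECIMAL column to a type the connector can serialize. This is a hedged sketch, not the poster's configuration: the `price` column is hypothetical, and it assumes the connector substitutes its partition bounds for a `${CONDITIONS}` placeholder.]

```sql
-- Sketch of a "Table SQL statement" workaround (assumptions: a hypothetical
-- DECIMAL column `price`; `${CONDITIONS}` as the connector's partition-bounds
-- placeholder). ID and Company come from the job configuration shown above.
SELECT ID, Company, CAST(price AS CHAR) AS price
  FROM business_entities
 WHERE ${CONDITIONS}
```

Casting to CHAR keeps the full decimal precision as text; `CAST(... AS DOUBLE)` would yield a numeric type at the cost of precision.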

On Tue, May 21, 2013 at 3:17 PM, Jarek Jarcec Cecho <[EMAIL PROTECTED]> wrote:

> Hi sir,
> would you mind sharing output of the "show job" command and the mapreduce
> job configuration XML file?
>
> Jarcec
>
> On Tue, May 14, 2013 at 01:01:52PM +0530, Amit wrote:
> > None, I am using default extractors and loaders.
> >
> > --
> > Thanks,
> > Amit
> >
> >
> > On Mon, May 13, 2013 at 10:41 PM, Jarek Jarcec Cecho <[EMAIL PROTECTED]> wrote:
> >
> > > Hi Amit,
> > > how many extractors and loaders do you have configured in this job?
> > >
> > > Jarcec
> > >
> > > On Mon, May 06, 2013 at 03:31:43PM +0530, Amit wrote:
> > > > Hi,
> > > >
> > > > I am not able to import MySQL tables containing the decimal datatype.
> > > > Am I doing anything wrong? Here is the sqoop log file -
> > > >
> > > > java.lang.Exception: org.apache.sqoop.common.SqoopException: MAPRED_EXEC_0017:Error occurs during extractor run
> > > >   at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:400)
> > > > Caused by: org.apache.sqoop.common.SqoopException: MAPRED_EXEC_0017:Error occurs during extractor run
> > > >   at org.apache.sqoop.job.mr.SqoopMapper.run(SqoopMapper.java:94)
> > > >   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:725)
> > > >   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
> > > >   at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:232)
> > > >   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> > > >   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> > > >   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> > > >   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> > > >   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> > > >   at java.lang.Thread.run(Thread.java:722)
> > > > Caused by: org.apache.sqoop.common.SqoopException: MAPRED_EXEC_0013:Cannot write to the data writer
> > > >   at org.apache.sqoop.job.mr.SqoopMapper$MapDataWriter.writeContent(SqoopMapper.java:142)
> > > >   at org.apache.sqoop.job.mr.SqoopMapper$MapDataWriter.writeArrayRecord(SqoopMapper.java:124)
> > > >   at org.apache.sqoop.connector.jdbc.GenericJdbcImportExtractor.extract(GenericJdbcImportExtractor.java:60)
> > > >   at org.apache.sqoop.connector.jdbc.GenericJdbcImportExtractor.extract(GenericJdbcImportExtractor.java:31)
> > > >   at org.apache.sqoop.job.mr.SqoopMapper.run(SqoopMapper.java:89)
> > > >   ... 9 more
> > > > Caused by: java.io.IOException: org.apache.sqoop.common.SqoopException: MAPRED_EXEC_0012:The type is not supported - java.math.BigDecimal
> > > >   at org.apache.sqoop.job.io.Data.writeArray(Data.java:309)
> > > >   at org.apache.sqoop.job.io.Data.write(Data.java:171)
> > > >   at org.apache.hadoop.io.serializer.WritableSerialization$WritableSerializer.serialize(WritableSerialization.java:100)
> > > >   at org.apache.hadoop.io.serializer.WritableSerialization$WritableSerializer.serialize(WritableSerialization.java:84)
> > > >   at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:1075)
> > > >   at org.apache.hadoop.mapred.MapTask$NewOutputCollector.write(MapTask.java:655)
> > > >   at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)

> > > > Thanks,
> > > > Amit
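
[Editor's note, for readers hitting the same error: the bottom-most cause points at `org.apache.sqoop.job.io.Data.writeArray`, which in Sqoop 1.99.x serialized only a fixed set of Java field types; `java.math.BigDecimal` (what the JDBC driver returns for MySQL DECIMAL) was not among them, hence MAPRED_EXEC_0012. The sketch below is illustrative only: it mimics that type dispatch and shows the usual client-side fix of converting the decimal before it reaches the writer. `toSupported` is a hypothetical helper, not a Sqoop API.]

```java
import java.math.BigDecimal;

// Illustrative sketch (not Sqoop code): writeArray in Data.java dispatched on
// a fixed set of field types and threw MAPRED_EXEC_0012 for anything else.
// The practical fix is to hand the framework a supported type instead.
public class DecimalWorkaround {
    // Convert unsupported BigDecimal fields; pass everything else through.
    static Object toSupported(Object field) {
        if (field instanceof BigDecimal) {
            // Lossless: ship the decimal as text and re-cast it downstream.
            return ((BigDecimal) field).toPlainString();
        }
        return field;
    }

    public static void main(String[] args) {
        Object converted = toSupported(new BigDecimal("1234.5600"));
        System.out.println(converted);                    // 1234.5600
        System.out.println(converted instanceof String);  // true
    }
}
```

The same effect can be achieved on the database side by casting the DECIMAL column to CHAR (lossless) or DOUBLE (lossy) in the import query, so Sqoop never sees a BigDecimal at all.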