Sqoop >> mail # user >> Re: Sqoop from/to Oracle

RE: Sqoop from/to Oracle
Ok, tried > sqoop import  --hive-import -m8 --hive-table schema.table --target-dir schema.table --table schema.table --connect jdbc:oracle:thin:@xxx.xxx.xxx:0000/schema --username someuser --password somepw --split-by id_column --hive-drop-import-delims

I got the same NumberFormatException as reported in my previous email. We're on 1.4.3, so I can't try the HCatalog option without a Sqoop upgrade. I might try an upgrade if I can't get this fixed in the near future, so stay tuned.

Separately, I tried to scale back the import/export to a single field to see if I could isolate the issue further. To that end, I imported ten records of the ID column using:

sqoop import --hive-import -m8 --hive-table schema.table --target-dir schema.table --query "SELECT ID_COLUMN FROM (SELECT * FROM SCHEMA.TABLE ORDER BY ID_COLUMN) WHERE ROWNUM<=10 AND \$CONDITIONS" --hive-drop-import-delims  --connect jdbc:oracle:thin:@xxx.xxx.xxx:0000/schema --username user --password pw --split-by ID_COLUMN

I attempted to export using:

sqoop export  -m8 --table schema.table --export-dir /apps/hive/warehouse/schema/tabledir --connect jdbc:oracle:thin:@xxx.xxx.xxx.com:0000/schema --username username --password pw

Which caused:

               at java.util.AbstractList$Itr.next(AbstractList.java:350)
               at TABLEINFO.__loadFromFields(TABLEINFO.java:827)
               at TABLEINFO.parse(TABLEINFO.java:776)
               at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:83)
               at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:39)
               at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
               at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
               at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:763)
               at org.apache.hadoop.mapred.MapTask.run(MapTask.java:363)
               at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
               at java.security.AccessController.doPrivileged(Native Method)
               at javax.security.auth.Subject.doAs(Subject.java:396)
               at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
               at org.apache.hadoop.mapred.Child.main(Child.java:249)

The records imported were literally just the numbers 1 through 10 in a single column.
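Since even a single all-numeric column fails on export, one hedged guess is a delimiter mismatch: Hive's default field terminator is \x01, while sqoop export assumes comma-separated fields unless told otherwise via --input-fields-terminated-by. A minimal Python sketch of how that mismatch would surface as the parse error above (the sample line and field values here are illustrative, not taken from the actual data):

```python
# Hive's default terminators: \x01 between fields, \n between rows.
hive_line = "1\x01some text\x0142.5"

# What a parser splitting on the default "," delimiter would see:
fields = hive_line.split(",")          # -> one big token, not three fields

# Coercing that token to a number fails, analogous to the BigDecimal
# constructor failing inside the generated TABLEINFO.__loadFromFields:
try:
    int(fields[0])
    parse_error = None
except ValueError as e:
    parse_error = str(e)

# Splitting on Hive's actual delimiter recovers the columns cleanly:
fields_ok = hive_line.split("\x01")
```

If this is the cause, pointing sqoop export at the Hive delimiter (e.g. --input-fields-terminated-by '\001') would be one thing to try.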

From: Venkat Ranganathan [mailto:[EMAIL PROTECTED]]
Sent: Friday, November 01, 2013 3:50 AM
Subject: Re: Sqoop from/to Oracle


You may want to add the option to remove the delimiters (--hive-drop-import-delims) or replace them (--hive-delims-replacement). That is one option.

Please see http://sqoop.apache.org/docs/1.4.4/SqoopUserGuide.html

Another option would be to use HCatalog import. If you are using Sqoop 1.4.4, you can do the following:

sqoop import  --hcatalog-table schema.table --table schema.table --connect jdbc:oracle:thin:@xxx.xxx.xxx:0000/schema --username someuser --password somepw --split-by id_column --create-hcatalog-table

This will create a new Hive/HCatalog table that uses the RCFile format by default (you can specify --hcatalog-storage-stanza to change it). And you can export from the HCatalog table to the Oracle table also (just change import to export and remove the --create-hcatalog-table option).
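Following that description, the export counterpart of the import command above would presumably look like the sketch below (the connection details are placeholders carried over from the thread, and this assumes Sqoop 1.4.4+ with HCatalog support):

```shell
# Export back from the HCatalog table to Oracle: same shape as the
# import, with "import" -> "export" and --create-hcatalog-table removed.
sqoop export \
  --hcatalog-table schema.table \
  --table schema.table \
  --connect jdbc:oracle:thin:@xxx.xxx.xxx:0000/schema \
  --username someuser --password somepw
```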


On Thu, Oct 31, 2013 at 9:37 AM, Martin, Nick <[EMAIL PROTECTED]> wrote:
Jarcec - I've attached the task tracker logs for one of the failed map tasks below. I took Venkat's suggestion and imported the data into Hive using the following command:

sqoop import  --hive-import -m8 --hive-table schema.table --target-dir schema.table --table schema.table --connect jdbc:oracle:thin:@xxx.xxx.xxx:0000/schema --username someuser --password somepw --split-by id_column

Then tried to export from Hive to Oracle using (and it popped the error):

sqoop export  -m8 --table schema.table --export-dir /user/username/file --connect jdbc:oracle:thin:@xxx.xxx.xxx.com:0000/schema --username someuser --password somepw

I checked the rows Sqoop said had errors but didn't see any anomalies.
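Rows that look fine by eye can still carry non-printable bytes (such as Hive's \x01 field terminator) that a visual check won't catch. A hypothetical helper for scanning a local copy of a part file for such bytes (the path in the usage comment is illustrative):

```python
def find_control_bytes(path, report_limit=5):
    """Report lines containing non-printable bytes, e.g. Hive's \\x01 delimiter.

    Returns a list of (line_number, [hex codes of offending bytes]) tuples,
    stopping after report_limit hits.
    """
    hits = []
    with open(path, "rb") as f:
        for lineno, line in enumerate(f, 1):
            # Flag control bytes below 0x20, ignoring tab (0x09).
            bad = {b for b in line.rstrip(b"\n") if b < 0x20 and b != 0x09}
            if bad:
                hits.append((lineno, sorted(hex(b) for b in bad)))
            if len(hits) >= report_limit:
                break
    return hits

# Usage (path is hypothetical):
# find_control_bytes("/tmp/part-m-00000")
```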

Task Tracker logs (I had to strip some of the values and replace with generic values, but left the data types/lengths the same):

syslog logs
2013-10-31 11:45:01,054 INFO org.apache.hadoop.util.NativeCodeLoader: Loaded the native-hadoop library
2013-10-31 11:45:01,532 INFO org.apache.hadoop.util.ProcessTree: setsid exited with exit code 0
2013-10-31 11:45:01,535 INFO org.apache.hadoop.mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@6a0da90c
2013-10-31 11:45:01,671 INFO org.apache.hadoop.mapred.MapTask: Processing split: Paths:/user/UserName/table_name/part-m-00000:0+67108864,/user/UserName/table_name/part-m-00000:67108864+67108864
2013-10-31 11:45:15,162 INFO com.hadoop.compression.lzo.GPLNativeCodeLoader: Loaded native gpl library
2013-10-31 11:45:15,164 INFO com.hadoop.compression.lzo.LzoCodec: Successfully loaded & initialized native-lzo library [hadoop-lzo rev cf4e7cbf8ed0f0622504d008101c2729dc0c9ff3]
2013-10-31 11:45:15,167 WARN org.apache.hadoop.io.compress.snappy.LoadSnappy: Snappy native library is available
2013-10-31 11:45:15,167 INFO org.apache.hadoop.io.compress.snappy.LoadSnappy: Snappy native library loaded
2013-10-31 11:45:15,197 ERROR org.apache.sqoop.mapreduce.TextExportMapper:
2013-10-31 11:45:15,197 ERROR org.apache.sqoop.mapreduce.TextExportMapper: Exception raised during data export
2013-10-31 11:45:15,197 ERROR org.apache.sqoop.mapreduce.TextExportMapper:
2013-10-31 11:45:15,197 ERROR org.apache.sqoop.mapreduce.TextExportMapper: Exception:
        at java.math.BigDecimal.<init>(BigDecimal.java:459)
        at java.math.BigDecimal.