Sqoop Import into Hive
Hi,
I'm trying to import from my Oracle DB and insert the data into Hive. For that I used the script below:

./sqoop-import --connect jdbc:oracle:thin:@10.198.100.100:1521/cp.TNSNAME --username scott --password tiger --table=EMPLOYEE --hive-table Employee --create-hive-table --hive-import --hive-home /path to hive home/

But I am getting the error below:

13/06/24 09:06:10 ERROR security.UserGroupInformation: PriviledgedActionException as:root cause:org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory EMPLOYEE already exists
13/06/24 09:06:10 ERROR tool.ImportTool: Encountered IOException running import job: org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory EMPLOYEE already exists
        at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:137)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:889)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:850)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:850)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:500)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:530)
        at org.apache.sqoop.mapreduce.ImportJobBase.doSubmitJob(ImportJobBase.java:173)
        at org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:151)
        at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:221)
        at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:545)
        at org.apache.sqoop.manager.OracleManager.importTable(OracleManager.java:380)
        at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:403)
        at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:476)
        at org.apache.sqoop.Sqoop.run(Sqoop.java:145)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
        at org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
        at org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)
        at org.apache.sqoop.Sqoop.main(Sqoop.java:238)
I am unable to find that directory in my HDFS. I tried executing the dfs command, but there is no such directory. Please help me.
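For reference, the dfs checks I tried looked roughly like the commands below. The /user/root path is only my guess at where Sqoop would create the staging directory, since the job runs as root and the table is EMPLOYEE:

# list my HDFS home directory to look for a leftover EMPLOYEE staging directory
hadoop fs -ls /user/root

# if it were there, I would remove it before re-running the import
# (hadoop fs -rm -r <dir> on newer Hadoop releases)
hadoop fs -rmr /user/root/EMPLOYEE

The listing does not show any EMPLOYEE directory, which is why I cannot see where the FileAlreadyExistsException is coming from.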

Thanks,
Manickam P