Sqoop user mailing list: Sqoop Export from Java Program - HDFS Path Assignment issue


Vijay 2013-06-11, 05:09
Re: Sqoop Export from Java Program - HDFS Path Assignment issue
Hi,

Do you have hadoop/conf/* in your classpath?

- Alex
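
The reason this matters: new Configuration() only points at HDFS when core-site.xml is visible, for example via hadoop/conf on the classpath; otherwise fs.default.name falls back to the local file:/// scheme. A minimal sketch of a quick check (the class name FsCheck is illustrative; fs.default.name is the Hadoop 1.x key):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class FsCheck {
        public static void main(String[] args) throws Exception {
            // With hadoop/conf on the classpath this prints the hdfs://...
            // URI from core-site.xml; without it, the default local
            // file:/// file system is returned.
            Configuration conf = new Configuration();
            System.out.println(conf.get("fs.default.name"));
            System.out.println(FileSystem.get(conf).getUri());
        }
    }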

On Jun 11, 2013, at 7:09 AM, Vijay <[EMAIL PROTECTED]> wrote:

> I have written a Java program to export data from the HDFS file system to a MySQL database. When I run the program, it tries to read from the local file system instead of the assigned HDFS file system. Can anyone please review the code below and let me know the issue?
>
> My configuration:
>
> Running a single-node server on Red Hat Linux 5
> Hadoop 1.2.0
> Sqoop 1.4.3
> Running from Eclipse
> Program follows:
>
>     package com.archival.da;
>
>     import java.io.IOException;
>     import javax.servlet.ServletException;
>     import javax.servlet.http.HttpServlet;
>     import javax.servlet.http.HttpServletRequest;
>     import javax.servlet.http.HttpServletResponse;
>     import java.sql.Connection;
>     import java.sql.PreparedStatement;
>     import java.sql.ResultSet;
>     import org.apache.hadoop.conf.*;
>     import org.apache.hadoop.fs.*;
>
>     import com.cloudera.sqoop.*;
>     import com.cloudera.sqoop.tool.ExportTool;
>
>     @SuppressWarnings("serial")
>     public class DataExport extends HttpServlet {
>
>         @SuppressWarnings("deprecation")
>         public void doPost(HttpServletRequest request, HttpServletResponse response)
>                 throws ServletException, IOException {
>             response.setContentType("text/html");
>
>             String run_id = request.getParameter("run_id");
>             Connection con = GetCon.getCon();
>             PreparedStatement ps1;
>             try {
>                 String driver = "com.mysql.jdbc.Driver";
>                 Class.forName(driver).newInstance();
>
>                 // Get running process Run ID to track and update status
>                 ps1 = con.prepareStatement("SELECT POLICY.SRC_TABLE,POLICY.SRC_DB,CON.SERVER,CON.PORT,RT.RUN_DATE,CON.USER,CON.PWD FROM POLICY JOIN CONNECTION AS CON ON POLICY.C_ID=CON.C_ID JOIN RUN_TRACKER AS RT ON POLICY.ID=RT.POLICY_ID AND RUN_ID=?");
>                 ps1.setString(1, run_id);
>                 ResultSet rs1 = ps1.executeQuery();
>                 rs1.next();
>                 String tgtTable = rs1.getString(1);
>                 String runDate = rs1.getDate(5).toString();
>                 String newRunDate = runDate.replace("-", "_");
>                 String restore_dir = tgtTable + "_" + newRunDate;
>                 String ServerNm = "jdbc:mysql://" + rs1.getString(3) + ":" + rs1.getString(4) + "/" + rs1.getString(2);
>                 String ConUser = rs1.getString(6);
>                 String ConPass = rs1.getString(7);
>
>                 Configuration config = new Configuration();
>                 config.addResource(new Path("/ms/hadoop-1.2.0/conf/core-site.xml"));
>                 config.addResource(new Path("/ms/hadoop-1.2.0/conf/hdfs-site.xml"));
>                 FileSystem dfs = FileSystem.get(config);
>                 String exportDir = dfs.getWorkingDirectory() + "/" + restore_dir;
>                 System.out.println(exportDir);
>                 Path path = new Path(exportDir);
>
>                 SqoopOptions options = new SqoopOptions();
>                 options.setDriverClassName(driver);
>                 options.setHadoopMapRedHome("/ms/hadoop-1.2.0");
>                 options.setConnectString(ServerNm);
>                 options.setUsername(ConUser);
>                 options.setPassword(ConPass);
>                 options.setExportDir(exportDir);
>                 options.setTableName(tgtTable);
>                 options.setInputFieldsTerminatedBy(',');
>                 options.setNumMappers(1);
>
>                 int status = new ExportTool().run(options);
>                 System.out.println(status);
>                 if (status == 0) {
>                     dfs.delete(path, true);
>                 }
>
>                 response.getWriter().write("Restore Process Completed");
>                 con.close();
>             } catch (Exception e) {
>                 e.printStackTrace();
>             }
>         }
>     }
> Console error message (I also printed exportDir to confirm it is assigned the HDFS path):
>
> hdfs://localhost:9000/user/root/city_2013_06_10
> 13/06/09 21:00:22 WARN sqoop.ConnFactory: $SQOOP_CONF_DIR has not been set in the environment. Cannot check for additional configuration.

Alexander Alten-Lorenz
http://mapredit.blogspot.com
German Hadoop LinkedIn Group: http://goo.gl/N8pCF
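
For anyone who lands on this thread later: even though exportDir prints as an hdfs:// path, ExportTool builds its own Configuration, which falls back to the local file system when the Hadoop conf directory is not on the classpath (hence the $SQOOP_CONF_DIR warning above). A sketch of one possible fix against the Sqoop 1.4.x API used in the post (setConf on SqoopOptions is an assumption to verify against your Sqoop version; the class name HdfsAwareExport is illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;

    import com.cloudera.sqoop.SqoopOptions;
    import com.cloudera.sqoop.tool.ExportTool;

    public class HdfsAwareExport {
        // Sketch only: hand Sqoop the same Configuration that defines
        // fs.default.name, so the export directory resolves against HDFS
        // instead of the local file system. Paths match the original post.
        public static int export(String exportDir, String table) {
            Configuration config = new Configuration();
            config.addResource(new Path("/ms/hadoop-1.2.0/conf/core-site.xml"));
            config.addResource(new Path("/ms/hadoop-1.2.0/conf/hdfs-site.xml"));

            SqoopOptions options = new SqoopOptions();
            options.setConf(config); // assumption: available in Sqoop 1.4.3
            options.setExportDir(exportDir);
            options.setTableName(table);
            options.setNumMappers(1);
            // ...connect string, credentials, delimiters as in the original...
            return new ExportTool().run(options);
        }
    }

Alternatively, adding hadoop/conf to the Eclipse run classpath (Alex's suggestion) or setting $SQOOP_CONF_DIR achieves the same result without code changes.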