Sqoop, mail # user - Should the source database instance be in the same network with Hadoop cluster?


Re: Should the source database instance be in the same network with Hadoop cluster?
Cheolsoo Park 2012-05-17, 09:51
Hi Jason,

> I found that during the process of import, Sqoop submits a map-only
> Hadoop job to the cluster, which actually does the data transfer. So I
> was wondering: does the source database need to be in the same network
> as the Hadoop cluster?
Not necessarily in the same network, but the Hadoop jobs must be able to
access the DB, since each map task connects to it directly via JDBC to
import data.

> java.lang.RuntimeException: java.lang.RuntimeException:
> com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications
> link failure
This exception is thrown by your JDBC driver. Can you confirm that you can
connect to the DB from the Hadoop cluster via JDBC? For example, you could
write a small program that connects to the DB via JDBC and run it on the
Hadoop cluster.
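A minimal sketch of such a check might look like the following. This is an illustration only, not code from the thread: the class name `JdbcCheck` and the helper `jdbcUrl` are hypothetical, and the host/port/database values are taken from the command shown in Jason's output. It assumes the MySQL Connector/J jar is on the classpath; run it on a worker node and pass the DB password as the first argument.

```java
// Hypothetical standalone JDBC connectivity check (not from the original
// thread). Run it on a Hadoop worker node to verify the MySQL server is
// reachable via JDBC, independent of Sqoop.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcCheck {

    // Build the JDBC URL from its parts; kept as a pure helper so it is
    // easy to verify in isolation.
    static String jdbcUrl(String host, int port, String db) {
        return "jdbc:mysql://" + host + ":" + port + "/" + db;
    }

    public static void main(String[] args) throws Exception {
        // Host, port, and database taken from the sqoop command in the log.
        String url = jdbcUrl("172.18.11.54", 3306, "hive");
        try (Connection conn = DriverManager.getConnection(url, "root", args[0]);
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT 1")) {
            rs.next();
            // If this prints, basic JDBC connectivity from this node works.
            System.out.println("JDBC connection OK: " + rs.getInt(1));
        }
    }
}
```

If this program fails with the same CommunicationsException on a worker node, the problem is network access or MySQL's bind/grant configuration, not Sqoop itself.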

Hope this is helpful.

Thanks,
Cheolsoo

On Thu, May 17, 2012 at 2:15 AM, jason Yang <[EMAIL PROTECTED]> wrote:

> Hello,
>
> Recently, I have been trying to import MySQL data into Hive using Sqoop,
> but I have encountered a "Communications link failure" error (the output
> is attached at the end of this mail).
>
> To solve this problem, I checked the Sqoop user guide and applied the
> solution mentioned in the troubleshooting section; however, it didn't
> work. After that, I read some posts about the basic process of a Sqoop
> import, and found that during the import Sqoop submits a map-only Hadoop
> job to the cluster, which actually does the data transfer. So I was
> wondering: does the source database need to be in the same network as
> the Hadoop cluster?
>
> Any suggestion would be appreciated.
>
>
> --output--
> [root@node10 ~]# sqoop import --connect jdbc:mysql://172.18.11.54:3306/hive --username root -P --table test --hive-import -m1
> Enter password:
> 12/05/17 15:33:10 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
> 12/05/17 15:33:10 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
> 12/05/17 15:33:10 INFO tool.CodeGenTool: Beginning code generation
> 12/05/17 15:33:10 INFO manager.MySQLManager: Executing SQL statement: SELECT t.* FROM `test` AS t LIMIT 1
> 12/05/17 15:33:10 INFO manager.MySQLManager: Executing SQL statement: SELECT t.* FROM `test` AS t LIMIT 1
> 12/05/17 15:33:11 INFO orm.CompilationManager: HADOOP_HOME is /srv/hadoop-0.20.2/bin/..
> 12/05/17 15:33:11 INFO orm.CompilationManager: Found hadoop core jar at: /srv/hadoop-0.20.2/bin/../hadoop-0.20.2-core.jar
> 12/05/17 15:33:12 ERROR orm.CompilationManager: Could not rename /tmp/sqoop-root/compile/6571e4a050817f31b3846917ae805e49/test.java to /root/./test.java
> 12/05/17 15:33:12 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/6571e4a050817f31b3846917ae805e49/test.jar
> 12/05/17 15:33:12 WARN manager.MySQLManager: It looks like you are importing from mysql.
> 12/05/17 15:33:12 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
> 12/05/17 15:33:12 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
> 12/05/17 15:33:12 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
> 12/05/17 15:33:12 INFO mapreduce.ImportJobBase: Beginning import of test
> 12/05/17 15:33:12 INFO manager.MySQLManager: Executing SQL statement: SELECT t.* FROM `test` AS t LIMIT 1
> 12/05/17 15:33:13 INFO mapred.JobClient: Running job: job_201205151001_0019
> 12/05/17 15:33:14 INFO mapred.JobClient:  map 0% reduce 0%
> 12/05/17 15:33:23 INFO mapred.JobClient: Task Id : attempt_201205151001_0019_m_000000_0, Status : FAILED
> java.lang.RuntimeException: java.lang.RuntimeException: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
>
> The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
>         at com.cloudera.sqoop.mapreduce.db.DBInputFormat.setConf(DBInputFormat.java:164)