Re: sqooping into S3
If you look at the error, you can see that the FS object being referenced is
an HDFS location, which is not valid, as you have an S3 filesystem as the
source of the data.

I don't know what your intention is. You say you want a Hive import from
MySQL to S3. Do you mean a Sqoop import? If you just want the files to land
on S3, then you don't need the --hive-import and --hive-overwrite options.
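
For example, a plain Sqoop import can write straight to an S3 target
directory. A minimal sketch; the host, database, table, and bucket names
here are placeholders, and it assumes your S3 credentials are already set
in Hadoop's fs.s3n.* properties:

    # hypothetical connect string, table, and bucket; -P prompts for the password
    sqoop import \
      --connect jdbc:mysql://mysql-host/mydb \
      --username myuser \
      -P \
      --table mytable \
      --target-dir s3n://my-bucket/sqoop/mytable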

To do a Hive import from an S3 file, you probably have to set the Hive
warehouse directory to be on S3.
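
If you go that route, one way (a sketch, with a hypothetical bucket name) is
to point hive.metastore.warehouse.dir at S3 when starting the Hive CLI; you
could also set the same property permanently in hive-site.xml:

    # override the warehouse location on S3 for this session only
    hive --hiveconf hive.metastore.warehouse.dir=s3n://my-bucket/hive/warehouse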

You can also create an external table in Hive after the data lands on S3.
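
Something like the following, as a sketch: the column list is an assumption,
the comma delimiter matches Sqoop's default text output, and the location
matches the hypothetical --target-dir above:

    # external table over the files Sqoop wrote to S3; dropping it leaves the data in place
    hive -e "CREATE EXTERNAL TABLE mytable (id INT, name STRING)
             ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
             LOCATION 's3n://my-bucket/sqoop/mytable/';"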

Venkat
On Tue, Feb 4, 2014 at 1:50 PM, Imran Akbar <[EMAIL PROTECTED]> wrote:

 