Subject: Re: sqooping into S3
Sorry for the delay.  I have not really tested S3 as the default
warehouse directory, so it may be a Sqoop or Hive issue.  That said, if you
want Sqoop to create the Hive table, you need to use the --hive-import option.
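A minimal sketch of what that invocation could look like (the connection string, table name, and S3 bucket below are placeholders, not from the original thread):

```shell
# Hypothetical example: import a table and let Sqoop create the Hive table.
# --hive-import tells Sqoop to generate the Hive DDL and load the data;
# the JDBC URL, credentials, table, and target directory are assumptions.
sqoop import \
  --connect jdbc:mysql://db.example.com/mydb \
  --username myuser -P \
  --table orders \
  --hive-import \
  --hive-table orders \
  --target-dir s3://my-bucket/sqoop/orders
```

Whether the final Hive warehouse location on S3 behaves correctly depends on the Hive configuration, as noted above.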

Or, instead of doing an initial import to S3 and then a "LOAD" into Hive
(which actually performs a rename), you can try the HCatalog import
option.   That option creates the Hive table through the HCatalog
interfaces, so you may have better success creating the Hive table and
pushing the data.   HCatalog import and export are available as part of Sqoop.
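The HCatalog path described above could be sketched like this (again, the database, table, and connection details are illustrative assumptions):

```shell
# Hypothetical example: import via HCatalog so the Hive table is created
# through the HCatalog interfaces instead of a post-import LOAD/rename.
# --create-hcatalog-table asks Sqoop to create the table if it does not exist.
sqoop import \
  --connect jdbc:mysql://db.example.com/mydb \
  --username myuser -P \
  --table orders \
  --hcatalog-database default \
  --hcatalog-table orders \
  --create-hcatalog-table
```

Because HCatalog writes through the table's storage handlers, this avoids the rename step that a Hive LOAD performs, which is the advantage suggested above for S3-backed tables.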


On Tue, Feb 4, 2014 at 4:58 PM, Imran Akbar <[EMAIL PROTECTED]> wrote:
