Re: sqooping into S3
Yes, Imran,
I would try defining fs.defaultFS for S3 in core-site.xml and see if that helps Sqoop accept the S3 path.

Jarcec
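
A minimal core-site.xml sketch of the suggestion above, assuming the s3n:// connector; the bucket name and keys are placeholders, and the credential property names apply to s3n only:

    <!-- sketch only: bucket name and keys below are placeholders -->
    <property>
      <name>fs.defaultFS</name>
      <value>s3n://bucketname</value>
    </property>
    <property>
      <name>fs.s3n.awsAccessKeyId</name>
      <value>MYS3APIKEY</value>
    </property>
    <property>
      <name>fs.s3n.awsSecretAccessKey</name>
      <value>MYS3SECRETKEY</value>
    </property>

With the credentials in core-site.xml, the keys no longer need to be embedded in the s3n:// URI passed to --target-dir or --warehouse-dir.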

On Tue, Feb 04, 2014 at 08:08:17AM -0800, Imran Akbar wrote:
> thanks Jarek,
>    How would I do that?  Do I need to set fs.defaultFS in core-site.xml, or
> is it something else?  Is there a document somewhere which describes this?
>
> yours,
> imran
>
>
> On Mon, Feb 3, 2014 at 9:31 PM, Jarek Jarcec Cecho <[EMAIL PROTECTED]> wrote:
>
> > Would you mind trying to set the S3 filesystem as the default one for
> > Sqoop?
> >
> > Jarcec
> >
> > On Mon, Feb 03, 2014 at 10:25:50AM -0800, Imran Akbar wrote:
> > > Hi,
> > > I've been able to sqoop from MySQL into HDFS, but I was wondering if it
> > > was possible to send the data directly to S3 instead.  I've read some
> > > posts on this forum and others that indicate that it's not possible to
> > > do this - could someone confirm?
> > >
> > > I tried to get it to work by setting:
> > > --warehouse-dir s3n://MYS3APIKEY:MYS3SECRETKEY@bucketname/folder/
> > > or
> > > --target-dir s3n://MYS3APIKEY:MYS3SECRETKEY@bucketname/folder/
> > >
> > > options but I get the error:
> > > ERROR tool.ImportTool: Imported Failed: This file system object
> > > (hdfs://10.168.22.133:9000) does not support access to the request path
> > > 's3n://****:****@iakbar.emr/new-hive-output/_logs' You possibly called
> > > FileSystem.get(conf) when you should have called FileSystem.get(uri, conf)
> > > to obtain a file system supporting your path
> > >
> > > If it's not possible to do this, should I just import to HDFS and then
> > > output to S3?  Is there an easy way to do this without having to specify
> > > the schema of the whole table again?
> > >
> > > thanks,
> > > imran
> >
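
Two rough command sketches for the approaches discussed above, untested and with placeholder connect string, table name, bucket, and paths (only the hdfs:// address comes from the error message quoted earlier). The first passes the S3 settings per invocation with -D instead of editing core-site.xml; the second covers the HDFS-then-S3 fallback Imran asks about, where hadoop distcp copies the already-imported files without needing the table schema again:

    # Sketch: per-job override instead of editing core-site.xml (placeholders throughout)
    sqoop import \
      -D fs.defaultFS=s3n://bucketname \
      -D fs.s3n.awsAccessKeyId=MYS3APIKEY \
      -D fs.s3n.awsSecretAccessKey=MYS3SECRETKEY \
      --connect jdbc:mysql://dbhost/mydb \
      --username dbuser --password dbpass \
      --table mytable \
      --target-dir s3n://bucketname/folder/

    # Sketch: import to HDFS first, then copy the resulting files to S3 with distcp
    hadoop distcp \
      hdfs://10.168.22.133:9000/user/hadoop/mytable \
      s3n://MYS3APIKEY:MYS3SECRETKEY@bucketname/folder/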


 