Pig >> mail # user >> Copying files to Amazon S3 using Pig is slow


Copying files to Amazon S3 using Pig is slow
I want to copy 26,000 HDFS files generated by a pig script to Amazon S3.

I am using the copyToLocal command, but the copy throughput is only about one
file per second, so it will take roughly seven hours to copy all the files.

The command I am using is: copyToLocal /tmp/files/ s3://my-bucket/

Does anyone have any ideas on how I could speed this up?

Thanks,
James
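One commonly suggested alternative for this kind of bulk HDFS-to-S3 transfer is `hadoop distcp`, which runs the copy as a MapReduce job so many files move in parallel instead of one at a time. A minimal sketch follows, assuming the paths from the question, that S3 credentials are already configured (e.g. in core-site.xml), and that the `s3n://` scheme of Hadoop versions of that era is in use; the mapper count of 20 is an illustrative value to tune to your cluster, not a recommendation from the thread.

```shell
# Parallel copy of the whole directory in one MapReduce job.
# -m caps the number of simultaneous map tasks (assumed value; tune to cluster size).
hadoop distcp -m 20 hdfs:///tmp/files s3n://my-bucket/files
```

With 26,000 small files, the per-file S3 PUT latency dominates, so spreading the work across mappers is what recovers throughput; consolidating the Pig output into fewer, larger files before copying would help for the same reason.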
Replies:
- Dan Young 2012-06-09, 15:13
- Aniket Mokashi 2012-06-08, 22:24
- Mohit Anchlia 2012-06-08, 23:24