I want to copy 26,000 HDFS files generated by a pig script to Amazon S3.
I am using the copyToLocal command, but I noticed the copy throughput is
only about one file per second, so it is going to take roughly 7 hours to
copy all the files.
The command I am using is: copyToLocal /tmp/files/ s3://my-bucket/
Does anyone have any ideas how I could speed this up?
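The 7-hour estimate above is easy to check; a quick sketch of the arithmetic, using the file count and the one-file-per-second rate reported in the question:

```python
# Rough time estimate for the copy described above:
# 26,000 files at roughly one file per second.
total_files = 26_000
files_per_second = 1

total_seconds = total_files / files_per_second
total_hours = total_seconds / 3600

print(f"Estimated copy time: {total_hours:.1f} hours")  # about 7.2 hours
```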
Dan Young 2012-06-09, 15:13
Aniket Mokashi 2012-06-08, 22:24
Mohit Anchlia 2012-06-08, 23:24