I want to copy 26,000 HDFS files generated by a pig script to Amazon S3.
I am using the copyToLocal command, but I noticed the copy throughput is
only about one file per second, so it is going to take roughly 7 hours to
copy all of them.
The command I am using is: copyToLocal /tmp/files/ s3://my-bucket/
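For reference, the back-of-the-envelope math behind the 7-hour figure (a quick sanity check, assuming a steady rate of one file per second):

```python
# Estimate total copy time for 26,000 files at ~1 file/second.
total_files = 26_000
files_per_second = 1

total_seconds = total_files / files_per_second
hours = total_seconds / 3600

print(f"{hours:.1f} hours")  # roughly 7.2 hours
```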
Does anyone have any ideas how I could speed this up?