All the examples I have found run a MapReduce job on a single file, but in my
situation I have more than one.
Suppose I have a folder on HDFS that contains these files:

file1.txt
file2.txt
file3.txt
How can I run a Hadoop MapReduce job on file1.txt, file2.txt, and file3.txt?
Is it possible to pass the folder itself as a parameter to the job so that all
of its files are processed by the MapReduce job?
Thanks in advance.