
Regarding MapReduce Input Format

I came across the question below, and I feel 'D' is the correct answer, but
some sites say 'B' is correct. Can you please tell me which one is right,
with an explanation?

In a MapReduce job, you want each of your input files processed by a single
map task. How do you configure a MapReduce job so that a single map task
processes each input file, regardless of how many blocks the input file
occupies?
A. Increase the parameter that controls minimum split size in the job.
B. Write a custom MapRunner that iterates over all key-value pairs in the
entire file.
C. Set the number of mappers equal to the number of input files you want to
process.
D. Write a custom FileInputFormat and override the method isSplitable to
always return false.
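For context, option D can be sketched roughly as follows against the newer
`org.apache.hadoop.mapreduce` API. This is a minimal illustration, not code
from the original thread; the class name `WholeFileTextInputFormat` is my
own invention:

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

// Hypothetical input format: by returning false from isSplitable, the
// framework generates exactly one input split (and thus one map task)
// per input file, no matter how many HDFS blocks the file spans.
public class WholeFileTextInputFormat extends TextInputFormat {
    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        return false;
    }
}
```

A job would then select it with
`job.setInputFormatClass(WholeFileTextInputFormat.class);`. Note that the
records within the file are still read one at a time; only the splitting
into per-block map tasks is suppressed.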

Harsh J 2012-11-07, 16:38