Regarding MapReduce Input Format
Hi,

I came across the question below. I feel 'D' is the correct answer, but some sites say 'B' is correct. Can you please tell me which one is right, with an explanation?

In a MapReduce job, you want each of your input files processed by a single map task. How do you configure a MapReduce job so that a single map task processes each input file, regardless of how many blocks the input file occupies?
A. Increase the parameter that controls minimum split size in the job configuration.
B. Write a custom MapRunner that iterates over all key-value pairs in the entire file.
C. Set the number of mappers equal to the number of input files you want to process.
D. Write a custom FileInputFormat and override the method isSplitable to always return false.

regards,
Rams
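
For reference, a minimal sketch of what option D looks like, assuming the newer org.apache.hadoop.mapreduce API; the class name NonSplittableTextInputFormat is just illustrative (note that Hadoop spells the method isSplitable, with a single 't'):

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

// Each input file becomes exactly one split, and therefore one map task,
// no matter how many HDFS blocks the file spans.
public class NonSplittableTextInputFormat extends TextInputFormat {
    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        return false; // never split: the whole file goes to a single map task
    }
}

Registering it with job.setInputFormatClass(NonSplittableTextInputFormat.class) makes FileInputFormat produce exactly one split per input file, since splitting is disabled.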
Harsh J 2012-11-07, 16:38