
Hadoop, mail # user - Re: number of mapper tasks

Vinod Kumar Vavilapalli 2013-01-29, 20:08
Tried looking at your code; it's a bit involved. Instead of trying to run
the job, try unit-testing your input format. Test getSplits(): whatever
number of splits that method returns will be the number of mappers
that run.
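A minimal sketch of such a unit check, assuming Hadoop's new (mapreduce) API is on the classpath. It uses the stock NLineInputFormat; swap in the custom CSVNLineInputFormat from this thread to test it. The file name and line counts are illustrative assumptions, not from the thread:

```java
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.NLineInputFormat;

public class GetSplitsCheck {

    // Returns the number of splits -- and therefore mappers -- the format
    // would produce for the given input file and lines-per-map setting.
    static int countSplits(String path, int linesPerMap) throws Exception {
        Configuration conf = new Configuration();
        conf.setInt("mapreduce.input.lineinputformat.linespermap", linesPerMap);
        Job job = Job.getInstance(conf);
        FileInputFormat.addInputPath(job, new Path(path));
        List<InputSplit> splits = new NLineInputFormat().getSplits(job);
        return splits.size();
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical local file: a 250-line sample.csv with
        // linespermap=100 should yield 3 splits.
        System.out.println("splits = " + countSplits("sample.csv", 100));
    }
}
```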

You can also use LocalJobRunner for this - set mapred.job.tracker to
local and run your job locally on your machine instead of on a cluster.
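A configuration sketch of that setup, using the old-style property names that match this thread's Hadoop era (the class and helper name are mine):

```java
import org.apache.hadoop.conf.Configuration;

public class LocalRunnerConf {
    // Properties that force the in-process LocalJobRunner for debugging.
    public static Configuration localConf() {
        Configuration conf = new Configuration();
        conf.set("mapred.job.tracker", "local"); // run tasks in-process, no cluster
        conf.set("fs.default.name", "file:///"); // read the local filesystem, not HDFS
        return conf;
    }
}
```

Pass the resulting Configuration to the job driver; the whole job then runs in a single JVM, which makes stepping through getSplits() in a debugger straightforward.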


On Tue, Jan 29, 2013 at 4:53 AM, Marcelo Elias Del Valle <[EMAIL PROTECTED]
> wrote:

> Hello,
>     I have been able to make this work. I don't know why, but when the
> input file is zipped (read as an input stream) it creates only 1 mapper.
> However, when it's not zipped, it creates more mappers (running 3 instances
> it created 4 mappers, and running 5 instances it created 8 mappers).
>     I would really like to know why this happens and, even with this number
> of mappers, why more aren't created. I was
> reading part of the book "Hadoop - The definitive guide" (
> https://www.inkling.com/read/hadoop-definitive-guide-tom-white-3rd/chapter-7/input-formats)
> which says:
> "The JobClient calls the getSplits() method, passing the desired number
> of map tasks as the numSplits argument. This number is treated as a hint,
> as InputFormat implementations are free to return a different number of
> splits to the number specified in numSplits. Having calculated the
> splits, the client sends them to the jobtracker, which uses their storage
> locations to schedule map tasks to process them on the tasktrackers. ..."
>      I am not sure how to get more info.
>      Would you recommend trying to find the answer in the book? Or
> should I read the Hadoop source code directly?
> Best regards,
> Marcelo.
> 2013/1/29 Marcelo Elias Del Valle <[EMAIL PROTECTED]>
>> I implemented my custom input format. Here is how I used it:
>> https://github.com/mvallebr/CSVInputFormat/blob/master/src/test/java/org/apache/hadoop/mapreduce/lib/input/test/CSVTestRunner.java
>> As you can see, I do:
>> importerJob.setInputFormatClass(CSVNLineInputFormat.class);
>> And here is the Input format and the linereader:
>> https://github.com/mvallebr/CSVInputFormat/blob/master/src/main/java/org/apache/hadoop/mapreduce/lib/input/CSVNLineInputFormat.java
>> https://github.com/mvallebr/CSVInputFormat/blob/master/src/main/java/org/apache/hadoop/mapreduce/lib/input/CSVLineRecordReader.java
>> In this input format, I completely ignore these other parameters and get
>> the splits by the number of lines. The number of lines per map can be
>> controlled by the same parameter used in NLineInputFormat:
>> public static final String LINES_PER_MAP = "mapreduce.input.lineinputformat.linespermap";
>> However, it really has no effect on the number of maps.
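If an input format honors that parameter, the expected mapper count is just the ceiling of totalLines / linesPerMap. A pure-Java sketch of that arithmetic (the helper names are mine) can serve as a sanity check against the number of map tasks the job actually launches:

```java
public class NLineSplitCount {
    // Expected number of splits (and thus mappers) when an input format
    // creates one split per `linesPerMap` input lines.
    static int expectedSplits(int totalLines, int linesPerMap) {
        return (totalLines + linesPerMap - 1) / linesPerMap; // ceiling division
    }

    public static void main(String[] args) {
        System.out.println(expectedSplits(1000, 100)); // 10
        System.out.println(expectedSplits(1001, 100)); // 11
        System.out.println(expectedSplits(50, 100));   // 1
    }
}
```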
>> 2013/1/29 Vinod Kumar Vavilapalli <[EMAIL PROTECTED]>
>>> Regarding your original question, you can use the min and max split
>>> settings to control the number of maps:
>>> http://hadoop.apache.org/docs/stable/api/org/apache/hadoop/mapreduce/lib/input/FileInputFormat.html. See #setMinInputSplitSize and #setMaxInputSplitSize. Or
>>> use mapred.min.split.size directly.
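For the stock FileInputFormat, the effective split size is max(minSize, min(maxSize, blockSize)). The sketch below mirrors that formula (the helper names are mine, and it ignores Hadoop's small split-slop tolerance on the final split) to show how lowering the max split size raises the mapper count:

```java
public class SplitMath {
    // Mirrors FileInputFormat's split-size rule: max(minSize, min(maxSize, blockSize)).
    static long computeSplitSize(long blockSize, long minSize, long maxSize) {
        return Math.max(minSize, Math.min(maxSize, blockSize));
    }

    // Approximate split count for one file: ceiling(fileLength / splitSize).
    static long numSplits(long fileLength, long splitSize) {
        return (fileLength + splitSize - 1) / splitSize;
    }

    public static void main(String[] args) {
        long blockSize = 64L * 1024 * 1024;   // assumed 64 MB block size
        long fileLength = 200L * 1024 * 1024; // assumed 200 MB input file
        // Capping the max split size at 16 MB forces more splits, hence more mappers:
        long splitSize = computeSplitSize(blockSize, 1L, 16L * 1024 * 1024);
        System.out.println(numSplits(fileLength, splitSize)); // 13
    }
}
```

Note this only applies to splittable inputs; a gzipped file cannot be split this way, which is consistent with the single mapper seen for the zipped input earlier in the thread.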
>>> W.r.t your custom inputformat, are you sure your job is using this
>>> InputFormat and not the default one?
>>>  HTH,
>>> +Vinod Kumar Vavilapalli
>>> Hortonworks Inc.
>>> http://hortonworks.com/
>>> On Jan 28, 2013, at 12:56 PM, Marcelo Elias Del Valle wrote:
>>> Just to complement the last question, I have implemented the getSplits
>>> method in my input format:
>>> https://github.com/mvallebr/CSVInputFormat/blob/master/src/main/java/org/apache/hadoop/mapreduce/lib/input/CSVNLineInputFormat.java
>>> However, it still doesn't create more than 2 map tasks. Is there
>>> something I could do about it to assure more map tasks are created?
>>> Thanks
>>> Marcelo.
>>> 2013/1/28 Marcelo Elias Del Valle <[EMAIL PROTECTED]>
>>>> Sorry for asking too many questions, but the answers are really
