You'll need to make significant changes to MapTask.java, and they won't make it back to the mainline.
Why? We had this before and quickly ran out of inodes on the local disks. Think of large jobs: 10,000 maps * 1,000 reduces -> that's 10M files.
On Aug 19, 2012, at 8:57 AM, Pavan Kulkarni wrote:
> Ohh, thanks a lot Harsh. Exactly what I was looking for.
> I wanted to create different file.out's for different reducers. Something
> like file.out.1 for reducer 1, file.out.2 for reducer 2, etc. Is it possible
> to do this in the MapReduce program, or do I need to tweak some Hadoop
> source files for that? Thanks.
> On Sun, Aug 19, 2012 at 7:02 AM, Harsh J <[EMAIL PROTECTED]> wrote:
>> Hey Pavan,
>> Yes, you've got it almost right on how file.out is served to each
>> reducer. See the code at
>> (the method at L502:L565 that serves data for a specific
>> reduce/partition ID (integer)).
>> On Sun, Aug 19, 2012 at 9:05 AM, Pavan Kulkarni <[EMAIL PROTECTED]> wrote:
>>> I was trying to understand how exactly the reducers find out how to
>>> fetch the data of their own partition from the Map nodes.
>>> During the execution of a MapReduce job, I see that *file.out* is created
>>> on the Map nodes, so my question is: how does a reducer
>>> know what part of file.out to fetch? Does *file.out.index* play any role?
>>> Any help is appreciated. Thanks
>>> --With Regards
>>> Pavan Kulkarni
>> Harsh J
> --With Regards
> Pavan Kulkarni
Arun C. Murthy
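[Editor's note] The exchange above turns on one idea: a map task writes a single file.out per task, and a small index file records where each reduce partition's bytes start and end, so each reducer's slice can be served with one seek instead of one file per reducer (which is what exhausts inodes). The sketch below illustrates that idea with an assumed record layout of three longs per partition (start offset, raw length, on-disk length); the class name and exact layout are hypothetical, not Hadoop's actual internals.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.Arrays;

// Hedged sketch: one concatenated "file.out" holding all partitions,
// plus an index with a fixed-size record per reduce partition, so a
// reducer's byte range can be located with a single index lookup.
public class IndexSketch {
    // Assumed layout: (startOffset, rawLength, partLength) per partition.
    static final int RECORD_SIZE = 3 * Long.BYTES;

    public static void main(String[] args) throws IOException {
        byte[][] partitions = {
            "part0-data".getBytes(), "part1!!".getBytes(), "p2".getBytes()
        };

        // Build the single concatenated output file and its index.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ByteBuffer index = ByteBuffer.allocate(partitions.length * RECORD_SIZE);
        long offset = 0;
        for (byte[] p : partitions) {
            index.putLong(offset);    // start offset in file.out
            index.putLong(p.length);  // raw (uncompressed) length
            index.putLong(p.length);  // on-disk length (no compression here)
            out.write(p);
            offset += p.length;
        }
        byte[] fileOut = out.toByteArray();

        // A reducer asking for partition 1: one absolute read into the
        // index record, then one ranged read from file.out.
        int reduceId = 1;
        long start = index.getLong(reduceId * RECORD_SIZE);
        long len = index.getLong(reduceId * RECORD_SIZE + 2 * Long.BYTES);
        byte[] slice =
            Arrays.copyOfRange(fileOut, (int) start, (int) (start + len));
        System.out.println(new String(slice)); // prints partition 1's bytes
    }
}
```

With three longs (24 bytes) per partition, even 1,000 reducers cost a 24 KB index per map task, while the data itself stays in one file per task, which is the inode-count argument Arun makes above.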