Each reducer emits exactly one output file, so one reducer gives you one output file. If you want just a single file as your final result on the local file system, then once the MR job is done use hadoop fs -getmerge.
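For illustration, a minimal sketch of that merge step (the HDFS and local paths here are hypothetical, not from the thread):

```shell
# Run the job with many reducers, then collapse the per-reducer
# part-* files into a single file on the local file system.
# /user/me/wordcount-out is a hypothetical HDFS output directory.
hadoop fs -getmerge /user/me/wordcount-out /tmp/wordcount-merged.txt
```

This keeps the reduce phase parallel and only pays the single-file cost once, at copy-out time.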
Sent from BlackBerry® on Airtel
From: Masoud <[EMAIL PROTECTED]>
Date: Tue, 20 Mar 2012 19:49:01
To: <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Subject: Re: Increasing number of Reducers
Thanks for the reply.
As you know, in this way we will have n final results too.
Is there any way to increase the number of Reducers for faster computation
but still have only one final result?
On 03/20/2012 07:02 PM, [EMAIL PROTECTED] wrote:
> Hi Masoud,
> Set -D mapred.reduce.tasks=n, i.e., to any higher value.
> Sent from BlackBerry® on Airtel
> -----Original Message-----
> From: Masoud<[EMAIL PROTECTED]>
> Date: Tue, 20 Mar 2012 17:52:58
> To:<[EMAIL PROTECTED]>
> Reply-To: [EMAIL PROTECTED]
> Subject: Increasing number of Reducers
> Hi all,
> we have a cluster of 32 machines and are running the C# version of the
> wordcount program on it.
> The Map phase is done by different machines, but the Reduce phase is done
> by only one machine. Our data is around 7 GB of text, and with one machine
> doing the Reduce phase the job runs very slowly.
> Is there any way to increase the number of reducers?
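For reference, the suggestion in this thread to set mapred.reduce.tasks would look roughly like this on the command line (the jar name and paths are hypothetical; mapred.reduce.tasks is the classic pre-YARN property name, and the job must use the generic options parser for -D to take effect):

```shell
# Request 32 reduce tasks for the wordcount job.
# The -D generic option must come before the job-specific arguments.
hadoop jar hadoop-examples.jar wordcount \
    -D mapred.reduce.tasks=32 \
    /user/me/input /user/me/wordcount-out
```

With 32 reducers the output directory will contain 32 part files, which is why the earlier reply pairs this with hadoop fs -getmerge when a single result file is needed.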