You can specify the number of reducers explicitly with the -D generic option
(this takes effect when the driver parses generic options, e.g. via ToolRunner):

hadoop jar wordcount.jar com.wc.WordCount -D mapred.reduce.tasks=n /input
Currently your word count is triggering just one reducer because the default
value of mapred.reduce.tasks is 1 in your configuration file. You can also set
it programmatically in the driver with setNumReduceTasks(n) on the job.

Hope it helps!
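As for capping concurrent tasks per node (the second part of your question), the per-TaskTracker slot limits can be set in mapred-site.xml. A minimal sketch, assuming the classic (pre-MRv2) property names; the value 2 matches the "two per node" example:

```xml
<!-- mapred-site.xml: per-node task slot limits (classic MapReduce) -->
<configuration>
  <property>
    <!-- at most two map tasks run concurrently on each TaskTracker -->
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>2</value>
  </property>
  <property>
    <!-- at most two reduce tasks run concurrently on each TaskTracker -->
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>2</value>
  </property>
</configuration>
```

With those limits in place and mapred.reduce.tasks set higher than 1, you should see multiple reduce tasks running in the JobTracker web UI, which would confirm the cluster is configured as intended.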
On Tue, Nov 29, 2011 at 8:03 PM, Hoot Thompson <[EMAIL PROTECTED]> wrote:
> I'm trying to prove that my cluster will in fact support multiple reducers,
> but the wordcount example doesn't seem to spawn more than one (1). Is that
> correct? Is there a sure-fire way to prove my cluster is configured
> correctly in terms of launching the maximum (say two per node) number of
> mappers and reducers?