MapReduce user mailing list - number of reducers


jamal sasha 2012-11-20, 19:38
Bejoy KS 2012-11-20, 20:09
Kartashov, Andy 2012-11-20, 21:50
alxsss@... 2012-11-20, 22:00
Re: number of reducers
Awesome, thanks. Works great now.

On Tuesday, November 20, 2012, Bejoy KS <[EMAIL PROTECTED]> wrote:
> Hi Sasha
>
> By default the number of reducers is set to 1. If you want more you need to specify it as:
>
> hadoop jar myJar.jar myClass -D mapred.reduce.tasks=20 ...
>
> Regards
> Bejoy KS
>
> Sent from handheld, please excuse typos.
> ________________________________
> From: jamal sasha <[EMAIL PROTECTED]>
> Date: Tue, 20 Nov 2012 14:38:54 -0500
> To: <[EMAIL PROTECTED]>
> ReplyTo: [EMAIL PROTECTED]
> Subject: number of reducers
>
>
> Hi,
>
>   I wrote a simple map reduce job in hadoop streaming.
>
>
>
> I am wondering if I am doing something wrong ..
>
> While the number of mappers is projected to be around 1700.. reducers.. just 1?
>
> It's a couple of TBs worth of data.
>
> What can I do to address this?
>
> Basically the mapper looks like this:
>
> import sys
>
> for line in sys.stdin:
>     print line
>
> And the reducer:
>
> for line in sys.stdin:
>     new_line = process_line(line)  # process_line is defined elsewhere in the script
>     print new_line
>
> Thanks
>
>
>
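For context, the job described above is a Hadoop Streaming job, so the generic -D option Bejoy mentions has to come before the streaming-specific options on the command line. A minimal sketch of such an invocation (the script names mapper.py/reducer.py, the HDFS paths, and the location of the streaming jar are assumptions, not taken from the thread; the jar path varies by Hadoop version):

    # Generic options (-D) must precede streaming options such as -input/-output.
    hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-streaming-*.jar \
        -D mapred.reduce.tasks=20 \
        -input /user/jamal/input \
        -output /user/jamal/output \
        -mapper mapper.py \
        -reducer reducer.py \
        -file mapper.py \
        -file reducer.py

With a couple of TB of input, the mapper count is driven by the number of input splits, while the reducer count stays at whatever mapred.reduce.tasks is set to (default 1), which is why the job was running with a single reducer.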
Harsh J 2012-11-21, 04:08