David Parks 2013-02-09, 03:54
Nan Zhu 2013-02-09, 03:59
RE: How can I limit reducers to one-per-node?
David Parks 2013-02-09, 04:24
Hmm, odd. I’m using AWS MapReduce, and this property is already set to 1 on my cluster by default (I’m using 15 m1.xlarge boxes, which come with 3 reducer slots configured by default).
From: Nan Zhu [mailto:[EMAIL PROTECTED]]
Sent: Saturday, February 09, 2013 10:59 AM
To: [EMAIL PROTECTED]
Subject: Re: How can I limit reducers to one-per-node?
I think setting mapred.tasktracker.reduce.tasks.maximum to 1 may meet your requirement
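For reference, in Hadoop 1.x this is a TaskTracker-level setting: it is read from mapred-site.xml on each node, requires a TaskTracker restart to take effect, and caps the reduce slots for every job on that node, not just one particular job. A minimal sketch of the relevant mapred-site.xml entry:

```xml
<!-- mapred-site.xml on each TaskTracker node -->
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>1</value>
  <description>Maximum number of reduce tasks run
    simultaneously by this TaskTracker.</description>
</property>
```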
On Friday, 8 February, 2013 at 10:54 PM, David Parks wrote:
I have a cluster of boxes with 3 reducers per node. I want to limit a particular job to only 1 reducer per node.
This job is network-IO bound, gathering images from a set of webservers.
My job has certain parameters set to meet “web politeness” standards (e.g. limits on connection count and connection frequency).
If this job runs from multiple reducers on the same node, those per-host limits will be violated. Also, this is a shared environment, and I don’t want long-running, network-bound jobs uselessly taking up all the reduce slots.
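The per-host limits described above could be enforced inside the reducer with a small throttle that spaces out requests to the same host. This is a minimal sketch (the class name, method names, and the fixed delay are illustrative assumptions, not from this thread), and it also illustrates the problem: the throttle only coordinates threads within one JVM, so two reducer tasks on the same node would each keep their own counters and jointly exceed the host limit.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Hypothetical per-host politeness throttle (illustrative, not from the
 * original thread): enforces a minimum delay between successive requests
 * to the same host. State lives in this JVM only, so separate reducer
 * tasks on one node cannot see each other's request times.
 */
public class HostThrottle {
    private final long minIntervalMillis;
    // Last (logical) request time per host.
    private final Map<String, Long> lastRequest = new HashMap<>();

    public HostThrottle(long minIntervalMillis) {
        this.minIntervalMillis = minIntervalMillis;
    }

    /** Blocks until at least minIntervalMillis has passed since the
     *  previous acquire() for this host, then records the request. */
    public synchronized void acquire(String host) {
        long now = System.currentTimeMillis();
        Long last = lastRequest.get(host);
        if (last != null) {
            long waitFor = last + minIntervalMillis - now;
            if (waitFor > 0) {
                try {
                    Thread.sleep(waitFor);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt(); // restore flag
                }
                now = last + minIntervalMillis;
            }
        }
        lastRequest.put(host, now);
    }
}
```

A reducer would call `acquire(host)` before each HTTP fetch; requests to different hosts are not delayed relative to each other.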
Nan Zhu 2013-02-09, 04:30
David Parks 2013-02-09, 04:46
Nan Zhu 2013-02-09, 04:59
Harsh J 2013-02-09, 05:18