Re: max concurrent mapper/reducer in hadoop
Arun Murthy 2011-07-22, 16:50
Moving to mapreduce-dev@, bcc general@.
Yes, as described in the bug, the CapacityScheduler has high-RAM jobs, which is a
better model for shared multi-tenant clusters. The hadoop-0.20.203
release from Apache has the most current and tested version of the
CapacityScheduler.
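[Editor's note: as a hedged illustration of the high-RAM job model mentioned above, the sketch below shows how a job in Hadoop 0.20.203 can request more memory per task than the cluster's default slot size; the specific values are hypothetical, and the property names assume the 0.20.203 CapacityScheduler memory-based scheduling support.]

```xml
<!-- mapred-site.xml (cluster side): define the per-slot memory and the
     maximum a single task may request. Values here are illustrative. -->
<property>
  <name>mapred.cluster.map.memory.mb</name>
  <value>2048</value>   <!-- memory represented by one map slot -->
</property>
<property>
  <name>mapred.cluster.max.map.memory.mb</name>
  <value>4096</value>   <!-- upper bound a high-RAM job may ask for -->
</property>

<!-- Job configuration: a high-RAM job requests two slots' worth of
     memory per map task, so the scheduler runs fewer maps concurrently
     per node instead of relying on a hard per-job task cap. -->
<property>
  <name>mapred.job.map.memory.mb</name>
  <value>4096</value>
</property>
```

The effect is an indirect concurrency limit: tasks that occupy two slots halve the number that can run on a node at once.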
On Jul 22, 2011, at 9:36 AM, Liang Chenmin <[EMAIL PROTECTED]> wrote:
> Hi all,
> I am using the Hadoop 0.20.2 CDH3 version. The old method of setting the max
> concurrent mappers/reducers in code no longer works. I saw a patch for this,
> but its current status is "won't fix". Is there any update on this? I
> am using the Fair Scheduler; should I use the Capacity Scheduler instead?
> chenmin liang
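[Editor's note: for readers on the Fair Scheduler path asked about above, per-pool caps on concurrent tasks can be set declaratively in the scheduler's allocations file rather than in job code, provided the Fair Scheduler build in use supports the maxMaps/maxReduces pool elements; the pool name and limits below are hypothetical.]

```xml
<?xml version="1.0"?>
<!-- fair-scheduler.xml (allocations file): cap concurrent tasks per pool.
     Jobs submitted to the "analytics" pool never run more than 20 map
     tasks or 10 reduce tasks at the same time across the cluster. -->
<allocations>
  <pool name="analytics">
    <maxMaps>20</maxMaps>
    <maxReduces>10</maxReduces>
  </pool>
</allocations>
```

Since the limit lives in cluster-side configuration, it survives even when per-job settings in code are ignored by the scheduler.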