Also, generating random keys/partitions can be problematic, though the problems are rare. A mapper can be restarted after it finishes successfully if the machine it was on goes down or has other problems, so that the reducers are not able to fetch that mapper's output data. If this happens after some of the reducers have fetched its output, but not all of them, and the restarted mapper partitions things differently, some records may show up twice in your output and others not at all.
If you do something like random partitioning, make sure that you use a constant seed so that the partitioning is deterministic across task attempts.
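One way to get that determinism is to skip `Random` entirely and derive the shard from a stable hash of the record itself, so a re-run mapper always produces the same partitioning. A minimal sketch (the `shardFor` method, the CRC32 choice, and `numShards` are all illustrative assumptions, not anything from Hadoop itself):

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

public class DeterministicShard {
    // Hypothetical helper: map a record key to a shard using a stable hash
    // instead of an unseeded Random, so a restarted mapper assigns every
    // record to the same shard as the original attempt did.
    static int shardFor(String recordKey, int numShards) {
        CRC32 crc = new CRC32();
        crc.update(recordKey.getBytes(StandardCharsets.UTF_8));
        // getValue() is an unsigned 32-bit value in a long, so the
        // remainder is always non-negative.
        return (int) (crc.getValue() % numShards);
    }

    public static void main(String[] args) {
        // Same input, same shard - across JVMs and across task retries.
        System.out.println(shardFor("record-42", 10)
                == shardFor("record-42", 10)); // prints true
    }
}
```

A seeded `Random` also works, but only if the seed and the record order are identical on retry; hashing the record sidesteps the ordering question entirely.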
On 4/27/12 4:24 AM, "Bejoy KS" <[EMAIL PROTECTED]> wrote:
A custom Partitioner class can control the flow of keys to the desired reducer. It gives you more control over which keys go to which reducer.
Sent from handheld, please excuse typos.
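The idea above can be sketched as follows. A real implementation would extend `org.apache.hadoop.mapreduce.Partitioner<K,V>` and override `getPartition(key, value, numPartitions)`; since the Hadoop classes aren't available here, the same logic is shown as a plain static method, with the shard-number-as-key scheme being an assumption about the job's key type:

```java
public class ShardPartitioner {
    // Sketch mirroring Partitioner.getPartition(key, value, numPartitions):
    // if the map output key is already the target shard number, route it
    // straight to the reducer with that index.
    static int getPartition(int shardKey, int numPartitions) {
        return shardKey % numPartitions;
    }

    public static void main(String[] args) {
        // With numReduceTasks == numberOfShards, shard key 7 goes to reducer 7.
        System.out.println(getPartition(7, 10)); // prints 7
    }
}
```

The job would register the real class with `job.setPartitionerClass(...)`, and setting the number of reduce tasks equal to the number of shards gives exactly one output file per shard.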
From: mete <[EMAIL PROTECTED]>
Date: Fri, 27 Apr 2012 09:19:21
To: <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Subject: reducers and data locality
I have a lot of input splits (10k-50k, 128 MB blocks) which contain text files. I need to process those line by line, then copy the result into roughly equal-sized "shards".
So I generate a random key (from the range [0:numberOfShards]) which is used to route the map output to different reducers, so that the shard sizes come out more or less equal.
I know that this is not really efficient, and I was wondering if I could somehow control how keys are routed.
For example, could I generate the random keys with hostname prefixes and control which keys are sent to each reducer? What do you think?
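The hostname-prefix idea could look something like the sketch below: the key carries a host prefix for bookkeeping, but the partitioner only looks at the shard suffix, so every record for a given shard still lands on one reducer. The `"host:shard"` key format and the method shape are assumptions for illustration; a real version would live inside a Hadoop `Partitioner` subclass:

```java
public class PrefixedKeyPartitioner {
    // Hypothetical sketch: keys look like "workerA:3", where the part after
    // ':' is the shard number. Partition on that suffix alone, so the host
    // prefix never changes which reducer a shard is sent to.
    static int getPartition(String key, int numPartitions) {
        int shard = Integer.parseInt(key.substring(key.indexOf(':') + 1));
        return shard % numPartitions;
    }

    public static void main(String[] args) {
        // Different hosts, same shard suffix -> same reducer.
        System.out.println(getPartition("workerA:3", 10)); // prints 3
        System.out.println(getPartition("workerB:3", 10)); // prints 3
    }
}
```

Note that this controls which *reducer* handles each shard, not which *machine* runs that reducer; Hadoop schedules reduce tasks without data locality, since every reducer fetches from all mappers anyway.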