In MR2, to have more mappers run in parallel per NM, set the memory request
for each map task so that multiple requests fit within the NM's configured
memory allowance. For example, if the NM memory is set to 16 GB (assuming
just 1 NM in the cluster) and I submit a job with mapreduce.map.memory.mb and
yarn.app.mapreduce.am.resource.mb both set to 1 GB, then the NM can execute
15 maps in parallel, each consuming up to 1 GB of memory, while the
remaining 1 GB goes to the AM coordinating those executions.
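The arithmetic above can be sketched as follows; this is just an
illustration of how containers fit into a NodeManager's memory allowance,
and the function name is made up, not a YARN API:

```python
def parallel_maps(nm_memory_mb, am_memory_mb, map_memory_mb):
    """How many map containers fit alongside the AM container on one NM."""
    return (nm_memory_mb - am_memory_mb) // map_memory_mb

# 16 GB NodeManager, 1 GB AM, 1 GB per map -> 15 concurrent maps
print(parallel_maps(16 * 1024, 1024, 1024))  # -> 15
```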
On Sat, Mar 16, 2013 at 10:16 AM, yypvsxf19870706 <[EMAIL PROTECTED]
> I think I have got it. Thank you.
> Sent from my iPhone
> On 2013-3-15 at 18:32, Zheyi RONG <[EMAIL PROTECTED]> wrote:
> Indeed you cannot explicitly set the number of mappers, but you can still
> gain some control over it by setting mapred.max.split.size or
> mapred.min.split.size.
> For example, if you have a file of 10 GB (10737418240 B) and would like 10
> mappers, then each mapper has to deal with 1 GB of data.
> According to "splitsize = max(minimumSize, min(maximumSize, blockSize))",
> you can set mapred.min.split.size=1073741824 (1GB), i.e.
> $ hadoop jar yourjar -Dmapred.min.split.size=1073741824 yourargs
> It is well explained in thread:
> On Fri, Mar 15, 2013 at 8:49 AM, YouPeng Yang <[EMAIL PROTECTED]>wrote:
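The split-size rule quoted above can be sketched as a quick calculation;
the 128 MB block size and Hadoop's default maximum (Long.MAX_VALUE) are
assumed values for illustration:

```python
def split_size(minimum_size, maximum_size, block_size):
    """splitsize = max(minimumSize, min(maximumSize, blockSize))"""
    return max(minimum_size, min(maximum_size, block_size))

GB = 1024 ** 3
LONG_MAX = 2 ** 63 - 1  # Hadoop's default mapred.max.split.size

# 10 GB file, mapred.min.split.size = 1 GB, 128 MB HDFS blocks:
size = split_size(1 * GB, LONG_MAX, 128 * 1024 * 1024)
print(size == 1 * GB)       # the 1 GB minimum wins over the block size
print((10 * GB) // size)    # -> 10 mappers
```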