A particular query that I run fails with the following error:
Job 18: Map: 2  Reduce: 1  Cumulative CPU: 3.67 sec  HDFS Read: 0  HDFS Write: 0  SUCCESS
Exception in thread "main" org.apache.hadoop.mapreduce.counters.LimitExceededException: Too many counters: 121 max=120
Googling suggests that I should increase "mapreduce.job.counters.limit", but also that the number of counters a job uses affects the memory used by the JobTracker, so I shouldn't raise this limit too high.
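
For reference, here is roughly what I was planning to try, assuming the property name above is the right one for my Hadoop version and using 200 purely as an example value. My understanding is that the JobTracker reads this at startup, so it would need to go in mapred-site.xml followed by a restart, rather than being set per-job:

    <!-- mapred-site.xml on the JobTracker (restart required) -->
    <property>
      <name>mapreduce.job.counters.limit</name>
      <!-- example value; the default is 120 -->
      <value>200</value>
    </property>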
Is there a rule of thumb for what this number should be as a function of JobTracker memory? That is, should I be cautious and increase it by 5 at a time, or could I just double it?