Hive, mail # user


Krishna Rao 2012-12-31, 15:45
A particular query that I run fails with the following error:

***
Job 18: Map: 2  Reduce: 1   Cumulative CPU: 3.67 sec   HDFS Read: 0 HDFS Write: 0 SUCCESS
Exception in thread "main" org.apache.hadoop.mapreduce.counters.LimitExceededException: Too many counters: 121 max=120
 ...
***

Googling suggests that I should increase "mapreduce.job.counters.limit", and that, because the number of counters a job uses affects the memory used by the JobTracker, I shouldn't increase this limit too far.
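
For reference, this is the change I am considering (assuming the limit really has to go into mapred-site.xml on the JobTracker, since it seems to be read at startup rather than per job; 200 is just a placeholder value):

***
<!-- mapred-site.xml on the JobTracker; the value here is only an example -->
<property>
  <name>mapreduce.job.counters.limit</name>
  <value>200</value>
</property>
***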

Is there a rule of thumb for what this number should be as a function of JobTracker memory? That is, should I be cautious and increase it by 5 at a time, or could I just double it?
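
To make the question concrete, here is the back-of-envelope arithmetic I have in mind; every number in it is a guess, not something I have measured:

***
// Rough estimate of extra JobTracker heap from raising the counter limit.
// All constants below are assumptions for illustration only.
public class CounterMemoryEstimate {
    public static void main(String[] args) {
        long bytesPerCounter = 100;          // guessed overhead per counter (name + value)
        long countersPerTask = 200;          // a proposed new limit
        long retainedTaskAttempts = 100000;  // task attempts the JobTracker keeps in memory
        long totalBytes = bytesPerCounter * countersPerTask * retainedTaskAttempts;
        System.out.printf("~%.1f GB of JobTracker heap for counters%n", totalBytes / 1e9);
    }
}
***

If the cost really does scale linearly like this, then doubling the limit should only double the counter-related share of the heap, but I would like to confirm that before changing anything.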

Cheers,

Krishna