

A particular query that I run fails with the following error:

***
Job 18: Map: 2  Reduce: 1   Cumulative CPU: 3.67 sec   HDFS Read: 0 HDFS Write: 0 SUCCESS
Exception in thread "main" org.apache.hadoop.mapreduce.counters.LimitExceededException: Too many counters: 121 max=120
 ...
***

Googling suggests that I should increase "mapreduce.job.counters.limit", and that the number of counters a job uses affects the amount of memory used by the JobTracker, so I shouldn't raise the limit too high.
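
For reference, a minimal sketch of how I understand the limit would be raised, assuming the property is read from mapred-site.xml on the cluster (the value 200 is just an illustration; depending on the Hadoop version the JobTracker may need a restart, and per-job overrides may be ignored):

***
<!-- mapred-site.xml: raise the MapReduce counter limit cluster-wide -->
<property>
  <name>mapreduce.job.counters.limit</name>
  <value>200</value>
</property>
***

Some versions also seem to honor a per-session override from the Hive CLI, e.g. "set mapreduce.job.counters.limit=200;", but I'm not sure that takes effect without the cluster-side change.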

Is there a rule of thumb for what this number should be as a function of JobTracker memory? That is, should I be cautious and increase it by 5 at a time, or could I just double it?

Cheers,

Krishna