Hive user mailing list - Job counters limit exceeded exception


Krishna Rao 2013-01-02, 09:35
Alexander Alten-Lorenz 2013-01-02, 11:20

Re: Job counters limit exceeded exception
Krishna Rao 2013-01-04, 14:28
I ended up increasing the counters limit to 130, which solved my issue.
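
A minimal sketch of the change, assuming the limit is set cluster-wide in mapred-site.xml (the property name is the one from my original message below; whether a JobTracker restart is needed varies by Hadoop version):

  <!-- mapred-site.xml: raise the per-job counter limit (default 120) -->
  <property>
    <name>mapreduce.job.counters.limit</name>
    <value>130</value>
  </property>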

Do you know of any good sources for learning how to decipher Hive's EXPLAIN output?

Cheers,

Krishna
On 2 January 2013 11:20, Alexander Alten-Lorenz <[EMAIL PROTECTED]> wrote:

> Hi,
>
> This happens when operators are used in queries (Hive operators). Hive
> creates 4 counters per operator, up to a maximum of 1000, plus a few
> additional counters for things like file reads/writes, partitions, and
> tables. So the number of counters required depends on the query.
>
> Running "EXPLAIN EXTENDED" and piping the output through
> "grep -i operator | wc -l" prints the number of operators used. Use this
> value to tweak the MR settings carefully.
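>
> A minimal sketch of that count from the shell (the query is a
> placeholder and the grep pattern is an assumption; operator names in
> the plan output vary by Hive version):
>
>   # hypothetical query; dump the extended plan, then count operator lines
>   hive -e "EXPLAIN EXTENDED SELECT col FROM some_table" > plan.txt
>   grep -ci 'operator' plan.txt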
>
> Praveen has a good explanation about counters online:
>
> http://www.thecloudavenue.com/2011/12/limiting-usage-counters-in-hadoop.html
>
> Rule of thumb for Hive:
> count of operators * 4 + n (n for file ops and other stuff).
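>
> For example (hypothetical plan size): 30 operators would need about
> 30 * 4 = 120 counters before the file/partition extras, which is
> already the default limit of 120 reported in the exception below.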
>
> cheers,
>  Alex
>
>
> On Jan 2, 2013, at 10:35 AM, Krishna Rao <[EMAIL PROTECTED]> wrote:
>
> > A particular query that I run fails with the following error:
> >
> > ***
> > Job 18: Map: 2  Reduce: 1   Cumulative CPU: 3.67 sec   HDFS Read: 0 HDFS
> > Write: 0 SUCCESS
> > Exception in thread "main"
> > org.apache.hadoop.mapreduce.counters.LimitExceededException: Too many
> > counters: 121 max=120
> > ...
> > ***
> >
> > Googling suggests that I should increase "mapreduce.job.counters.limit",
> > and that the number of counters a job uses affects the memory used by
> > the JobTracker, so I shouldn't increase this number too much.
> >
> > Is there a rule of thumb for what this number should be as a function
> > of JobTracker memory? That is, should I be cautious and increase it by
> > 5 at a time, or could I just double it?
> >
> > Cheers,
> >
> > Krishna
>
> --
> Alexander Alten-Lorenz
> http://mapredit.blogspot.com
> German Hadoop LinkedIn Group: http://goo.gl/N8pCF
>
>