Re: resetting conf/ parameters in a live cluster.
Jay,

Oddly, a counters limit change (an increase, anyway) needs to be
applied at the JT, the TT and *also* at the client to take real effect.
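A minimal sketch of the client-side half, assuming the stable 0.20-era
API (class and job names here are placeholders); the same property then
goes into mapred-site.xml on the JobTracker and TaskTracker nodes, which
need a restart to pick it up:

    // Sketch only: raise the counter limit on the client/submitting side.
    // The JT and TT daemons need the same property in their own configs.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class CounterLimitExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Per-job counter limit, read by the submitting client as well.
        conf.setInt("mapreduce.job.counters.limit", 15000);

        Job job = new Job(conf, "counter-limit-example"); // placeholder name
        // ... set mapper/reducer/input/output as usual, then:
        // job.waitForCompletion(true);
      }
    }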

On Sat, Aug 18, 2012 at 8:31 PM, Jay Vyas <[EMAIL PROTECTED]> wrote:
> Hi guys:
>
> I've reset my max counters as follows:
>
> ./hadoop-site.xml:
>   <property>
>     <name>mapreduce.job.counters.limit</name>
>     <value>15000</value>
>   </property>
>
> However, a job is failing (after reducers get to 100%!) at the very end,
> due to an exceeded counter limit.  I've confirmed in my
> code that the correct counter parameter is indeed being set.
>
> My hypothesis: Somehow, the name node counters parameter is effectively
> being transferred to the slaves... BUT the name node *itself* hasn't updated its
> maximum counter allowance, so it throws an exception at the end of the job.
> That is, the dying message from hadoop is
>
> " max counter limit 120 exceeded.... "
>
> I've confirmed in my job that the counter parameter is correct when the
> job starts... However, somehow the "120 limit exceeded" exception is
> still thrown.
>
> This is in Elastic MapReduce, Hadoop 0.20.205.
>
> --
> Jay Vyas
> MMSB/UCHC

--
Harsh J