Hadoop >> mail # user >> resetting conf/ parameters in a live cluster


Jay Vyas 2012-08-18, 15:01
Re: resetting conf/ parameters in a live cluster
Jay,

Oddly, a counters-limit change (an increase, anyway) needs to be
applied at the JT, the TTs, and *also* at the client to take real effect.
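For illustration, applying the change on every side might look like the fragment below. This is a sketch assuming an MRv1 (0.20.x) cluster where the daemons can be restarted; the property name and value are the ones from the thread:

```xml
<!-- mapred-site.xml on the JobTracker and on every TaskTracker
     (restart the daemons afterwards), and in the client's config too,
     so all three sides agree on the new limit -->
<property>
  <name>mapreduce.job.counters.limit</name>
  <value>15000</value>
</property>
```

A client whose driver goes through ToolRunner/GenericOptionsParser can also pass `-Dmapreduce.job.counters.limit=15000` on the command line, but that only covers the client side; per the advice above, the JT and TTs still need the value in their own configs.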

On Sat, Aug 18, 2012 at 8:31 PM, Jay Vyas <[EMAIL PROTECTED]> wrote:
> Hi guys:
>
> I've reset my max counters as follows :
>
> ./hadoop-site.xml:
>  <property><name>mapreduce.job.counters.limit</name><value>15000</value></property>
>
> However, a job is failing (after reducers get to 100%!) at the very end,
> due to exceeded counter limit.  I've confirmed in my
> code that indeed the correct counter parameter is being set.
>
> My hypothesis: Somehow, the name node counters parameter is effectively
> being transferred to slaves... BUT the name node *itself* hasn't updated its
> maximum counter allowance, so it throws an exception at the end of the job,
> that is, the dying message from hadoop is
>
> " max counter limit 120 exceeded.... "
>
> I've confirmed in my job that the counter parameter is correct, when the
> job starts... However... somehow the "120 limit exceeded" exception is
> still thrown.
>
> This is in Elastic MapReduce, Hadoop 0.20.205
>
> --
> Jay Vyas
> MMSB/UCHC

--
Harsh J
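The failure mode Jay describes, reducers reaching 100% and the job dying only at the very end, is what you would expect from a hard cap on the number of *distinct* counters that is enforced where counters are aggregated: nothing fails until the job's combined counter set finally crosses the limit. A toy sketch of such a cap (this is not Hadoop's actual implementation; the class name is hypothetical and the default of 120 is taken from the error message quoted above):

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of a per-job counter table with a hard cap on distinct
// counters. Incrementing an existing counter is always allowed;
// registering a NEW counter past the cap throws, mirroring the
// "max counter limit 120 exceeded...." failure from the thread.
class ToyCounters {
    private final int limit;
    private final Map<String, Long> counters = new HashMap<>();

    ToyCounters(int limit) {
        this.limit = limit;
    }

    void increment(String name, long by) {
        if (!counters.containsKey(name) && counters.size() >= limit) {
            throw new IllegalStateException(
                "max counter limit " + limit + " exceeded....");
        }
        counters.merge(name, by, Long::sum);
    }

    int size() {
        return counters.size();
    }
}
```

Note that in this model the side doing the aggregation decides when to throw, which matches Harsh's point: raising the limit only at the client changes what the client *submits*, not what the JT side will *accept*.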
Jay Vyas 2012-08-18, 15:16
Harsh J 2012-08-18, 15:48