I've reset my max counters as follows:
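(For reference, this is the kind of setting I mean; I'm going from memory on the exact 0.20.x parameter name, `mapreduce.job.counters.limit`, whose default is 120, so treat the name as an assumption:)

```xml
<!-- mapred-site.xml: raise the counter limit (default 120 in Hadoop 0.20.x).
     Parameter name is my best recollection for 0.20.205. -->
<property>
  <name>mapreduce.job.counters.limit</name>
  <value>512</value>
</property>
```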
However, a job is failing at the very end (after the reducers reach 100%!)
due to an exceeded counter limit. I've confirmed in my code that the
correct counter parameter is indeed being set.
My hypothesis: somehow the name node's counters parameter is effectively
being transferred to the slaves... BUT the name node *itself* hasn't updated
its maximum counter allowance, so it throws an exception at the end of the
job. That is, the dying message from Hadoop is
" max counter limit 120 exceeded.... "
I've confirmed in my job that the counter parameter is correct when the
job starts... However, somehow the "120 limit exceeded" exception is
still being thrown.
This is on Elastic MapReduce, Hadoop 0.20.205.
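In case it matters: on EMR I believe a cluster-wide override has to be applied at launch via a configure-hadoop bootstrap action, rather than per-job. A sketch of what I mean (the exact CLI flags and S3 path are from memory, so treat them as assumptions):

```
# Hypothetical sketch: set a mapred-site.xml value cluster-wide at launch
# using EMR's configure-hadoop bootstrap action (-m targets mapred-site.xml).
elastic-mapreduce --create --alive \
  --bootstrap-action s3://elasticmapreduce/bootstrap-actions/configure-hadoop \
  --args "-m,mapreduce.job.counters.limit=512"
```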