HBase >> mail # user >> Exceeded limits on number of counters
Re: Exceeded limits on number of counters
Sorry, I pressed send by mistake on my mobile phone. JM has already provided
the solution to you.
On Tue, Jul 2, 2013 at 2:59 PM, Anil Gupta <[EMAIL PROTECTED]> wrote:

> In mapreduce, there is a proper
>
> Best Regards,
> Anil
>
> On Jul 1, 2013, at 11:43 PM, Glen Arrowsmith <[EMAIL PROTECTED]>
> wrote:
>
> > Hi,
> > I'm getting an error on a map reduce task that used to work just fine for
> a few weeks.
> >
> > Exceeded limits on number of counters - Counters=120 Limit=120
> >
> > The full stderr output is at the bottom.
> >
> > I'm using Amazon's Elastic MapReduce.
> > The following command starts the job
> > elastic-mapreduce --create --name "REGISTER table to S3 v2"
> --num-instances 6 --with-supported-products mapr-m5 --instance-type
> m1.xlarge --hive-script --arg s3://censored/dynamo-to-s3-v2.h --args
> -d,OUTPATH=s3://censored/out/,-d,INTABLE="REGISTER"
> >
> > From what I've read you can't change the counter limit without
> recompiling.
> >
> > Originally I had "fixed" this problem by upgrading from standard map
> reduce instances to mapr-m5 instances, but that has now stopped working for
> some reason.
> >
> > Thanks very much in advance for your help
> >
> > Glen Arrowsmith
> > Systems Architect
> >
> >
> > /mnt/var/lib/hadoop/steps/2/./hive-script:326: warning: Insecure world
> writable dir /home/hadoop/bin in PATH, mode 040757
> > Logging initialized using configuration in
> file:/home/hadoop/.versions/hive-0.8.1/conf/hive-log4j.properties
> > Hive history
> file=/mnt/var/lib/hive_081/tmp/history/hive_job_log_hadoop_201307020009_133883985.txt
> > OK
> > [snip]
> > Time taken: 0.389 seconds
> > OK
> > Time taken: 0.382 seconds
> > Total MapReduce jobs = 12
> > Launching Job 1 out of 12
> > Number of reduce tasks not specified. Defaulting to jobconf value of: 10
> > In order to change the average load for a reducer (in bytes):
> >  set hive.exec.reducers.bytes.per.reducer=<number>
> > In order to limit the maximum number of reducers:
> >  set hive.exec.reducers.max=<number>
> > In order to set a constant number of reducers:
> >  set mapred.reduce.tasks=<number>
> > Starting Job = job_201307020007_0001, Tracking URL = http://ip-10-151-78-231.ec2.internal:9100/jobdetails.jsp?jobid=job_201307020007_0001
> > Kill Command = /opt/mapr/hadoop/hadoop-0.20.2/bin/../bin/hadoop job
>  -Dmapred.job.tracker=maprfs:/// -kill job_201307020007_0001
> > Hadoop job information for Stage-12: number of mappers: 23; number of
> reducers: 10
> > 2013-07-02 00:09:30,325 Stage-12 map = 0%,  reduce = 0%
> > org.apache.hadoop.mapred.Counters$CountersExceededException: Error:
> Exceeded limits on number of counters - Counters=120 Limit=120
> >                at
> org.apache.hadoop.mapred.Counters$Group.getCounterForName(Counters.java:318)
> >                at
> org.apache.hadoop.mapred.Counters.findCounter(Counters.java:439)
> >                at
> org.apache.hadoop.mapred.Counters.getCounter(Counters.java:503)
> >                at
> org.apache.hadoop.hive.ql.exec.Operator.updateCounters(Operator.java:1150)
> >                at
> org.apache.hadoop.hive.ql.exec.ExecDriver.updateCounters(ExecDriver.java:1281)
> >                at
> org.apache.hadoop.hive.ql.exec.HadoopJobExecHelper.updateCounters(HadoopJobExecHelper.java:85)
> >                at
> org.apache.hadoop.hive.ql.exec.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:312)
> >                at
> org.apache.hadoop.hive.ql.exec.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:685)
> >                at
> org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:494)
> >                at
> org.apache.hadoop.hive.ql.exec.MapRedTask.execute(MapRedTask.java:136)
> >                at
> org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:133)
> >                at
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
> >                at
> org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:47)
> > Ended Job = job_201307020007_0001 with exception

Thanks & Regards,
Anil Gupta
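
[Editor's note for readers hitting the same error: whether the counter cap can be raised without recompiling depends on the Hadoop version. Vanilla 0.20.2 hardcodes the 120-counter limit, but later 1.x-line releases read it from configuration when the daemons start, and Hadoop 2.x exposes a per-job setting. A hedged sketch, assuming a Hadoop 1.x-style cluster; property names vary across versions and distributions:]

```xml
<!-- mapred-site.xml on the JobTracker/TaskTracker nodes.
     In Hadoop 1.x this is read at daemon startup, so a restart is required;
     in Hadoop 2.x the per-job equivalent is mapreduce.job.counters.max. -->
<property>
  <name>mapreduce.job.counters.limit</name>
  <value>512</value>
</property>
```

On a managed distribution such as EMR or MapR, the property may need to be set through a bootstrap action or the distribution's own configuration mechanism rather than by editing mapred-site.xml by hand.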