Re: ERROR 6015: During execution, encountered a Hadoop error | ERROR 1066: Unable to open iterator for alias grouped_records
It seems like a problem with the Hadoop configuration that is probably not
specific to Pig. Are you able to run other MR jobs, such as the wordcount
example?
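
For example, something like the following exercises HDFS and MapReduce end to end
(the paths and the examples jar name are assumptions based on a stock 0.20.2
tarball, so adjust them to your install):

  bin/hadoop fs -mkdir /tmp/wordcount-in
  bin/hadoop fs -put conf/*.xml /tmp/wordcount-in
  bin/hadoop jar hadoop-0.20.2-examples.jar wordcount /tmp/wordcount-in /tmp/wordcount-out
  bin/hadoop fs -cat /tmp/wordcount-out/part-*

If that job also fails in its reduce phase, the problem is in the Hadoop setup
rather than in Pig.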

I searched for the exception string and found a few matches, including:

http://mail-archives.apache.org/mod_mbox/hadoop-mapreduce-user/201007.mbox/%[EMAIL PROTECTED]%3E

Thanks,
Thejas
On 12/13/10 6:09 AM, "[EMAIL PROTECTED]" <[EMAIL PROTECTED]> wrote:

> Thanks Thejas,
>
> Reduce Task Logs:
>
> 2010-12-13 18:15:08,340 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=SHUFFLE, sessionId=
> 2010-12-13 18:15:09,062 INFO org.apache.hadoop.mapred.ReduceTask: ShuffleRamManager: MemoryLimit=141937872, MaxSingleShuffleLimit=35484468
> 2010-12-13 18:15:09,076 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201012121200_0018_r_000000_3 Thread started: Thread for merging on-disk files
> 2010-12-13 18:15:09,076 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201012121200_0018_r_000000_3 Thread waiting: Thread for merging on-disk files
> 2010-12-13 18:15:09,081 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201012121200_0018_r_000000_3 Thread started: Thread for merging in memory files
> 2010-12-13 18:15:09,082 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201012121200_0018_r_000000_3 Need another 2 map output(s) where 0 is already in progress
> 2010-12-13 18:15:09,083 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201012121200_0018_r_000000_3 Scheduled 0 outputs (0 slow hosts and0 dup hosts)
> 2010-12-13 18:15:09,083 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201012121200_0018_r_000000_3 Thread started: Thread for polling Map Completion Events
> 2010-12-13 18:15:09,092 FATAL org.apache.hadoop.mapred.TaskRunner: attempt_201012121200_0018_r_000000_3 GetMapEventsThread Ignoring exception : java.lang.NullPointerException
>   at java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:768)
>   at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$GetMapEventsThread.getMapCompletionEvents(ReduceTask.java:2683)
>   at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$GetMapEventsThread.run(ReduceTask.java:2605)
> 2010-12-13 18:15:11,389 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=CLEANUP, sessionId=
> 2010-12-13 18:15:12,107 INFO org.apache.hadoop.mapred.TaskRunner: Runnning cleanup for the task
> 2010-12-13 18:15:12,119 INFO org.apache.hadoop.mapred.TaskRunner: Task:attempt_201012121200_0018_r_000000_3 is done. And is in the process of commiting
> 2010-12-13 18:15:12,138 INFO org.apache.hadoop.mapred.TaskRunner: Task 'attempt_201012121200_0018_r_000000_3' done.
>
> ________________________________
> From: Thejas M Nair [mailto:[EMAIL PROTECTED]]
> Sent: Monday, December 13, 2010 7:32 PM
> To: [EMAIL PROTECTED]; Deepak Choudhary N (WT01 - Product Engineering
> Services)
> Subject: Re: ERROR 6015: During execution, encountered a Hadoop error | ERROR
> 1066: Unable to open iterator for alias grouped_records
>
> From the JobTracker web UI, you should be able to see the MR job run by this Pig
> query. If you follow the links, you should be able to find the reduce task
> logs.
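>
> The task logs also normally end up on the local disk of the TaskTracker node that
> ran the attempt. The location below is an assumption based on the default 0.20
> layout, where HADOOP_LOG_DIR defaults to $HADOOP_HOME/logs, and <reduce-attempt-id>
> stands for the attempt name shown in the web UI:
>
>   ls $HADOOP_HOME/logs/userlogs/<reduce-attempt-id>/
>   cat $HADOOP_HOME/logs/userlogs/<reduce-attempt-id>/syslog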
>
> Thanks,
> Thejas
>
>
> On 12/13/10 5:11 AM, "[EMAIL PROTECTED]" <[EMAIL PROTECTED]> wrote:
>
> My Script:
>
> records = LOAD 'hdfs://hadoop.namenode:54310/data' USING PigStorage(',')
> AS (Year:int, Month:int,DayofMonth:int,DayofWeek:int);
> grouped_records = GROUP records BY Month;
> DUMP grouped_records;
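>
> To check the script itself without the cluster, the same statements can be run in
> Pig's local mode against a small sample of the data (the file names below are
> hypothetical, and the LOAD path has to point at a local file rather than the
> hdfs:// URI):
>
>   head -1000 /local/copy/of/the/data > sample.csv
>   pig -x local group_by_month.pig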
>
> Hadoop Version: 0.20.2
> Pig Version: 0.7.0
>
> I couldn't find the reduce task logs. Where are they generated?
>
> Surprisingly, Pig jobs do not seem to generate any Hadoop (namenode, datanode,
> tasktracker, etc.) logs.
>
>
> -----Original Message-----
> From: Dmitriy Ryaboy [mailto:[EMAIL PROTECTED]]
> Sent: Monday, December 13, 2010 4:51 PM
> To: [EMAIL PROTECTED]
> Subject: Re: ERROR 6015: During execution, encountered a Hadoop error | ERROR
> 1066: Unable to open iterator for alias grouped_records