Hive >> mail # user >> What is HIVE_PLAN?


Li jianwei 2013-06-07, 01:21
Re: What is HIVE_PLAN?
It's kept in the JobConf as part of the plan file name.
Check the link below:
http://hdfs-namenode:50030/jobconf.jsp?jobid=job_201306070901_0001

and find hive.exec.plan and hive.exec.scratchdir.
Do you have the proper read and write permissions?
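For example, you can check both settings and then look at the scratch directory itself from the Hive CLI (or a small .q script). This is only a rough sketch; /tmp/hive-jianwei below is a placeholder for whatever value hive.exec.scratchdir actually prints on your installation:

    -- show where Hive keeps the serialized query plan and its scratch files
    set hive.exec.plan;
    set hive.exec.scratchdir;

    -- list the scratch directory reported above and check its owner/permissions
    -- (/tmp/hive-jianwei is only a placeholder; use the value printed above)
    dfs -ls /tmp/hive-jianwei;

Note also that the stack trace below opens the HIVE_PLAN file with a plain FileInputStream, i.e. as a local file on the task node, so it is worth checking that the local temporary directories used by the TaskTracker on each Windows machine are readable and writable by the user running the job as well.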
2013/6/7 Li jianwei <[EMAIL PROTECTED]>

> Hi, everyone:
> I have set up a Hadoop cluster on THREE Windows 7 machines with Cygwin, and
> ran several tests with hadoop-test-1.1.2.jar and hadoop-examples-1.1.2.jar,
> all of which passed.
> Then I tried to run Hive 0.10.0 on my cluster (also in Cygwin). I could
> create tables, show them, load data into them and "select *" from them. But
> when I tried "select count(*)" from my table, I got the following
> exception. My question is: what is that HIVE_PLANxxxxxx file? How is it
> created? Where is it placed?
> Would anyone give me some information?
> ......
> Total MapReduce jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks determined at compile time: 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
>   set mapred.reduce.tasks=<number>
> Starting Job = job_201306070901_0001, Tracking URL =
> http://hdfs-namenode:50030/jobdetails.jsp?jobid=job_201306070901_0001
> Kill Command = C:\hadoop-1.1.2\\bin\hadoop.cmd job  -kill
> job_201306070901_0001
> Hadoop job information for Stage-1: number of mappers: 13; number of
> reducers: 1
> 2013-06-07 09:02:19,296 Stage-1 map = 0%,  reduce = 0%
> 2013-06-07 09:02:51,745 Stage-1 map = 100%,  reduce = 100%
> Ended Job = job_201306070901_0001 with errors
> Error during job, obtaining debugging information...
> Job Tracking URL:
> http://hdfs-namenode:50030/jobdetails.jsp?jobid=job_201306070901_0001
> Examining task ID: task_201306070901_0001_m_000014 (and more) from job
> job_201306070901_0001
>
> Task with the most failures(4):
> -----
> Task ID:
>   task_201306070901_0001_m_000006
>
> URL:
>
> http://hdfs-namenode:50030/taskdetails.jsp?jobid=job_201306070901_0001&tipid=task_201306070901_0001_m_000006
> -----
> Diagnostic Messages for this Task:
> java.lang.RuntimeException: java.io.FileNotFoundException:
> HIVE_PLANc632c8e2-257d-4cd4-b833-a09c7d249b2c (Access is denied)
>         at
> org.apache.hadoop.hive.ql.exec.Utilities.getMapRedWork(Utilities.java:226)
>         at
> org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:255)
>         at
> org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:381)
>         at
> org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:374)
>         at
> org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:536)
>         at
> org.apache.hadoop.mapred.MapTask$TrackedRecordReader.<init>(MapTask.java:197)
>         at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:418)
>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
>         at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Unknown Source)
>         at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
>         at org.apache.hadoop.mapred.Child.main(Child.java:249)
> Caused by: java.io.FileNotFoundException:
> HIVE_PLANc632c8e2-257d-4cd4-b833-a09c7d249b2c (Access is denied)
>         at java.io.FileInputStream.open(Native Method)
>         at java.io.FileInputStream.<init>(Unknown Source)
>         at java.io.FileInputStream.<init>(Unknown Source)
>         at
> org.apache.hadoop.hive.ql.exec.Utilities.getMapRedWork(Utilities.java:217)
>         ... 12 more
>
>
> FAILED: Execution Error, return code 2 from
> org.apache.hadoop.hive.ql.exec.MapRedTask
> MapReduce Jobs Launched:

Best wishes!
Fangkun.Cao
Li jianwei 2013-06-07, 06:13
FangKun Cao 2013-06-07, 07:15
Li jianwei 2013-06-07, 08:34