The plan will be serialized to the default HDFS instance and put in the
distributed cache. So please check the distributed cache local directory of
every tasktracker node. Commonly it is somewhere like the following:
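(A hedged sketch; the base path below is an assumption taken from the
Hadoop 1.x default for mapred.local.dir, and under Cygwin it may map to a
different location:)

    # Run on each tasktracker node (inside Cygwin).
    # /tmp/hadoop-*/mapred/local is an assumed default for mapred.local.dir;
    # adjust it to whatever your mapred-site.xml actually sets.
    find /tmp/hadoop-*/mapred/local -name 'HIVE_PLAN*' -ls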
2013/6/7 Li jianwei <[EMAIL PROTECTED]>
> Hi FangKun:
> Thanks for your reply!
> I ran the "select count(*)" again, and check the JobConf, find the
> property you mentioned, they were as following:
> *hive.exec.plan* hdfs://
> *hive.exec.scratchdir* /tmp/hive-cyg_server
> While Hive was running, I browsed the HDFS filesystem. The file specified
> by *hive.exec.plan* was there with permission rwsr-xr-x, but I didn't
> find any file with "HIVE_PLAN" in its name under any subdirectory of
> *hive.exec.scratchdir*. I also set the permissions of *hive.exec.scratchdir* to rwxrwxrwx.
> So is the problem perhaps not in HDFS? According to the Java exception, it is the
> native Java method java.io.FileInputStream.open that cannot access the
> file, and that file is probably in the local filesystem of the tasktracker node.
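> One way to check that theory directly on a tasktracker node might be the
> following sketch (the base path is again only an assumption from the
> Hadoop 1.x default mapred.local.dir):
>
>     # Which user runs the tasktracker process?
>     ps -ef | grep -i tasktracker
>     # Is the plan file there, and with what owner/permissions?
>     find /tmp/hadoop-*/mapred/local -name 'HIVE_PLAN*' -ls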
> Date: Fri, 7 Jun 2013 12:09:24 +0800
> Subject: Re: What is HIVE_PLAN?
> From: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
> It's kept in the JobConf as part of the plan file name.
> Check the link below and find *hive.exec.plan* and *hive.exec.scratchdir*.
> Do you have proper read and write permissions?
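> To read those two properties quickly, a hedged sketch (hive -e runs a set
> statement from the shell; note that hive.exec.plan is assigned per query at
> runtime, so it is normally empty in a fresh session):
>
>     # print the effective scratch directory from the client side
>     hive -e 'set hive.exec.scratchdir;'
>     # for hive.exec.plan, look at the running job's job.xml in the
>     # JobTracker web UI instead, since it is set during query execution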
> 2013/6/7 Li jianwei <[EMAIL PROTECTED]>
> Hi, everyone:
> I have set up a Hadoop cluster on THREE Windows 7 machines with Cygwin, and
> made several tests, which all passed, with hadoop-test-1.1.2.jar and
> Then I tried to run Hive 0.10.0 on my cluster (also in Cygwin). I could
> create tables, show them, load data into them, and "select *" from them. But
> when I tried "select count(*)" from my table, I got the following
> exception. *My question is: what is that HIVE_PLANxxxxxx file? How is it
> created? Where is it placed?*
> Would anyone give me some information?
> Total MapReduce jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks determined at compile time: 1
> In order to change the average load for a reducer (in bytes):
> set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
> set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
> set mapred.reduce.tasks=<number>
> Starting Job = job_201306070901_0001, Tracking URL = http://hdfs-namenode:50030/jobdetails.jsp?jobid=job_201306070901_0001
> Kill Command = C:\hadoop-1.1.2\bin\hadoop.cmd job -kill
> Hadoop job information for Stage-1: number of mappers: 13; number of
> reducers: 1
> 2013-06-07 09:02:19,296 Stage-1 map = 0%, reduce = 0%
> 2013-06-07 09:02:51,745 Stage-1 map = 100%, reduce = 100%
> Ended Job = job_201306070901_0001 with errors
> Error during job, obtaining debugging information...
> Job Tracking URL:
> Examining task ID: task_201306070901_0001_m_000014 (and more) from job
> Task with the most failures(4):
> Task ID:
> Diagnostic Messages for this Task:
> java.lang.RuntimeException: java.io.FileNotFoundException: *
> HIVE_PLANc632c8e2-257d-4cd4-b833-a09c7d249b2c* (Access is denied)