

Re: Expected file://// error
What's the classpath of the Java program submitting the job? It has to
have the configuration directory (e.g. /opt/hadoop/conf) on it, or
it won't pick up the correct configs.
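
A quick way to check is a small sketch like this (class name and paths
here are just examples), run with the same classpath your script uses:

import org.apache.hadoop.mapred.JobConf;

// Minimal check: print the config values the submitting JVM actually sees.
// If these come back as the built-in defaults (file:/// and local), the
// conf directory is not on the classpath.
public class PrintConf {
  public static void main(String[] args) {
    JobConf conf = new JobConf();
    System.out.println("fs.default.name    = " + conf.get("fs.default.name"));
    System.out.println("mapred.job.tracker = " + conf.get("mapred.job.tracker"));
  }
}

If fs.default.name comes back as file:///, the client falls back to the
local filesystem, which is usually where an "expected: file:///" error
like the one below comes from. With /opt/hadoop/conf on the classpath it
should print the hdfs:// URI of your namenode instead (hdfs://localhost:12123,
judging by the trace below).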

-Joey

On Sun, Jan 8, 2012 at 12:59 PM, Mark question <[EMAIL PROTECTED]> wrote:
> mapred-site.xml:
> <configuration>
>  <property>
>    <name>mapred.job.tracker</name>
>    <value>localhost:10001</value>
>  </property>
>  <property>
>     <name>mapred.child.java.opts</name>
>     <value>-Xmx1024m</value>
>  </property>
>  <property>
>     <name>mapred.tasktracker.map.tasks.maximum</name>
>     <value>10</value>
>  </property>
> </configuration>
>
>
> The command runs a script, which runs a Java program that submits two jobs
> consecutively, waiting for the first job to finish before submitting the
> second (this works on my laptop, but not on the cluster).
>
> On the cluster I get:
>
>> hdfs://localhost:12123/tmp/hadoop-mark/mapred/system/job_201201061404_0003/job.jar, expected: file:///
>>    at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:310)
>>    at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:47)
>>    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:357)
>>    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:245)
>>    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:192)
>>    at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1189)
>>    at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1165)
>>    at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1137)
>>    at org.apache.hadoop.mapred.JobClient.configureCommandLineOptions(JobClient.java:657)
>>    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:761)
>>    at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:730)
>>    at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1249)
>>    at Main.run(Main.java:304)
>>    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>>    at Main.main(Main.java:53)
>>    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>
>
> The first job's output is:
>   folder/_logs ....
>   folder/part-00000
>
> I set "folder" as the input path for the next job; could the error be coming
> from the "_logs ..." entry? But again, it worked on my laptop under
> hadoop-0.21.0. The cluster has hadoop-0.20.2.
>
> Thanks,
> Mark
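
On the _logs question above: when the first job's output folder is reused
directly as the next job's input, the _logs directory comes along with
part-00000. A path filter along these lines (a minimal sketch using the old
mapred API; the class name is illustrative) keeps it out of the second
job's input:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;

// Skip _logs (and any other underscore/dot entries) when the first job's
// output folder is used as the second job's input.
public class SkipLogsFilter implements PathFilter {
  public boolean accept(Path p) {
    String name = p.getName();
    return !name.startsWith("_") && !name.startsWith(".");
  }
}

// In the driver, on the second job's JobConf (call it job2) before submitting:
//   org.apache.hadoop.mapred.FileInputFormat.setInputPathFilter(job2, SkipLogsFilter.class);

That said, the stack trace above fails while copying job.jar to the job
tracker's system directory, before the input paths are even read, so the
missing configs look like the more likely culprit than _logs.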

--
Joseph Echeverria
Cloudera, Inc.
443.305.9434