Re: Can not follow Single Node Setup example.
Shahab Yunus 2013-06-26, 15:51
It is looking for a file within your login folder on HDFS:
/user/*py*/input/conf

You are running your job from
hadoop/bin
and I think the hadoop job is looking for files in the current folder.
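
For example, a rough sketch to check what the job actually sees on HDFS
(the paths come from your error message, so adjust them for your setup):

$ bin/hadoop fs -ls /user/py/input              # list the job's input directory
$ bin/hadoop fs -rmr /user/py/input/conf        # 'conf' is a directory, and the old
                                                # FileInputFormat rejects directories
$ bin/hadoop fs -put conf/*.xml /user/py/input  # stage only the xml files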

Regards,
Shahab
On Wed, Jun 26, 2013 at 11:02 AM, Peng Yu <[EMAIL PROTECTED]> wrote:

> Hi,
>
> Here are what I have.
>
> ~/Downloads/hadoop-install/hadoop$ ls
> CHANGES.txt  README.txt  c++      hadoop-ant-1.1.2.jar
> hadoop-examples-1.1.2.jar     hadoop-tools-1.1.2.jar  ivy.xml  logs
> src
> LICENSE.txt  bin         conf     hadoop-client-1.1.2.jar
> hadoop-minicluster-1.1.2.jar  input                   lib      sbin
> webapps
> NOTICE.txt   build.xml   contrib  hadoop-core-1.1.2.jar
> hadoop-test-1.1.2.jar         ivy                     libexec  share
> ~/Downloads/hadoop-install/hadoop$ ls input/
> capacity-scheduler.xml  core-site.xml  fair-scheduler.xml
> hadoop-policy.xml  hdfs-site.xml  mapred-queue-acls.xml
> mapred-site.xml
>
> On Wed, Jun 26, 2013 at 10:00 AM, Shahab Yunus <[EMAIL PROTECTED]>
> wrote:
> > Basically whether this step worked or not:
> >
> > $ cp conf/*.xml input
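> >
> > (A quick check, for instance: $ ls input/*.xml should list the copied files.)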
> >
> > Regards,
> > Shahab
> >
> >
> > On Wed, Jun 26, 2013 at 10:58 AM, Shahab Yunus <[EMAIL PROTECTED]>
> > wrote:
> >>
> >> Have you verified that the 'input' folder exists on the HDFS (single
> >> node setup) that your job needs?
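> >>
> >> For instance, something like
> >>
> >> $ bin/hadoop fs -ls input
> >>
> >> should show it (a relative path like 'input' resolves against your HDFS
> >> home directory, /user/<username>).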
> >>
> >> Regards,
> >> Shahab
> >>
> >>
> >> On Wed, Jun 26, 2013 at 10:53 AM, Peng Yu <[EMAIL PROTECTED]> wrote:
> >>>
> >>> Hi,
> >>>
> >>> http://hadoop.apache.org/docs/r1.1.2/single_node_setup.html
> >>>
> >>> I followed the above instructions. But I get the following errors.
> >>> Does anybody know what is wrong? Thanks.
> >>>
> >>> ~/Downloads/hadoop-install/hadoop$ bin/hadoop jar
> >>> hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
> >>> Warning: $HADOOP_HOME is deprecated.
> >>>
> >>> 13/06/26 09:49:14 WARN util.NativeCodeLoader: Unable to load
> >>> native-hadoop library for your platform... using builtin-java classes
> >>> where applicable
> >>> 13/06/26 09:49:14 WARN snappy.LoadSnappy: Snappy native library not
> >>> loaded
> >>> 13/06/26 09:49:14 INFO mapred.FileInputFormat: Total input paths to
> >>> process : 2
> >>> 13/06/26 09:49:14 INFO mapred.JobClient: Cleaning up the staging area
> >>> hdfs://localhost:9000/opt/local/var/hadoop/cache/mapred/staging/py/.staging/job_201306260838_0001
> >>> 13/06/26 09:49:14 ERROR security.UserGroupInformation:
> >>> PriviledgedActionException as:py cause:java.io.IOException: Not a
> >>> file: hdfs://localhost:9000/user/py/input/conf
> >>> java.io.IOException: Not a file: hdfs://localhost:9000/user/py/input/conf
> >>>         at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:215)
> >>>         at
> >>> org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1051)
> >>>         at
> >>> org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1043)
> >>>         at
> >>> org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
> >>>         at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:959)
> >>>         at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:912)
> >>>         at java.security.AccessController.doPrivileged(Native Method)
> >>>         at javax.security.auth.Subject.doAs(Subject.java:396)
> >>>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
> >>>         at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:912)
> >>>         at
> >>> org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:886)
> >>>         at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1323)
> >>>         at org.apache.hadoop.examples.Grep.run(Grep.java:69)
> >>>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> >>>         at org.apache.hadoop.examples.Grep.main(Grep.java:93)
> >>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>>         at