Re: Can not follow Single Node Setup example.
Shahab Yunus 2013-06-26, 15:00
Basically, check whether this step worked or not (one quick check is sketched below):

$ cp conf/*.xml input

Regards,
Shahab
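
One quick way to verify, depending on which part of the tutorial this is (the checks below assume the default layout from the linked setup guide): for the local standalone run, ls input should list the copied *.xml files; for the pseudo-distributed run, the equivalent check against HDFS would be something like:

$ bin/hadoop fs -ls input

The listing should show only files (the copied configuration files), not a nested directory, under the job's input path.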
On Wed, Jun 26, 2013 at 10:58 AM, Shahab Yunus <[EMAIL PROTECTED]> wrote:

> Have you verified that the 'input' folder exists on the HDFS (single-node
> setup) that your job needs?
>
> Regards,
> Shahab
>
>
> On Wed, Jun 26, 2013 at 10:53 AM, Peng Yu <[EMAIL PROTECTED]> wrote:
>
>> Hi,
>>
>> http://hadoop.apache.org/docs/r1.1.2/single_node_setup.html
>>
>> I followed the above instructions, but I get the following errors.
>> Does anybody know what is wrong? Thanks.
>>
>> ~/Downloads/hadoop-install/hadoop$ bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
>> Warning: $HADOOP_HOME is deprecated.
>>
>> 13/06/26 09:49:14 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
>> 13/06/26 09:49:14 WARN snappy.LoadSnappy: Snappy native library not loaded
>> 13/06/26 09:49:14 INFO mapred.FileInputFormat: Total input paths to process : 2
>> 13/06/26 09:49:14 INFO mapred.JobClient: Cleaning up the staging area hdfs://localhost:9000/opt/local/var/hadoop/cache/mapred/staging/py/.staging/job_201306260838_0001
>> 13/06/26 09:49:14 ERROR security.UserGroupInformation: PriviledgedActionException as:py cause:java.io.IOException: Not a file: hdfs://localhost:9000/user/py/input/conf
>> java.io.IOException: Not a file: hdfs://localhost:9000/user/py/input/conf
>>         at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:215)
>>         at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1051)
>>         at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1043)
>>         at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
>>         at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:959)
>>         at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:912)
>>         at java.security.AccessController.doPrivileged(Native Method)
>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
>>         at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:912)
>>         at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:886)
>>         at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1323)
>>         at org.apache.hadoop.examples.Grep.run(Grep.java:69)
>>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>>         at org.apache.hadoop.examples.Grep.main(Grep.java:93)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>         at java.lang.reflect.Method.invoke(Method.java:597)
>>         at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
>>         at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
>>         at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>         at java.lang.reflect.Method.invoke(Method.java:597)
>>         at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>>
>> --
>> Regards,
>> Peng
>>
>
>
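
A likely cause, judging from the "Not a file: hdfs://localhost:9000/user/py/input/conf" message: the HDFS input directory contains a nested conf directory (for example because "bin/hadoop fs -put conf input" was run while input already existed), and the old mapred FileInputFormat does not recurse into subdirectories, so it fails on the directory entry. Assuming the pseudo-distributed setup from the linked tutorial, one possible fix is to recreate the input directory so it holds only files, then rerun the job:

$ bin/hadoop fs -rmr input
$ bin/hadoop fs -put conf input
$ bin/hadoop fs -ls input
$ bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+'

The -ls output should now show only files, not another directory, before the grep example is run again.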