MapReduce, mail # user - Re: Can not follow Single Node Setup example.


Mohammad Tariq 2013-06-27, 17:40
No. This means that you are trying to copy an entire directory instead of a
file. Do this:
bin/hadoop fs -put conf/ /input/
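
If the grep job then complains that something under input is a directory, a
minimal alternative (a sketch, assuming the tutorial's layout where input/
on HDFS should hold only the *.xml config files) is to drop the copied
directory and put the files in flat:

bin/hadoop fs -rmr input/conf
bin/hadoop fs -put conf/*.xml input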

Warm Regards,
Tariq
cloudfront.blogspot.com
On Thu, Jun 27, 2013 at 10:37 PM, Peng Yu <[EMAIL PROTECTED]> wrote:

> Hi,
>
> ~/Downloads/hadoop-install/hadoop$ rm -rf ~/input/conf/
> ~/Downloads/hadoop-install/hadoop$ bin/hadoop fs -put conf input
> put: Target input/conf is a directory
>
> I get the above output. Is it the correct output? Thanks.
>
> On Wed, Jun 26, 2013 at 10:51 AM, Shahab Yunus <[EMAIL PROTECTED]>
> wrote:
> > It is looking for a file within your login folder
> > /user/py/input/conf
> >
> > You are running your job from hadoop/bin, and I think the hadoop job is
> > looking for files in the current folder.
> >
> > Regards,
> > Shahab
> >
> >
> > On Wed, Jun 26, 2013 at 11:02 AM, Peng Yu <[EMAIL PROTECTED]> wrote:
> >>
> >> Hi,
> >>
> >> Here are what I have.
> >>
> >> ~/Downloads/hadoop-install/hadoop$ ls
> >> CHANGES.txt  README.txt  c++      hadoop-ant-1.1.2.jar     hadoop-examples-1.1.2.jar     hadoop-tools-1.1.2.jar  ivy.xml  logs  src
> >> LICENSE.txt  bin         conf     hadoop-client-1.1.2.jar  hadoop-minicluster-1.1.2.jar  input                   lib      sbin  webapps
> >> NOTICE.txt   build.xml   contrib  hadoop-core-1.1.2.jar    hadoop-test-1.1.2.jar         ivy                     libexec  share
> >> ~/Downloads/hadoop-install/hadoop$ ls input/
> >> capacity-scheduler.xml  core-site.xml  fair-scheduler.xml
> >> hadoop-policy.xml  hdfs-site.xml  mapred-queue-acls.xml
> >> mapred-site.xml
> >>
> >> On Wed, Jun 26, 2013 at 10:00 AM, Shahab Yunus <[EMAIL PROTECTED]>
> >> wrote:
> >> > Basically whether this step worked or not:
> >> >
> >> > $ cp conf/*.xml input
> >> >
> >> > Regards,
> >> > Shahab
> >> >
> >> >
> >> > On Wed, Jun 26, 2013 at 10:58 AM, Shahab Yunus <[EMAIL PROTECTED]> wrote:
> >> >>
> >> >> Have you verified that the 'input' folder exists on the hdfs (single
> >> >> node setup) that your job needs?
> >> >>
> >> >> Regards,
> >> >> Shahab
> >> >>
> >> >>
> >> >> On Wed, Jun 26, 2013 at 10:53 AM, Peng Yu <[EMAIL PROTECTED]> wrote:
> >> >>>
> >> >>> Hi,
> >> >>>
> >> >>> http://hadoop.apache.org/docs/r1.1.2/single_node_setup.html
> >> >>>
> >> >>> I followed the above instructions. But I get the following errors.
> >> >>> Does anybody know what is wrong? Thanks.
> >> >>>
> >> >>> ~/Downloads/hadoop-install/hadoop$ bin/hadoop jar
> >> >>> hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
> >> >>> Warning: $HADOOP_HOME is deprecated.
> >> >>>
> >> >>> 13/06/26 09:49:14 WARN util.NativeCodeLoader: Unable to load
> >> >>> native-hadoop library for your platform... using builtin-java
> >> >>> classes where applicable
> >> >>> 13/06/26 09:49:14 WARN snappy.LoadSnappy: Snappy native library not
> >> >>> loaded
> >> >>> 13/06/26 09:49:14 INFO mapred.FileInputFormat: Total input paths to
> >> >>> process : 2
> >> >>> 13/06/26 09:49:14 INFO mapred.JobClient: Cleaning up the staging area
> >> >>> hdfs://localhost:9000/opt/local/var/hadoop/cache/mapred/staging/py/.staging/job_201306260838_0001
> >> >>> 13/06/26 09:49:14 ERROR security.UserGroupInformation:
> >> >>> PriviledgedActionException as:py cause:java.io.IOException: Not a
> >> >>> file: hdfs://localhost:9000/user/py/input/conf
> >> >>> java.io.IOException: Not a file:
> >> >>> hdfs://localhost:9000/user/py/input/conf
> >> >>>         at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:215)
> >> >>>         at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1051)
> >> >>>         at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1043)
> >> >>>         at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
> >> >>>         at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:959)
> >> >>>         at
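
The "Not a file" error above comes from Hadoop 1.x's mapred.FileInputFormat,
which does not recurse into subdirectories: putting the conf directory into
input left input/conf as a directory among the input paths. A minimal sketch
of the recovery, assuming the relative input/output paths used in the thread
and HDFS running at localhost:9000:

bin/hadoop fs -ls input              # should now list only the *.xml files
bin/hadoop fs -rmr output            # the example refuses to run if output already exists
bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+'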