Re: Can not follow Single Node Setup example.
Not an issue. There are two modes covered by "single node
setup": standalone (runs on your local FS) and pseudo-distributed (runs on
HDFS). You are probably working with a standalone setup. If you need some
help with the pseudo-distributed setup, you might find this link helpful:
http://cloudfront.blogspot.in/2012/07/how-to-configure-hadoop.html#.UcyBE0AW38s

I have tried to explain the procedure there.
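
(For quick reference, a minimal pseudo-distributed configuration, sketched
from the Hadoop 1.x single-node setup docs; commands assume you run from the
Hadoop install directory:)

    $ cat conf/core-site.xml       # point the default FS at a local HDFS
    <configuration>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
      </property>
    </configuration>

    $ cat conf/hdfs-site.xml       # single node, so keep one replica per block
    <configuration>
      <property>
        <name>dfs.replication</name>
        <value>1</value>
      </property>
    </configuration>

    $ bin/hadoop namenode -format  # one-time format of the new filesystem
    $ bin/start-all.sh             # starts NameNode, DataNode, JobTracker, TaskTracker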

Warm Regards,
Tariq
cloudfront.blogspot.com
On Thu, Jun 27, 2013 at 11:41 PM, Peng Yu <[EMAIL PROTECTED]> wrote:

> I just started learning hadoop, and I followed
> http://hadoop.apache.org/docs/r1.1.2/single_node_setup.html. Is
> DataNode mentioned in this document? Do you have a list of working,
> step-by-step instructions so that I can run hadoop without anything
> previously installed? Thanks.
>
> On Thu, Jun 27, 2013 at 1:00 PM, Mohammad Tariq <[EMAIL PROTECTED]>
> wrote:
> > Is your DataNode running?
> >
> > Warm Regards,
> > Tariq
> > cloudfront.blogspot.com
> >
> >
> > On Thu, Jun 27, 2013 at 11:24 PM, Peng Yu <[EMAIL PROTECTED]> wrote:
> >>
> >> Hi,
> >>
> >> Here is what I got. Is there anything wrong?
> >>
> >> ~/Downloads/hadoop-install/hadoop$ bin/hadoop fs -put conf/  /input/
> >> 13/06/27 12:53:39 WARN hdfs.DFSClient: DataStreamer Exception:
> >> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /input/conf/capacity-scheduler.xml could only be replicated to 0 nodes, instead of 1
> >>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
> >>         at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
> >>         at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
> >>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >>         at java.lang.reflect.Method.invoke(Method.java:597)
> >>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
> >>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
> >>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
> >>         at java.security.AccessController.doPrivileged(Native Method)
> >>         at javax.security.auth.Subject.doAs(Subject.java:396)
> >>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
> >>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
> >>
> >>         at org.apache.hadoop.ipc.Client.call(Client.java:1107)
> >>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
> >>         at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
> >>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >>         at java.lang.reflect.Method.invoke(Method.java:597)
> >>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
> >>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
> >>         at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
> >>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3686)
> >>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3546)
> >>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2749)
> >>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2989)
> >>
> >> 13/06/27 12:53:39 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
> >> 13/06/27 12:53:39 WARN hdfs.DFSClient: Could not get block locations. Source file "/input/conf/capacity-scheduler.xml" - Aborting...
> >> put: java.io.IOException: File /input/conf/capacity-scheduler.xml
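
The "could only be replicated to 0 nodes, instead of 1" error above is the
classic sign that no DataNode has registered with the NameNode. A quick way
to check on a Hadoop 1.x setup (a sketch; the log file name follows the
standard hadoop-<user>-datanode-<host>.log pattern):

    $ jps                                  # should list NameNode, DataNode, SecondaryNameNode
    $ bin/hadoop dfsadmin -report          # "Datanodes available" should be at least 1
    $ tail -n 50 logs/hadoop-*-datanode-*.log   # if DataNode is absent, its log says why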