Re: Hadoop 2.2.0 from source configuration
Daniel,

 Apologies if you had a bad experience. If you can point the problems out to us, we'd be more than happy to fix them - alternately, we'd *love* it if you could help us improve the docs too.

 Now, for the problem at hand: http://wiki.apache.org/hadoop/CouldOnlyBeReplicatedTo is one place to look. Basically, the NameNode (NN) cannot find any datanodes to place the block on. Anything in your NN logs to indicate trouble?
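
 For example (assuming a standard install with HADOOP_HOME set and the default log directory - adjust paths to your layout), something like this should tell you whether any datanode actually registered:

    # ask the NN how many live datanodes it knows about
    hdfs dfsadmin -report

    # confirm the NN and DN daemons are actually running
    jps

    # look for registration errors on both sides
    grep -i exception $HADOOP_HOME/logs/hadoop-*-namenode-*.log
    grep -i exception $HADOOP_HOME/logs/hadoop-*-datanode-*.log

 If the report shows 0 live datanodes, the DN never registered with the NN and its log should say why (wrong fs.defaultFS, a firewall, an incompatible clusterID after reformatting the NN, etc.).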

 Also, please feel free to open JIRAs for any issues you find and we'll help.

thanks,
Arun

On Dec 2, 2013, at 8:44 AM, Daniel Savard <[EMAIL PROTECTED]> wrote:

> André,
>
> Good for you that the terse instructions on the reference page were enough to set up your cluster. However, read them again and see how many assumptions they make about what you are supposed to already know - things that go unsaid.
>
> I did try the single-node setup; it is worse than the cluster setup as far as the instructions go. As far as I understand them, you are supposed to already have a near-working system: it is assumed that HDFS is already set up and working properly. Try to find instructions for setting up HDFS on version 2.2.0 and you will end up with a lot of inappropriate instructions for previous versions (some properties were renamed).
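> 
> To give one concrete example of the renaming (this is from my own digging, not from the docs, so the list is surely partial):
> 
>     # the old 1.x keys are deprecated in 2.2.0 in favor of new names;
>     # hdfs getconf prints the value the daemons will actually use
>     hdfs getconf -confKey fs.defaultFS             # replaces fs.default.name
>     hdfs getconf -confKey dfs.namenode.name.dir    # replaces dfs.name.dir
>     hdfs getconf -confKey dfs.datanode.data.dir    # replaces dfs.data.dir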
>
> It may seem harsh to say this is toxic, but it is. The first place a newcomer will go is the single-node setup. That will be his starting point, and he will be left with a bunch of unstated assumptions and no clue.
>
> To go back to my actual problem at this point:
>
> 13/12/02 11:34:07 WARN hdfs.DFSClient: DataStreamer Exception
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /test._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and no node(s) are excluded in this operation.
>     at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2477)
>     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
>     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)
>
>     at org.apache.hadoop.ipc.Client.call(Client.java:1347)
>     at org.apache.hadoop.ipc.Client.call(Client.java:1300)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>     at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>     at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:330)

Arun C. Murthy
Hortonworks Inc.
http://hortonworks.com/
