HDFS user mailing list: Re: Hadoop 2.2.0 from source configuration


Daniel Savard 2013-12-02, 16:44
Daniel Savard 2013-12-02, 17:10
Re: Hadoop 2.2.0 from source configuration
Hi Daniel,

I agree with you that the 2.2 documentation is very unfriendly.
For many of the documents, the only change from 1.x to 2.2 is the format;
many still need to be converted (e.g. Hadoop Streaming).
Furthermore, there are a lot of dead links in the documents.

I've been trying to fix dead links, convert 1.x documents, and update
deprecated instructions.
   https://issues.apache.org/jira/browse/HADOOP-9982
   https://issues.apache.org/jira/browse/MAPREDUCE-5636

I'll file a JIRA and try to update the Single Node Setup document.
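For reference, a minimal pseudo-distributed configuration for 2.2.0 might look like the sketch below. The hostname, port, and paths are examples only, not required values, and the config directory is taken from HADOOP_CONF_DIR if set:

```shell
# Sketch of a minimal single-node (pseudo-distributed) HDFS config for Hadoop 2.2.0.
# Assumptions: HADOOP_CONF_DIR points at the conf dir (defaults to ./etc/hadoop here);
# localhost:9000 is just an example address.
CONF=${HADOOP_CONF_DIR:-./etc/hadoop}
mkdir -p "$CONF"

cat > "$CONF/core-site.xml" <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <!-- 2.x name; the 1.x name was fs.default.name -->
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
EOF

cat > "$CONF/hdfs-site.xml" <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <!-- single node: keep only one copy of each block -->
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
EOF

# Then, once (to format the namespace) and afterwards on each boot:
#   bin/hdfs namenode -format
#   sbin/start-dfs.sh
echo "wrote $CONF/core-site.xml and $CONF/hdfs-site.xml"
```

After start-dfs.sh, `jps` should list both a NameNode and a DataNode process before any `hdfs dfs -put` is attempted.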

Thanks,
Akira

(2013/12/03 1:44), Daniel Savard wrote:
> André,
>
> good for you that the terse instructions on the reference page were enough
> to set up your cluster. However, read them again and see how many
> assumptions they make about what you are supposed to already know,
> assumptions that go entirely unstated.
>
> I did try the single node setup; instruction-wise it is worse than the
> cluster setup. As far as I understand the instructions, you are supposed
> to already have a near-working system: it is assumed that HDFS is already
> set up and working properly. Try to find the instructions to set up HDFS
> for version 2.2.0 and you will end up with a lot of inapplicable
> instructions for previous versions (some properties were renamed).
>
> It may seem harsh to call this toxic, but it is. The first place a
> newcomer will go is the single node setup. That will be his starting
> point, and he will be left with a pile of unstated assumptions and no clue.
>
> To come back to my actual problem at this point:
>
> 13/12/02 11:34:07 WARN hdfs.DFSClient: DataStreamer Exception
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /test._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and no node(s) are excluded in this operation.
>      at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2477)
>      at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
>      at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
>      at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
>      at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
>      at java.security.AccessController.doPrivileged(Native Method)
>      at javax.security.auth.Subject.doAs(Subject.java:415)
>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)
>
>      at org.apache.hadoop.ipc.Client.call(Client.java:1347)
>      at org.apache.hadoop.ipc.Client.call(Client.java:1300)
>      at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>      at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>      at java.lang.reflect.Method.invoke(Method.java:606)
>      at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
>      at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>      at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
>      at
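This particular message ("could only be replicated to 0 nodes instead of minReplication (=1)" with one datanode running) usually means the NameNode rejected the lone datanode as a write target. Commonly cited causes are the datanode reporting no usable disk space and a clusterID mismatch after re-running the namenode format. A few commands that may help narrow it down; the paths below are assumptions and depend on the actual dfs.namenode.name.dir / dfs.datanode.data.dir settings:

```shell
# 1. Does the datanode report any usable capacity?
bin/hdfs dfsadmin -report        # check "DFS Remaining" for the datanode
df -h /var/hadoop/data           # the directory from dfs.datanode.data.dir

# 2. clusterID mismatch after a repeated "hdfs namenode -format":
#    the clusterID in the two VERSION files should be identical.
cat /var/hadoop/name/current/VERSION   # under dfs.namenode.name.dir
cat /var/hadoop/data/current/VERSION   # under dfs.datanode.data.dir

# 3. Is the datanode actually talking to the namenode?
tail -n 50 logs/hadoop-*-datanode-*.log
```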
Adam Kawa 2013-12-03, 18:55
Daniel Savard 2013-12-04, 02:10
Daniel Savard 2013-12-04, 03:02