Accumulo >> mail # dev >> File X could only be replicated to 0 nodes instead of 1


Re: File X could only be replicated to 0 nodes instead of 1
Resolution: I had some part of the installation out of order. A working
installation script for v1.4.3 is in the v1.4.3 directory at
https://github.com/medined/accumulo-at-home
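For anyone landing on this thread with the same symptom: "replicated to 0 nodes" means the NameNode had no live DataNode to place the block on, so it is worth confirming a DataNode is actually up before (re)running 'accumulo init'. A minimal sketch, assuming the Hadoop 1.x CLI this thread is using; the piped sample line stands in for real `hadoop dfsadmin -report` output:

```shell
# Sketch: count live DataNodes before running 'accumulo init'.
# Assumes Hadoop 1.x, where 'hadoop dfsadmin -report' prints a
# "Datanodes available: N (...)" line; commands/paths are illustrative.

live_nodes() {
  grep 'Datanodes available:' | sed 's/.*available: *\([0-9][0-9]*\).*/\1/'
}

# On a real node:  hadoop dfsadmin -report | live_nodes
# Here a sample report line is piped in instead:
echo "Datanodes available: 0 (0 total, 0 dead)" | live_nodes   # prints 0
```

A 0 here matches the error below; the next stop is the datanode log (e.g. under $HADOOP_HOME/logs/), which is exactly Eric's suggestion.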
On Sat, May 11, 2013 at 11:12 AM, Eric Newton <[EMAIL PROTECTED]> wrote:

> Check your datanode logs... it's probably not running.
>
> -Eric
>
>
> On Fri, May 10, 2013 at 1:53 PM, David Medinets <[EMAIL PROTECTED]
> >wrote:
>
> > I tried an install of 1.4.3 and am seeing the following message when I
> > run 'accumulo init' without any logs being generated. Both hadoop and
> > zookeeper seem to be running OK. Any ideas where I should look to
> > resolve this?
> >
> > 2013-05-10 13:43:54,894 [hdfs.DFSClient] WARN : DataStreamer Exception:
> > org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> > /user/accumulo/accumulo/tables/!0/root_tablet/00000_00000.rf could only
> > be replicated to 0 nodes, instead of 1
> >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
> >     at sun.reflect.GeneratedMethodAccessor34.invoke(Unknown Source)
> >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >     at java.lang.reflect.Method.invoke(Method.java:616)
> >     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
> >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
> >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
> >     at java.security.AccessController.doPrivileged(Native Method)
> >     at javax.security.auth.Subject.doAs(Subject.java:416)
> >     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
> >     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
> >
> >     at org.apache.hadoop.ipc.Client.call(Client.java:1070)
> >     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
> >     at sun.proxy.$Proxy1.addBlock(Unknown Source)
> >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >     at java.lang.reflect.Method.invoke(Method.java:616)
> >     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
> >     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
> >     at sun.proxy.$Proxy1.addBlock(Unknown Source)
> >     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3510)
> >     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3373)
> >     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2589)
> >     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2829)
> >
>
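One hedged note on the trace above: the exception originates in the NameNode's getAdditionalBlock, i.e. block placement failed because no live DataNode was available, which fits Eric's diagnosis. On single-node installs, a frequent reason the DataNode is down is a namespaceID mismatch left behind by re-running 'hadoop namenode -format'. The grep below is a sketch of spotting it; the log line is a fabricated sample in the shape Hadoop 1.x prints:

```shell
# Sketch: detect the "Incompatible namespaceIDs" failure in a datanode log.
# The sample text is made up for illustration; on a real node you would
# grep the actual log, e.g.:
#   grep 'Incompatible namespaceIDs' $HADOOP_HOME/logs/hadoop-*-datanode-*.log
sample_log="ERROR datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /tmp/hadoop/dfs/data"

echo "$sample_log" | grep -c 'Incompatible namespaceIDs'   # prints 1
```

If that line shows up, a common recovery on a throwaway setup is to stop Hadoop, clear the dfs.data.dir contents, reformat the NameNode, and restart. That wipes HDFS, so it is only appropriate for a fresh install like the one described in this thread.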