Accumulo dev mailing list: File X could only be replicated to 0 nodes instead of 1


Thread:
  David Medinets   2013-05-10, 17:53
  John Vines       2013-05-10, 17:59
  Eric Newton      2013-05-11, 15:12
  David Medinets   2013-05-12, 03:08
  John Vines       2013-05-12, 03:54
  David Medinets   2013-05-12, 10:49
  Josh Elser       2013-05-12, 14:29
Re: File X could only be replicated to 0 nodes instead of 1
That shouldn't do it since init will idle until hdfs is out of safe mode.

Sent from my phone, please pardon the typos and brevity.
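
The point above is that 'accumulo init' idles while HDFS is in safe mode, so a sleep is redundant. A quick way to check or wait for safe mode explicitly (a sketch; assumes the Hadoop 1.x-era `hadoop` CLI is on the PATH, where `hadoop dfsadmin -safemode get` prints "Safe mode is ON" or "Safe mode is OFF" and `-safemode wait` blocks until it lifts):

```shell
#!/bin/sh
# Sketch: check HDFS safe mode before running 'accumulo init'.
#
#   hadoop dfsadmin -safemode get    # prints "Safe mode is ON" / "Safe mode is OFF"
#   hadoop dfsadmin -safemode wait   # blocks until safe mode is OFF
#
# Small helper to interpret the 'get' output inside an install script;
# returns 0 (success) only when the output reports safe mode OFF.
safemode_off() {
  case "$1" in
    *"Safe mode is OFF"*) return 0 ;;
    *) return 1 ;;
  esac
}

# Hypothetical usage:
#   if safemode_off "$(hadoop dfsadmin -safemode get)"; then accumulo init; fi
```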
On May 12, 2013 10:30 AM, "Josh Elser" <[EMAIL PROTECTED]> wrote:

> Looking at the last commit you made, I would've guessed it was the
> 'sleep 60' that you added after starting Hadoop.
>
> But that's just an outsider's glance :)
>
> On Sunday, May 12, 2013, David Medinets wrote:
>
> > I think ... and I am not sure about this at all .. that one of the
> > accumulo v1.6.0 processes was still running while I was deleting
> > directories and re-installing software. I changed how my installation
> > process stops processes - it now checks the output from 'jps' instead
> > of relying on a pid file. Since my install process wipes out related
> > directories, getting the processes to close cleanly is not important.
> > So I simply 'kill -9' them.
> >
> > On Sat, May 11, 2013 at 11:54 PM, John Vines <[EMAIL PROTECTED]> wrote:
> >
> > > Do you mind explicitly pointing out what was wrong and how you fixed
> > > it so when people search for this issue they can easily find the
> > > resolution?
> > >
> > > Sent from my phone, please pardon the typos and brevity.
> > > On May 11, 2013 11:08 PM, "David Medinets" <[EMAIL PROTECTED]> wrote:
> > >
> > > > Resolution: I had some part of the installation out of order. A
> > > > working installation script is in the v1.4.3 directory at
> > > > https://github.com/medined/accumulo-at-home/tree/master/1.4.3
> > > >
> > > > On Sat, May 11, 2013 at 11:12 AM, Eric Newton <[EMAIL PROTECTED]> wrote:
> > > >
> > > > > Check your datanode logs... it's probably not running.
> > > > >
> > > > > -Eric
> > > > >
> > > > > On Fri, May 10, 2013 at 1:53 PM, David Medinets <[EMAIL PROTECTED]> wrote:
> > > > >
> > > > > > I tried an install of 1.4.3 and am seeing the following message
> > > > > > when I run 'accumulo init' without any logs being generated.
> > > > > > Both hadoop and zookeeper seem to be running OK. Any ideas
> > > > > > where I should look to resolve this?
> > > > > >
> > > > > > 2013-05-10 13:43:54,894 [hdfs.DFSClient] WARN : DataStreamer Exception:
> > > > > > org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> > > > > > /user/accumulo/accumulo/tables/!0/root_tablet/00000_00000.rf
> > > > > > could only be replicated to 0 nodes, instead of 1
> > > > > >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
> > > > > >     at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
> > > > > >     at sun.reflect.GeneratedMethodAccessor34.invoke(Unknown Source)
> > > > > >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > > > >     at java.lang.reflect.Method.invoke(Method.java:616)
> > > > > >     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
> > > > > >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
> > > > > >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
> > > > > >     at java.security.AccessController.doPrivileged(Native Method)
> > > > > >     at javax.security.auth.Subject.doAs(Subject.java:416)
> > > > > >     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
> > > > > >     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
> > > > > >
> > > > > >     at org.apache.hadoop.ipc.Client.call(Client.java:1070)
> > > > > >     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
> > > > > >     at sun.proxy.$Proxy1.addBlock(Unknown Source)
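
The checks discussed in the thread — Eric's "look at jps and the datanode logs" advice and David's jps-based process cleanup — can be sketched as shell along these lines. The log path is an assumption about a typical Hadoop 1.x layout, and `pids_of` is a hypothetical helper, not anything from the original scripts:

```shell
#!/bin/sh
# Sketch of the diagnosis for "replicated to 0 nodes, instead of 1":
#
#   jps                                        # is a DataNode process listed?
#   tail -50 "$HADOOP_HOME"/logs/*datanode*.log  # why did it die? (assumed path)
#   hadoop dfsadmin -report                    # "Datanodes available: 0" => none up
#
# Helper in the spirit of the jps-based cleanup described above: extract the
# PIDs of a named Java process from 'jps' output so leftovers can be kill -9'd.
pids_of() {  # $1 = jps output, $2 = process name (e.g. DataNode)
  printf '%s\n' "$1" | awk -v name="$2" '$2 == name {print $1}'
}

# Hypothetical usage:
#   pids=$(pids_of "$(jps)" DataNode)
#   [ -n "$pids" ] && kill -9 $pids
```

Since a fresh re-install wipes the data directories anyway, a hard kill like this is acceptable where a graceful shutdown would normally be preferred.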