Accumulo >> mail # user >> Keep Tables on Shutdown


Re: Keep Tables on Shutdown
Jonathan,

When you reformat the namenode, you also need to wipe out the datanode
storage directories. Otherwise the datanodes will refuse to start because
their stored namespace ID no longer matches the freshly formatted
namenode's. You can find that message in the HDFS datanode logs.
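If you do end up reformatting, a typical recovery sequence is sketched below. The datanode storage path is an assumption for illustration; use the actual dfs.data.dir value from your hdfs-site.xml.

```shell
# Stop HDFS before touching any storage directories.
$HADOOP_HOME/bin/stop-dfs.sh

# Wipe the datanode storage dirs so their stored namespace ID
# no longer conflicts with the freshly formatted namenode.
# /var/hadoop/dfs/data is a placeholder -- check dfs.data.dir
# in your hdfs-site.xml for the real path(s).
rm -rf /var/hadoop/dfs/data/*

# Reformat the namenode and bring HDFS back up.
$HADOOP_HOME/bin/hadoop namenode -format
$HADOOP_HOME/bin/start-dfs.sh
```

Do this on every datanode, not just the master; any datanode with a stale namespace ID will stay offline.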

Generally, it's a good idea to run checks on Zookeeper and HDFS independently
of Accumulo, just to make sure the dependencies are up and running properly.
$ZOOKEEPER_HOME/bin/zkCli.sh gives you command-line access to Zookeeper so
you can exercise it directly (e.g. connect to it, write a node, read it back,
delete it), and "hadoop fs" provides the same for HDFS. The namenode's
monitor page (http://MASTER_NODE:50070) can also give you some assurance
that HDFS is working properly.
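Those checks can be scripted as a quick smoke test. This is a sketch, assuming Zookeeper on localhost at the default port 2181; the node name /smoketest and file path /tmp/smoketest are made up for the test:

```shell
# Zookeeper round trip: create, read, and delete a test node.
$ZOOKEEPER_HOME/bin/zkCli.sh -server localhost:2181 <<'EOF'
create /smoketest hello
get /smoketest
delete /smoketest
quit
EOF

# HDFS round trip: write a small file, read it back, clean up.
echo hello | hadoop fs -put - /tmp/smoketest
hadoop fs -cat /tmp/smoketest
hadoop fs -rm /tmp/smoketest

# Report live/dead datanodes. "Datanodes available: 0" here is
# exactly the condition behind "could only be replicated to 0
# nodes" during accumulo init.
hadoop dfsadmin -report
```

If the HDFS round trip fails with a replication error, fix HDFS first; Accumulo's init cannot succeed without at least one live datanode.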

Cheers,
Adam

On Fri, Jul 27, 2012 at 11:38 AM, John Vines <[EMAIL PROTECTED]> wrote:

> Your HDFS isn't online. Specifically, you have no running datanodes. Check
> the logs to figure out why they're not coming online and remedy that before
> initializing Accumulo.
>
> Sent from my phone, so pardon the typos and brevity.
> On Jul 27, 2012 11:32 AM, "Jonathan Hsu" <[EMAIL PROTECTED]> wrote:
>
>> I tried again, reformatting the namenode first, and I got this error
>> while trying to start Accumulo:
>>
>>
>> 27 11:29:08,295 [util.NativeCodeLoader] WARN : Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
>> 27 11:29:08,352 [hdfs.DFSClient] WARN : DataStreamer Exception:
>> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /accumulo/tables/!0/root_tablet/00000_00000.rf could only be replicated to 0 nodes, instead of 1
>>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
>>     at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>     at java.lang.reflect.Method.invoke(Method.java:597)
>>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>>     at java.security.AccessController.doPrivileged(Native Method)
>>     at javax.security.auth.Subject.doAs(Subject.java:396)
>>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>>
>>     at org.apache.hadoop.ipc.Client.call(Client.java:740)
>>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>>     at $Proxy0.addBlock(Unknown Source)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>     at java.lang.reflect.Method.invoke(Method.java:597)
>>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>     at $Proxy0.addBlock(Unknown Source)
>>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2937)
>>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2819)
>>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
>>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
>>
>> 27 11:29:08,352 [hdfs.DFSClient] WARN : Error Recovery for block null bad datanode[0] nodes == null
>> 27 11:29:08,352 [hdfs.DFSClient] WARN : Could not get block locations. Source file "/accumulo/tables/!0/root_tablet/00000_00000.rf" - Aborting...
>> 27 11:29:08,353 [util.Initialize] FATAL: Failed to initialize filesystem
>> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /accumulo/tables/!0/root_tablet/00000_00000.rf could only be replicated to