Accumulo >> mail # user >> Waiting for accumulo to be initialized


Re: Waiting for accumulo to be initialized
Just remove the directories configured for dfs.name.dir and dfs.data.dir
and run `hadoop namenode -format` again.
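The fix above can be sketched as a small shell helper. This is a hedged sketch, not the poster's exact procedure: the paths are examples taken from this thread, the real values of dfs.name.dir and dfs.data.dir come from your hdfs-site.xml, and HDFS should be stopped (stop-all.sh) before wiping anything.

```shell
# wipe_and_reformat: remove the configured dfs.name.dir and dfs.data.dir,
# then reformat the namenode. Wiping BOTH sides is what clears the
# "Incompatible namespaceIDs" mismatch. Helper name and paths are
# illustrative, not from the thread.
wipe_and_reformat() {
    name_dir="$1"    # value of dfs.name.dir on the namenode
    data_dir="$2"    # value of dfs.data.dir on each datanode
    rm -rf "$name_dir" "$data_dir"
    # Reformat only where the hadoop CLI is actually installed.
    if command -v hadoop >/dev/null 2>&1; then
        hadoop namenode -format
    fi
}

# Example (data path as reported in this thread; name path assumed):
# wipe_and_reformat /opt/hadoop-data/hadoop/hdfs/name /opt/hadoop-data/hadoop/hdfs/data
```

Note this must be run on every datanode as well as the namenode, or the stale namespaceID will simply reappear.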

On 3/27/13 11:31 AM, Aji Janis wrote:
> well... I found this in the datanode log
>
>  ERROR org.apache.hadoop.hdfs.server.datanode.DataNode:
> java.io.IOException: Incompatible namespaceIDs in
> /opt/hadoop-data/hadoop/hdfs/data: namenode namespaceID = 2089335599;
> datanode namespaceID = 1868050007
>
>
>
>
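As an aside, the namespaceID mismatch quoted above can also be repaired without wiping the data directory: copy the namenode's namespaceID into the datanode's VERSION file. This is a hedged sketch based on the standard Hadoop 1.x on-disk layout (VERSION lives under ${dfs.data.dir}/current/); the helper name is illustrative.

```shell
# sync_namespace_id: make a datanode's VERSION file carry the namenode's
# namespaceID so the two agree again (an alternative to reformatting).
# Stop the datanode before editing, then restart it afterwards.
sync_namespace_id() {
    version_file="$1"   # e.g. /opt/hadoop-data/hadoop/hdfs/data/current/VERSION
    namenode_id="$2"    # the namespaceID the namenode reports
    sed -i "s/^namespaceID=.*/namespaceID=${namenode_id}/" "$version_file"
}

# Example, using the two IDs from the log message above:
# sync_namespace_id /opt/hadoop-data/hadoop/hdfs/data/current/VERSION 2089335599
```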
> On Wed, Mar 27, 2013 at 11:23 AM, Eric Newton <[EMAIL PROTECTED]
> <mailto:[EMAIL PROTECTED]>> wrote:
>
>     "0 live nodes": that will continue to be a problem.
>
>     Check the datanode logs.
>
>     -Eric
>
>
>     On Wed, Mar 27, 2013 at 11:20 AM, Aji Janis <[EMAIL PROTECTED]
>     <mailto:[EMAIL PROTECTED]>> wrote:
>
>
>         I removed everything
>         under /opt/hadoop-data/hadoop/hdfs/data/current/ because it
>         seemed like old files were hanging around and I had to remove
>         them before I could start re-initialization.
>
>
>         I didn't move anything to /tmp or try reboot.
>         My old accumulo instance had everything under /accumulo (in
>         hdfs) and it's still there, but I'm guessing that deleting
>         things from hadoop-data has removed a bunch of its data.
>
>         I tried to restart zookeeper and hadoop and they came up fine,
>         but now my namenode URL says there are 0 live nodes (instead
>         of the 5 in my cluster). Running ps -ef | grep hadoop on each
>         node in the cluster, however, shows that hadoop is running, so
>         I am not sure what I messed up. Suggestions?
>
>         Have I lost accumulo for good? Should I just recreate the
>         instance?
>
>
>         On Wed, Mar 27, 2013 at 10:52 AM, Eric Newton
>         <[EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>> wrote:
>
>             Your DataNode has not started and reported blocks to the
>             NameNode.
>
>             Did you store things (zookeeper, hadoop) in /tmp and
>             reboot?  It's a common thing to do, and it commonly
>             deletes everything in /tmp.  If that's the case, you will
>             need to shutdown hdfs and run:
>
>             $ hadoop namenode -format
>
>             And then start hdfs again.
>
>             -Eric
>
>
>             On Wed, Mar 27, 2013 at 10:47 AM, Aji Janis
>             <[EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>> wrote:
>
>                 I see thank you. When I bring up hdfs (start-all from
>                 node with jobtracker) I see the following message on
>                 url: http://mynode:50070/dfshealth.jsp
>
>                 Safe mode is ON. The ratio of reported blocks 0.0000
>                 has not reached the threshold 0.9990. Safe mode will
>                 be turned off automatically.
>                 2352 files and directories, 2179 blocks = 4531
>                 total. Heap Size is 54 MB / 888.94 MB (6%)
>
>                 What's going on here?
>
>
>
>                 On Wed, Mar 27, 2013 at 10:44 AM, Eric Newton
>                 <[EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>>
>                 wrote:
>
>                     This will (eventually) delete everything created
>                     by accumulo in hdfs:
>
>                     $ hadoop fs -rmr /accumulo
>
>                     Accumulo will create a new area to hold your
>                     configurations.  Accumulo will basically abandon
>                     that old configuration.  There's a class that can
>                     be used to clean up old accumulo instances in
>                     zookeeper:
>
>                     $ ./bin/accumulo
>                     org.apache.accumulo.server.util.CleanZookeeper
>                     hostname:port
>
>                     Where "hostname:port" is the name of one of your
>                     zookeeper hosts.
>
>                     -Eric
>
>
>
>                     On Wed, Mar 27, 2013 at 10:29 AM, Aji Janis
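For completeness, the safe-mode state quoted earlier in the thread can be inspected and, once the datanodes have reported their blocks, cleared from the shell. A sketch using the standard `hadoop dfsadmin` safe-mode subcommands; the wrapper names are illustrative, and force-leaving safe mode while blocks are still missing risks data loss.

```shell
# Thin wrappers around the standard dfsadmin safe-mode subcommands
# (wrapper names are this sketch's, not Hadoop's).
safemode_status() {
    hadoop dfsadmin -safemode get    # prints e.g. "Safe mode is ON"
}

leave_safemode() {
    hadoop dfsadmin -safemode leave  # force HDFS out of safe mode
}
```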