Accumulo, mail # user - Waiting for accumulo to be initialized


Re: Waiting for accumulo to be initialized
Aji Janis 2013-03-27, 15:48
Actually, this guide explains that running ./hadoop namenode -format would cause this issue:
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/#javaioioexception-incompatible-namespaceids
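For anyone hitting the same thing, the workaround in that guide is roughly the following; the path assumes the datanode data directory from the log below (adjust to your dfs.data.dir), and the namespaceID value is the one the namenode error reports:

# In either case, stop the datanode first
$ ./bin/hadoop-daemon.sh stop datanode
# Option 1: make the datanode's namespaceID match the namenode's
$ vi /opt/hadoop-data/hadoop/hdfs/data/current/VERSION    # set namespaceID = 2089335599
# Option 2 (loses that node's blocks): wipe the data dir instead
$ rm -rf /opt/hadoop-data/hadoop/hdfs/data/current
# Then bring the datanode back up
$ ./bin/hadoop-daemon.sh start datanode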
On Wed, Mar 27, 2013 at 11:31 AM, Aji Janis <[EMAIL PROTECTED]> wrote:

> well... I found this in the datanode log
>
>  ERROR org.apache.hadoop.hdfs.server.datanode.DataNode:
> java.io.IOException: Incompatible namespaceIDs in
> /opt/hadoop-data/hadoop/hdfs/data: namenode namespaceID = 2089335599;
> datanode namespaceID = 1868050007
>
>
>
>
> On Wed, Mar 27, 2013 at 11:23 AM, Eric Newton <[EMAIL PROTECTED]> wrote:
>
>> "0 live nodes"  that will continue to be a problem.
>>
>> Check the datanode logs.
>>
>> -Eric
>>
>>
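If it helps, with a default layout the datanode logs sit under $HADOOP_HOME/logs; something like this on each node surfaces the failure quickly (log names assume the stock hadoop-<user>-datanode-<host>.log naming):

$ grep -iE "error|exception" $HADOOP_HOME/logs/hadoop-*-datanode-*.log | tail -20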
>> On Wed, Mar 27, 2013 at 11:20 AM, Aji Janis <[EMAIL PROTECTED]> wrote:
>>
>>>
>>> I removed everything under /opt/hadoop-data/hadoop/hdfs/data/current/
>>> because it seemed like old files were hanging around and I had to remove
>>> them before I could start re-initialization.
>>>
>>>
>>> I didn't move anything to /tmp or reboot.
>>> My old Accumulo instance had everything under /accumulo (in HDFS) and
>>> it's still there, but I'm guessing that deleting things from hadoop-data
>>> has wiped out a lot of its data.
>>>
>>> I tried to restart ZooKeeper and Hadoop and they came up fine, but now my
>>> namenode URL says there are 0 live nodes (instead of the 5 in my cluster).
>>> Doing a ps -ef | grep hadoop on each node in the cluster, however, shows
>>> that Hadoop is running... so I am not sure what I messed up. Suggestions?
>>>
>>> Have I lost Accumulo for good? Should I just recreate the instance?
>>>
>>>
>>> On Wed, Mar 27, 2013 at 10:52 AM, Eric Newton <[EMAIL PROTECTED]> wrote:
>>>
>>>> Your DataNode has not started and reported blocks to the NameNode.
>>>>
>>>> Did you store things (zookeeper, hadoop) in /tmp and reboot?  It's a
>>>> common thing to do, and it commonly deletes everything in /tmp.  If that's
>>>> the case, you will need to shut down hdfs and run:
>>>>
>>>> $ hadoop namenode -format
>>>>
>>>> And then start hdfs again.
>>>>
>>>> -Eric
>>>>
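Spelling out that sequence, assuming Hadoop 1.x-style scripts in the Hadoop bin directory (paths vary by install); note that formatting throws away the existing HDFS namespace, so anything stored in HDFS is gone afterwards:

$ ./bin/stop-dfs.sh
$ ./bin/hadoop namenode -format
$ ./bin/start-dfs.sh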
>>>>
>>>> On Wed, Mar 27, 2013 at 10:47 AM, Aji Janis <[EMAIL PROTECTED]> wrote:
>>>>
>>>>> I see, thank you. When I bring up HDFS (start-all from the node with the
>>>>> jobtracker) I see the following message at
>>>>> http://mynode:50070/dfshealth.jsp:
>>>>>
>>>>> Safe mode is ON. The ratio of reported blocks 0.0000 has not reached
>>>>> the threshold 0.9990. Safe mode will be turned off automatically.
>>>>> 2352 files and directories, 2179 blocks = 4531 total. Heap Size is
>>>>> 54 MB / 888.94 MB (6%)
>>>>>
>>>>> What's going on here?
>>>>>
>>>>>
>>>>>
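That message is the safe-mode state Eric describes above: no datanode has reported any blocks yet. The stock dfsadmin tool can inspect the state, or force HDFS out of safe mode if you accept that the unreported blocks are lost:

$ hadoop dfsadmin -safemode get
$ hadoop dfsadmin -safemode leave    # only if the missing blocks really are gone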
>>>>> On Wed, Mar 27, 2013 at 10:44 AM, Eric Newton <[EMAIL PROTECTED]> wrote:
>>>>>
>>>>>> This will (eventually) delete everything created by accumulo in hdfs:
>>>>>>
>>>>>> $ hadoop fs -rmr /accumulo
>>>>>>
>>>>>> Accumulo will create a new area to hold your configurations.
>>>>>>  Accumulo will basically abandon that old configuration.  There's a class
>>>>>> that can be used to clean up old accumulo instances in zookeeper:
>>>>>>
>>>>>> $ ./bin/accumulo org.apache.accumulo.server.util.CleanZookeeper
>>>>>> hostname:port
>>>>>>
>>>>>> Where "hostname:port" is the name of one of your zookeeper hosts.
>>>>>>
>>>>>> -Eric
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Wed, Mar 27, 2013 at 10:29 AM, Aji Janis <[EMAIL PROTECTED]> wrote:
>>>>>>
>>>>>>> Thanks Eric. But shouldn't I be cleaning up something in the
>>>>>>> hadoop-data directory too? And ZooKeeper?
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Wed, Mar 27, 2013 at 10:27 AM, Eric Newton <[EMAIL PROTECTED]> wrote:
>>>>>>>
>>>>>>>> To re-initialize accumulo, bring up zookeeper and hdfs.
>>>>>>>>
>>>>>>>> $ hadoop fs -rmr /accumulo
>>>>>>>> $ ./bin/accumulo init
>>>>>>>>
>>>>>>>> I do this about 100 times a day on my dev box. :-)
>>>>>>>>
>>>>>>>> -Eric
>>>>>>>>
>>>>>>>>
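Putting the steps from this thread together, a full re-initialization pass looks roughly like this; script names assume stock ZooKeeper, Hadoop 1.x, and Accumulo layouts, so adjust paths to your install:

$ zkServer.sh start            # on each zookeeper host
$ start-dfs.sh                 # bring hdfs back up
$ hadoop fs -rmr /accumulo     # drop the old accumulo area in hdfs
$ ./bin/accumulo init          # prompts for a new instance name and root password
$ ./bin/start-all.sh           # start the accumulo processes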
>>>>>>>> On Wed, Mar 27, 2013 at 10:10 AM, Aji Janis <[EMAIL PROTECTED]> wrote: