Accumulo >> mail # user >> Waiting for accumulo to be initialized


Aji Janis 2013-03-27, 14:10
Eric Newton 2013-03-27, 14:27
Aji Janis 2013-03-27, 14:29
Eric Newton 2013-03-27, 14:44
Aji Janis 2013-03-27, 14:47
Eric Newton 2013-03-27, 14:52
Aji Janis 2013-03-27, 15:20
Eric Newton 2013-03-27, 15:23
Aji Janis 2013-03-27, 15:31
Josh Elser 2013-03-27, 15:50
Aji Janis 2013-03-27, 19:18
Eric Newton 2013-03-27, 19:33
Re: Waiting for accumulo to be initialized
When you say "you can move the files aside in HDFS", which files are you
referring to? I have never set up ZooKeeper myself, so I am not aware of all
the changes needed.

On Wed, Mar 27, 2013 at 3:33 PM, Eric Newton <[EMAIL PROTECTED]> wrote:

> If you lose zookeeper, you can move the files aside in HDFS, recreate your
> instance in zookeeper and bulk import all of the old files.  It's not
> perfect: you lose table configurations, split points and user permissions,
> but you do preserve most of the data.
>
> You can back up each of these bits of information periodically if you
> like.  Outside of the files in HDFS, the configuration information is
> pretty small.
>
> -Eric
>
>
>
> On Wed, Mar 27, 2013 at 3:18 PM, Aji Janis <[EMAIL PROTECTED]> wrote:
>
>> Eric and Josh, thanks for all your feedback. We ended up *losing all our
>> accumulo data* because I had to reformat hadoop. Here is, in a nutshell,
>> what I did:
>>
>>
>>    1. Stop accumulo
>>    2. Stop hadoop
>>    3. On hadoop master and all datanodes, from dfs.data.dir
>>    (hdfs-site.xml) remove everything under the data folder
>>    4. On hadoop master, from dfs.name.dir (hdfs-site.xml) remove
>>    everything under the name folder
>>    5. As hadoop user, execute .../hadoop/bin/hadoop namenode -format
>>    6. As hadoop user, execute .../hadoop/bin/start-all.sh ==> should
>>    repopulate the data/ and name/ dirs that were erased in steps 3 and 4.
>>    7. Initialize Accumulo - as accumulo user, execute ../accumulo/bin/accumulo
>>    init (I created a new instance)
>>    8. Start accumulo
>>
>> I was wondering if anyone had suggestions or thoughts on how I could have
>> solved the original issue of accumulo waiting to be initialized without
>> losing my accumulo data? Is it possible to do so?
>>
>
>
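A minimal sketch of the recovery Eric describes above (move the old files aside in HDFS, recreate the instance in ZooKeeper with accumulo init, then bulk import the old RFiles). The paths, table name, and credentials below are assumptions: the Accumulo HDFS directory is taken to be the default /accumulo, "mytable" and the table-id directory are placeholders you would have to locate by hand, and the bulk-import failure directory must exist and be empty.

    # Assumes the default /accumulo directory in HDFS; adjust for your deployment.
    hadoop fs -mv /accumulo /accumulo_old        # move the old files aside
    accumulo init                                # recreate the instance in ZooKeeper/HDFS
    $ACCUMULO_HOME/bin/start-all.sh

    # Recreate each table and bulk import its old RFiles. The table-id
    # directory under /accumulo_old/tables must be found manually.
    hadoop fs -mkdir /tmp/import_failures        # must exist and be empty
    accumulo shell -u root
      > createtable mytable
      > importdirectory /accumulo_old/tables/1/default_tablet /tmp/import_failures false

As Eric notes, this does not bring back table configuration, split points, or user permissions; those have to be reapplied by hand or restored from a backup like the one sketched next.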
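A sketch of the periodic backup Eric mentions, capturing the small bits of configuration that bulk import cannot recover. Table name, user name, and the root password are placeholders.

    # Placeholders: table "mytable", user "someuser", root password "secret".
    accumulo shell -u root -p secret -e "config -t mytable"           > mytable.config
    accumulo shell -u root -p secret -e "getsplits -t mytable"        > mytable.splits
    accumulo shell -u root -p secret -e "userpermissions -u someuser" > someuser.perms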
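For reference, the destructive reformat Aji lists in steps 1-8 above, as a rough shell transcript. The dfs.data.dir / dfs.name.dir paths and the HADOOP_HOME / ACCUMULO_HOME locations are assumptions, and steps 3-4 must be run on the appropriate hosts.

    # WARNING: this wipes HDFS and therefore all Accumulo data.
    $ACCUMULO_HOME/bin/stop-all.sh               # 1. stop Accumulo
    $HADOOP_HOME/bin/stop-all.sh                 # 2. stop Hadoop
    rm -rf /path/to/dfs/data/*                   # 3. dfs.data.dir, on master and every datanode
    rm -rf /path/to/dfs/name/*                   # 4. dfs.name.dir, on the master
    $HADOOP_HOME/bin/hadoop namenode -format     # 5. as the hadoop user
    $HADOOP_HOME/bin/start-all.sh                # 6. repopulates data/ and name/
    $ACCUMULO_HOME/bin/accumulo init             # 7. as the accumulo user, new instance
    $ACCUMULO_HOME/bin/start-all.sh              # 8. start Accumulo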
Eric Newton 2013-03-27, 20:19
Aji Janis 2013-03-27, 20:45
Krishmin Rai 2013-03-27, 21:00
Aji Janis 2013-03-28, 12:56
Josh Elser 2013-03-27, 19:40
Aji Janis 2013-03-27, 19:56
Aji Janis 2013-03-27, 15:48