Re: Waiting for accumulo to be initialized
Well, it was test data, so saving it wasn't a high priority but a 'nice to
have'. Prior to asking the question here, I checked out this blog:
http://apache-accumulo.1065345.n5.nabble.com/Re-init-Accumulo-over-existing-installation-td345.html
So I knew data would be lost.

The reason I ask about saving data is that we are not quite sure why
ZooKeeper got hosed in the first place, and if this issue happened in prod I'd
like to have some suggestions handy for saving the data.

Responding inline...
Zookeeper crashing (or even `kill -9`ing) should have no effect on Hadoop.
Did Hadoop come up correctly before you tried to restart Accumulo?
-- Yes.
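For what it's worth, by "yes" I mean checks along these lines came back clean -
just a rough sketch, assuming a Hadoop 1.x layout with HADOOP_HOME pointing at
the install:

    # HDFS has no dependency on ZooKeeper, so it can be checked on its own
    $HADOOP_HOME/bin/hadoop dfsadmin -report   # every datanode should report in
    $HADOOP_HOME/bin/hadoop fsck /             # filesystem should come back HEALTHY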

Did you then do the `hadoop namenode -format` and expect to keep your data?
If so, lesson learned?
-- Prior to trying the Hadoop reformat (since I knew it would be destructive) I
tried to stop ZooKeeper cleanly - hoping that might clean up something - clearly
not. I am fairly new to this, so definitely lesson learned.
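The "clean stop" I attempted was along these lines - a sketch only;
ZOOKEEPER_HOME and the default client port 2181 are assumptions about our setup:

    # check the server before and after; a healthy server answers "imok"
    echo ruok | nc localhost 2181
    $ZOOKEEPER_HOME/bin/zkServer.sh status   # reports standalone / leader / follower
    $ZOOKEEPER_HOME/bin/zkServer.sh stop     # clean shutdown
    $ZOOKEEPER_HOME/bin/zkServer.sh start    # bring it back before restarting Accumulo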

On Wed, Mar 27, 2013 at 3:40 PM, Josh Elser <[EMAIL PROTECTED]> wrote:

>  First off, I'm sorry about you losing data. I thought you recognized that
> this would be destructive to your data, given the link you sent out. I
> wasn't really advising you from a "saving data" standpoint.
>
> Zookeeper crashing (or even `kill -9`ing) should have no effect on Hadoop.
> Did Hadoop come up correctly before you tried to restart Accumulo? Did you
> then do the `hadoop namenode -format` and expect to keep your data? If so,
> lesson learned?
>
>
>
> On 3/27/13 3:18 PM, Aji Janis wrote:
>
> Eric and Josh, thanks for all your feedback. We ended up *losing all our
> accumulo data* because I had to reformat Hadoop. Here is, in a nutshell,
> what I did:
>
>
>    1. Stop accumulo
>    2. Stop hadoop
>    3. On hadoop master and all datanodes, from dfs.data.dir
>    (hdfs-site.xml) remove everything under the data folder
>    4. On hadoop master, from dfs.name.dir (hdfs-site.xml) remove
>    everything under the name folder
>    5. As hadoop user, execute .../hadoop/bin/hadoop namenode -format
>    6. As hadoop user, execute .../hadoop/bin/start-all.sh ==> should
>    populate the data/ and name/ dirs that were erased in steps 3 and 4.
>    7. Initialize Accumulo - as accumulo user, execute ../accumulo/bin/accumulo
>    init (I created a new instance)
>    8. Start accumulo
>
>  I was wondering if anyone had suggestions or thoughts on how I could
> have solved the original issue of accumulo waiting to be initialized without
> losing my accumulo data? Is it possible to do so?
>
>
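
For anyone who lands on this thread later, the steps quoted above boil down to
roughly the following. This is only a sketch and it is destructive; the *_HOME
variables and the /data/hadoop paths are placeholders - use the dfs.data.dir
and dfs.name.dir values from your own hdfs-site.xml:

    $ACCUMULO_HOME/bin/stop-all.sh               # 1. stop Accumulo
    $HADOOP_HOME/bin/stop-all.sh                 # 2. stop Hadoop
    rm -rf /data/hadoop/dfs/data/*               # 3. on every datanode (dfs.data.dir)
    rm -rf /data/hadoop/dfs/name/*               # 4. on the master (dfs.name.dir)
    $HADOOP_HOME/bin/hadoop namenode -format     # 5. reformat HDFS - destroys all data
    $HADOOP_HOME/bin/start-all.sh                # 6. start Hadoop; repopulates data/ and name/
    $ACCUMULO_HOME/bin/accumulo init             # 7. create a brand new Accumulo instance
    $ACCUMULO_HOME/bin/start-all.sh              # 8. start Accumulo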
>