Accumulo >> mail # user >> METADATA recovery


Re: METADATA recovery
I know trunk has the ability to run `./bin/accumulo rfile-info -d
/accumulo/path/to/rfile`. If that's unavailable, you can run
`./bin/accumulo org.apache.accumulo.core.file.rfile.PrintInfo
-d /accumulo/path/to/rfile`. I'll defer to someone else for the walogs.
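For dumping the .rf files Denis asks about below, the PrintInfo invocation can be wrapped in a small helper that builds the command for each RFile listed out of HDFS. This is only a sketch: the table id `1` and file name `A000abcd.rf` are hypothetical, and the helper just prints each command line so the batch can be reviewed before anything is run.

```shell
#!/bin/sh
# Build (but do not run) a PrintInfo command for one RFile path.
print_rfile_cmd() {
    echo "./bin/accumulo org.apache.accumulo.core.file.rfile.PrintInfo -d $1"
}

# Feed it every .rf file under a table directory, e.g.:
#   hadoop fs -ls -R /accumulo/tables/1 | awk '/\.rf$/ {print $NF}' |
#       while read -r f; do print_rfile_cmd "$f"; done
# A single hypothetical file, for illustration:
print_rfile_cmd /accumulo/tables/1/default_tablet/A000abcd.rf
```

Dropping the `echo` (or piping the generated lines to `sh`) would actually run the dumps; keeping them as text first makes it easy to spot-check the paths against the fsck output.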

On Sun, Aug 19, 2012 at 12:50 AM, Denis <[EMAIL PROTECTED]> wrote:

> Hi.
>
> That is what I am going to do.
> I still have terabytes of .rf files (!METADATA was the only table
> affected by the crash) and gigabytes of walog files, and I am trying to
> extract the info from them and insert it into a new database.
>
> Do you know of any dumping tools for .rf and walog files? (I started to
> create my own, but existing ones would save time.)
> Am I right in understanding that if all the content of the .rf and walog
> files is just inserted into a new db, the VersioningIterator will remove
> any collisions they may have?
>
> On 8/19/12, John Vines <[EMAIL PROTECTED]> wrote:
> > When you have a namenode failure and you recover with the Secondary
> > Namenode info, you're dealing with one level of potentially expired
> > pointers. On top of that, you have more layers of pointers WRT the root
> > tablet and !METADATA tablets. You can make attempts to recover, but what
> > is more apt to happen is you'll get a Root tablet up that has some, but
> > not all, of the current !METADATA table files. And then the ones you do
> > get up may or may not be pointing to the existing files for your tablets.
> >
> > What I'm ultimately trying to say is that you have already lost some
> > files; you are more apt to lose more by trying to recover your old
> > information instead of taking what you have and starting over. I would
> > suggest taking your accumulo directory, moving it to accumulo_old or
> > something along those lines, reinstantiating a new instance, and bulk
> > importing the remaining old information back into the new system.
> >
> > John
> >
> > On Sat, Aug 18, 2012 at 11:08 PM, Denis <[EMAIL PROTECTED]> wrote:
> >
> >> Hi.
> >>
> >> I am having trouble with my Accumulo installation.
> >> After a hardware failure on the NameNode, the !METADATA table's
> >> root_tablet is broken :(
> >>
> >> From "fsck /" output:
> >> ....
> >> /accumulo/tables/!0/root_tablet/A000ornd.rf: CORRUPT block
> >> blk_-8590712379082603283
> >> /accumulo/tables/!0/root_tablet/A000ornd.rf: MISSING 1 blocks of total
> >> size 896 B..
> >> ....
> >>
> >>
> >> What would you recommend to recover the data?
> >> Is it possible to reconstruct the !METADATA table's root_tablet based
> >> on the rest of the !METADATA files?
> >> Or is it possible to reconstruct the whole !METADATA table based on
> >> the content of all the found tablets?
> >> Are there any ready-made tools to do it?
> >>
> >> Thanks.
> >>
> >
>
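The start-over workflow John outlines can be sketched as a short shell sequence. This is a sketch under assumptions, not a definitive procedure: the paths, the old table id `1`, and the failures directory are hypothetical, and every command is echoed rather than executed (unset `DRY_RUN` to run them for real), since the move and re-init steps are destructive.

```shell
#!/bin/sh
# Sketch of the "move aside, re-init, bulk import" recovery.
# DRY_RUN (default: yes) makes run() echo commands instead of executing them.
DRY_RUN=${DRY_RUN:-yes}
run() { if [ -n "$DRY_RUN" ]; then echo "+ $*"; else "$@"; fi; }

# 1. See which files HDFS considers corrupt or missing.
run hadoop fsck /accumulo -files -blocks

# 2. Move the damaged instance aside.
run hadoop fs -mv /accumulo /accumulo_old

# 3. Initialize a fresh instance (prompts for instance name and password).
run ./bin/accumulo init

# 4. Recreate the tables, then bulk import the surviving RFiles.
#    importdirectory is an Accumulo shell command (run it inside
#    `./bin/accumulo shell -u root` with the target table selected);
#    the failures directory must exist and be empty, and `1` is a
#    hypothetical old table id.
run hadoop fs -mkdir /tmp/failures
echo 'importdirectory /accumulo_old/tables/1/default_tablet /tmp/failures false'
```

Leaving the old directory in place under `accumulo_old` means the original RFiles survive untouched if a bulk import has to be retried.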