Offline the table. It may take some time for it to settle down.
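For example, assuming the broken table is named "event" (substitute your
own table name):
shell> offline event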
Shut down the Accumulo garbage collector:
$ pkill -f =gc
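To verify the gc process is really gone (pgrep should be available
anywhere pkill is):
$ pgrep -f =gc   # should print nothing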
Grant the root user write permission on the !METADATA table:
shell> grant -u root -t !METADATA Table.WRITE
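If you want to double-check the grant, the shell's userpermissions
command will list it:
shell> userpermissions -u root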
Find your table id:
shell> tables -l
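The output maps each table name to its internal id; it looks something
like this (the "event" id below is made up):
!METADATA => !0
event => 2f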
Now switch to the !METADATA table (deletemany operates on the current
table) and use the id to construct a delete command:
shell> table !METADATA
shell> deletemany -c file -s id; -e id<
It's important that you use your actual table id, and not the literal
"id", in the above command: a table's metadata rows run from "id;" (its
tablets with an end row) through "id<" (its default tablet).
When you get tired of typing yes to each file, stop the shell and re-run
the command with -f to delete without prompting:
shell> deletemany -f -c file -s id; -e id<
Now go and move the directory in HDFS:
$ hadoop fs -mv /accumulo/tables/id /files-from-dead-table
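A quick listing confirms the files made it over:
$ hadoop fs -ls /files-from-dead-table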
You can bulk import the directories in /files-from-dead-table after you
bring the table back online with some appropriate splits.
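As a rough sketch of that last step (the splits file and the directory
names below are placeholders, and the failure directory for
importdirectory must already exist in HDFS and be empty):
shell> online event
shell> addsplits -t event -sf /local/path/splits.txt
shell> table event
shell> importdirectory /files-from-dead-table/default_tablet /tmp/import-failures false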
The Accumulo garbage collector will complain about the missing files
once it is running again, so expect those warnings.
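If your install uses the stock scripts, restarting the collector looks
something like this (the start-server.sh invocation is an assumption
about your layout; adjust the host and path to match):
$ $ACCUMULO_HOME/bin/start-server.sh `hostname` gc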
On Mon, Mar 4, 2013 at 10:53 AM, Corey Nolet <[EMAIL PROTECTED]> wrote:
> We have a sharded-event table that failed miserably when we accidentally
> tried to merge all of the tablets together. When starting accumulo, the
> monitor page says the event table (once having 43k tablets) now has 5
> tablets and 1.05B rows. There are 14.5k unassigned tablets. The tablet
> servers each have response times ranging from 10s to 1m until eventually
> they all die. We thought it may have been the ulimit on the Accumulo
> master being set to 1024, but raising it to 65535 didn't seem to have
> any immediate effect.
> The Accumulo shell freezes when we try to drop the event table, and
> we've gotten a little experimental (trying to remove the references to
> the RFiles in the !METADATA table manually, removing the reference to
> the event table in ZooKeeper, etc.). Our experiments have mostly ended
> in permissions issues.
> With that, do you guys have any good techniques/tools for unlinking a
> busted and unresponsive table? All of the other tables/tablets seem to be
> doing just fine.
> Thanks in advance!