How to remove all traces of a dropped table.
David Koch 2013-04-16, 16:04
Hello,

We had problems scanning over a large (~8k regions) table, so we disabled and
dropped it and decided to re-import the data from scratch into a table with
the SAME name. This never worked; I list some log extracts below.
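
For reference, the disable/drop itself was done the standard way; in terms of
the Java client API it amounted to roughly the following (a minimal sketch;
the class name is just a placeholder, and my_table is the name that appears in
the logs below):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

// Minimal sketch of the disable/drop step ("my_table" as in the logs below).
public class DropTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);
    if (admin.tableExists("my_table")) {
      admin.disableTable("my_table"); // a table must be disabled before it can be deleted
      admin.deleteTable("my_table");  // removes the table descriptor and its regions
    }
  }
}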

The only way to make the import go through was to import into a table with
a different name. Hence my question:

How do I remove all traces of a table which was dropped? Our cluster
consists of 30 machines, running CDH4.0.1 with HBase 0.92.1.

Thank you,

/David

Log stuff:

The mapper job reads text and outputs Puts (a sketch of such a mapper follows
the task log below). A couple of minutes into the job it fails with the
following message in the task log:

2013-04-16 17:11:16,918 WARN org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: Encountered problems when prefetch META table:
java.io.IOException: HRegionInfo was null or empty in Meta for my_table, row=my_table,\xC1\xE7T\x01a8OM\xB0\xCE/\x97\x88"\xB7y,99999999999999

<repeat 9 times>

2013-04-16 17:11:16,924 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
2013-04-16 17:11:16,926 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:jenkins (auth:SIMPLE) cause:java.io.IOException: HRegionInfo was null or empty in .META., row=keyvalues={my_table,\xA4\xDC\x82\x84OAB\xC1\xBA\xE9\xE7\xA9\xE8\x81\x16\x09,1365996567593.50bb0cbde855cbdc4006051531dba162./info:server/1366035344492/Put/vlen=22, my_table,\xA4\xDC\x82\x84OAB\xC1\xBA\xE9\xE7\xA9\xE8\x81\x16\x09,1365996567593.50bb0cbde855cbdc4006051531dba162./info:serverstartcode/1366035344492/Put/vlen=8}
2013-04-16 17:11:16,926 WARN org.apache.hadoop.mapred.Child: Error running child
java.io.IOException: HRegionInfo was null or empty in .META., row=keyvalues={my_table,\xA4\xDC\x82\x84OAB\xC1\xBA\xE9\xE7\xA9\xE8\x81\x16\x09,1365996567593.50bb0cbde855cbdc4006051531dba162./info:server/1366035344492/Put/vlen=22, my_table,\xA4\xDC\x82\x84OAB\xC1\xBA\xE9\xE7\xA9\xE8\x81\x16\x09,1365996567593.50bb0cbde855cbdc4006051531dba162./info:serverstartcode/1366035344492/Put/vlen=8}
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:957)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:818)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1524)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1409)
    at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:943)
    at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:820)
    at org.apache.hadoop.hbase.client.HTable.put(HTable.java:795)
    at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:121)
    at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:82)
    at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:533)
    at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:88)
    at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:106)
    at com.mycompany.data.tools.export.Export2HBase$JsonImporterMapper.map(Export2HBase.java:81)
    at com.mycompany.data.tools.export.Export2HBase$JsonImporterMapper.map(Export2HBase.java:50)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:140)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:645)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:325)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
    at org.apache.hadoop.mapred.Child.main(Child.java:264)
2013-04-16 17:11:16,929 INFO org.apache.hadoop.mapred.Task: Runnning cleanup for the task
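
For context, the mapper boils down to something like this (a simplified
sketch: the column family and tab-separated record layout are placeholders,
and the real JSON parsing in Export2HBase$JsonImporterMapper is omitted):

import java.io.IOException;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Simplified stand-in for Export2HBase$JsonImporterMapper: one Put per input line.
public class JsonImporterMapper
    extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {

  private static final byte[] FAMILY = Bytes.toBytes("d"); // placeholder column family

  @Override
  protected void map(LongWritable offset, Text line, Context context)
      throws IOException, InterruptedException {
    // Assumed record layout: <rowkey>\t<json>; the real parsing is not shown here.
    String[] parts = line.toString().split("\t", 2);
    if (parts.length < 2) {
      return; // skip malformed lines
    }
    byte[] row = Bytes.toBytes(parts[0]);
    Put put = new Put(row);
    put.add(FAMILY, Bytes.toBytes("json"), Bytes.toBytes(parts[1]));
    // TableOutputFormat's record writer turns this into HTable.put(), which is
    // where the stack trace above fails while locating regions in .META.
    context.write(new ImmutableBytesWritable(row), put);
  }
}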

The master server log contains entries like this:

WARN org.apache.hadoop.hbase.master.CatalogJanitor: REGIONINFO_QUALIFIER is empty in keyvalues={my_table,\xA4\xDC\x82\x84OAB\xC1\xBA\xE9\xE7\xA9\xE8\x81\x16\x09,1365996567593.50bb0cbde855cbdc4006051531dba162./info:server/1366035344492/Put/vlen=22, my_table,\xA4\xDC\x82\x84OAB\xC1\xBA\xE9\xE7\xA9\xE8\x81\x16\x09,1365996567593.50bb0cbde855cbdc4006051531dba162./info:serverstartcode/1366035344492/Put/vlen=8}

We tried pre-splitting the new table; same outcome. We also deleted all the
ZooKeeper data under /hbase using zkcli; that did not help either.
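
Judging from those messages, .META. still contains rows for the old my_table
whose info:regioninfo cell is missing, which would explain both the client
error and the CatalogJanitor warning. A read-only scan along these lines
should show them (a minimal sketch against the 0.92 client API; the class
name is just a placeholder):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

// Read-only check for leftover .META. rows of the dropped table.
public class FindStaleMetaRows {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable meta = new HTable(conf, ".META.");
    try {
      // Region rows in .META. are keyed "<table>,<startkey>,<timestamp>.<id>."
      Scan scan = new Scan(Bytes.toBytes("my_table,"));
      scan.addFamily(Bytes.toBytes("info"));
      ResultScanner scanner = meta.getScanner(scan);
      for (Result r : scanner) {
        String row = Bytes.toStringBinary(r.getRow());
        if (!row.startsWith("my_table,")) {
          break; // past the dropped table's rows
        }
        byte[] regionInfo = r.getValue(Bytes.toBytes("info"), Bytes.toBytes("regioninfo"));
        if (regionInfo == null || regionInfo.length == 0) {
          // The condition behind "HRegionInfo was null or empty in .META."
          System.out.println("stale .META. row: " + row);
        }
      }
      scanner.close();
    } finally {
      meta.close();
    }
  }
}

Would deleting those stale rows by hand, or running hbase hbck, be the right
way to clean this up?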

Replies:
+ Jean-Marc Spaggiari 2013-04-25, 13:27
+ Kevin Odell 2013-04-25, 13:55
+ David Koch 2013-04-28, 18:24