Re: How to remove all traces of a dropped table.
David,

  I have only seen this once before, and I actually had to drop the META
table and rebuild it with HBCK. After that the import worked. I am pretty
sure I cleaned up ZK as well. It was very strange indeed. If you can
reproduce this, can you open a JIRA, as this is no longer a one-off scenario?
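
As a rough, untested sketch of what "checking for leftovers" can look like
before reaching for an offline META rebuild: the snippet below scans .META.
for rows that still carry the dropped table's name and checks whether a table
znode for it is still present in ZooKeeper. It assumes the 0.92-era client
API and a /hbase/table/<name> znode layout; the table name my_table, the
class name and the exact znode path are illustrative only, so verify against
your own build before deleting anything.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.zookeeper.ZKUtil;
import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;

public class DroppedTableLeftovers {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    String table = "my_table";                   // hypothetical table name
    byte[] prefix = Bytes.toBytes(table + ",");  // .META. rows start with "<table>,"

    // 1) Scan .META. for rows that still reference the dropped table.
    HTable meta = new HTable(conf, ".META.");
    ResultScanner scanner = meta.getScanner(new Scan(prefix));
    for (Result r : scanner) {
      if (!Bytes.startsWith(r.getRow(), prefix)) {
        break;                                   // past the table's key range
      }
      System.out.println("stale .META. row: " + Bytes.toStringBinary(r.getRow()));
    }
    scanner.close();
    meta.close();

    // 2) Check whether a table znode is still hanging around in ZooKeeper.
    //    The "table" child under the base znode is an assumption for 0.92-era layouts.
    ZooKeeperWatcher zkw = new ZooKeeperWatcher(conf, "dropped-table-check", null);
    String base = conf.get("zookeeper.znode.parent", "/hbase");
    String znode = ZKUtil.joinZNode(ZKUtil.joinZNode(base, "table"), table);
    boolean present = ZKUtil.checkExists(zkw, znode) != -1;
    System.out.println(znode + (present ? " still exists" : " is gone"));
    zkw.close();
  }
}

Anything this turns up is the kind of stale state that the META rebuild and
ZK cleanup described above had to remove by hand.
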
On Apr 25, 2013 9:28 AM, "Jean-Marc Spaggiari" <[EMAIL PROTECTED]>
wrote:

> Hi David,
>
> After you dropped your table, did you look into the ZK server to see
> if all nodes related to this table got removed too?
>
> Also, have you tried to run HBCK after the drop to see if your system is
> fine?
>
> JM
>
> 2013/4/16 David Koch <[EMAIL PROTECTED]>:
> > Hello,
> >
> > We had problems scanning over a large (~8k regions) table, so we
> > disabled and dropped it and decided to re-import the data from scratch
> > into a table with the SAME name. This never worked, and I list some log
> > extracts below.
> >
> > The only way to make the import go through was to import into a table
> > with a different name. Hence my question:
> >
> > How do I remove all traces of a table which was dropped? Our cluster
> > consists of 30 machines, running CDH4.0.1 with HBase 0.92.1.
> >
> > Thank you,
> >
> > /David
> >
> > Log stuff:
> >
> > The Mapper job reads text and outputs Puts. A couple of minutes into
> > the job, it fails with the following message in the task log:
> >
> > 2013-04-16 17:11:16,918 WARN org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: Encountered problems when prefetch META table:
> > java.io.IOException: HRegionInfo was null or empty in Meta for my_table, row=my_table,\xC1\xE7T\x01a8OM\xB0\xCE/\x97\x88"\xB7y,99999999999999
> >
> > <repeat 9 times>
> >
> > 2013-04-16 17:11:16,924 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
> > 2013-04-16 17:11:16,926 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:jenkins (auth:SIMPLE) cause:java.io.IOException: HRegionInfo was null or empty in .META., row=keyvalues={my_table,\xA4\xDC\x82\x84OAB\xC1\xBA\xE9\xE7\xA9\xE8\x81\x16\x09,1365996567593.50bb0cbde855cbdc4006051531dba162./info:server/1366035344492/Put/vlen=22, my_table,\xA4\xDC\x82\x84OAB\xC1\xBA\xE9\xE7\xA9\xE8\x81\x16\x09,1365996567593.50bb0cbde855cbdc4006051531dba162./info:serverstartcode/1366035344492/Put/vlen=8}
> > 2013-04-16 17:11:16,926 WARN org.apache.hadoop.mapred.Child: Error running child
> > java.io.IOException: HRegionInfo was null or empty in .META., row=keyvalues={my_table,\xA4\xDC\x82\x84OAB\xC1\xBA\xE9\xE7\xA9\xE8\x81\x16\x09,1365996567593.50bb0cbde855cbdc4006051531dba162./info:server/1366035344492/Put/vlen=22, my_table,\xA4\xDC\x82\x84OAB\xC1\xBA\xE9\xE7\xA9\xE8\x81\x16\x09,1365996567593.50bb0cbde855cbdc4006051531dba162./info:serverstartcode/1366035344492/Put/vlen=8}
> >     at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:957)
> >     at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:818)
> >     at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1524)
> >     at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1409)
> >     at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:943)
> >     at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:820)
> >     at org.apache.hadoop.hbase.client.HTable.put(HTable.java:795)
> >     at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:121)
> >     at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:82)
> >     at