HBase >> mail # user >> How to remove all traces of a dropped table.


David Koch 2013-04-16, 16:04
Re: How to remove all traces of a dropped table.
Hi David,

After you dropped your table, did you look into ZooKeeper to see
whether all znodes related to this table were removed too?

Also, have you tried running hbck after the drop to see whether your system is fine?

JM
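A minimal sketch of the two checks suggested above, assuming HBase 0.92.x with the stock scripts on the PATH; the ZooKeeper path for table znodes is version-dependent, so adjust it for your cluster:

```shell
# List the znodes HBase keeps under its root; on 0.92.x, table state
# znodes typically live under /hbase/table (path varies by version).
hbase zkcli ls /hbase/table

# Run hbck to check .META. and region assignment consistency
# after the drop (read-only by default).
hbase hbck
```

If the dropped table still shows up in either output, its traces were not fully cleaned up.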

2013/4/16 David Koch <[EMAIL PROTECTED]>:
> Hello,
>
> We had problems with not being able to scan over a large (~8k regions)
> table so we disabled and dropped it and decided to re-import data from
> scratch into a table with the SAME name. This never worked and I list some
> log extracts below.
>
> The only way to make the import go through was to import into a table with
> a different name. Hence my question:
>
> How do I remove all traces of a table which was dropped? Our cluster
> consists of 30 machines, running CDH4.0.1 with HBase 0.92.1.
>
> Thank you,
>
> /David
>
> Log stuff:
>
> The mapper job reads text and its output is Puts. A couple of minutes into
> the job it fails with the following message in the task log:
>
> 2013-04-16 17:11:16,918 WARN
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
> Encountered problems when prefetch META table:
> java.io.IOException: HRegionInfo was null or empty in Meta for my_table,
> row=my_table,\xC1\xE7T\x01a8OM\xB0\xCE/\x97\x88"\xB7y,99999999999999
>
> <repeat 9 times>
>
> 2013-04-16 17:11:16,924 INFO org.apache.hadoop.mapred.TaskLogsTruncater:
> Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
> 2013-04-16 17:11:16,926 ERROR
> org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
> as:jenkins (auth:SIMPLE) cause:java.io.IOException: HRegionInfo was null or
> empty in .META.,
> row=keyvalues={my_table,\xA4\xDC\x82\x84OAB\xC1\xBA\xE9\xE7\xA9\xE8\x81\x16\x09,1365996567593.50bb0cbde855cbdc4006051531dba162./info:server/1366035344492/Put/vlen=22,
> my_table,\xA4\xDC\x82\x84OAB\xC1\xBA\xE9\xE7\xA9\xE8\x81\x16\x09,1365996567593.50bb0cbde855cbdc4006051531dba162./info:serverstartcode/1366035344492/Put/vlen=8}
> 2013-04-16 17:11:16,926 WARN org.apache.hadoop.mapred.Child: Error running
> child
> java.io.IOException: HRegionInfo was null or empty in .META.,
> row=keyvalues={my_table,\xA4\xDC\x82\x84OAB\xC1\xBA\xE9\xE7\xA9\xE8\x81\x16\x09,1365996567593.50bb0cbde855cbdc4006051531dba162./info:server/1366035344492/Put/vlen=22,
> my_table,\xA4\xDC\x82\x84OAB\xC1\xBA\xE9\xE7\xA9\xE8\x81\x16\x09,1365996567593.50bb0cbde855cbdc4006051531dba162./info:serverstartcode/1366035344492/Put/vlen=8}
>     at
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:957)
>     at
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:818)
>     at
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1524)
>     at
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1409)
>     at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:943)
>     at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:820)
>     at org.apache.hadoop.hbase.client.HTable.put(HTable.java:795)
>     at
> org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:121)
>     at
> org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:82)
>     at
> org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:533)
>     at
> org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:88)
>     at
> org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:106)
>     at
> com.mycompany.data.tools.export.Export2HBase$JsonImporterMapper.map(Export2HBase.java:81)
>     at
> com.mycompany.data.tools.export.Export2HBase$JsonImporterMapper.map(Export2HBase.java:50)
>     at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:140)
Kevin Odell 2013-04-25, 13:55
David Koch 2013-04-28, 18:24