HBase, mail # user - hbase corruption - missing region files in HDFS


Chris Waterson 2012-12-09, 04:30
Kevin Odell 2012-12-09, 23:00
Chris Waterson 2012-12-09, 23:29
Kevin Odell 2012-12-10, 01:08
Tom Brown 2012-12-10, 18:07

Re: hbase corruption - missing region files in HDFS
Chris Waterson 2012-12-10, 23:03
You bet; see below.  It's a Scala script, and will run as-is if you've got Scala installed.  It should be easy to translate to Java, however.

chris
#!/bin/sh
# Scala script header: re-exec this file through the scala runner with the
# HBase jars on the classpath (via `hbase classpath`).
exec scala -cp `hbase classpath` "$0" "$@"
!#

// Creates a file "/tmp/hfile.dat" that's an empty HFile.
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.FileSystem
import org.apache.hadoop.fs.Path
import org.apache.hadoop.hbase.io.hfile.HFile

object HFileTool {
  def main(args: Array[String]) = {
    val conf = new Configuration
    val path = new Path("file:///tmp/hfile.dat")
    // Open an HFile writer on the path and close it immediately, leaving
    // behind a well-formed HFile that contains no key/values.
    val writer = HFile.getWriterFactory(conf).createWriter(path.getFileSystem(conf), path)
    writer.close()
  }
}
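
As mentioned above, the script translates to Java fairly directly. A rough,
untested equivalent (assuming the same 0.92-era HFile API and the HBase jars
on the classpath) would be:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.io.hfile.HFile;

// Writes a valid but empty HFile to /tmp/hfile.dat, mirroring the Scala script.
public class HFileTool {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path path = new Path("file:///tmp/hfile.dat");
    FileSystem fs = path.getFileSystem(conf);
    HFile.Writer writer = HFile.getWriterFactory(conf).createWriter(fs, path);
    writer.close();  // closing with nothing appended leaves an empty, well-formed HFile
  }
}

Compile and run it with the HBase jars on the classpath, e.g.
"javac -cp `hbase classpath` HFileTool.java" followed by
"java -cp `hbase classpath`:. HFileTool", then copy the output into the
missing store-file location with something like
"hadoop fs -put /tmp/hfile.dat /hbase/<table>/<region>/<family>/<file>"
(placeholder path; use the one from the FileNotFoundException). In principle
the Path could also point straight at the hdfs:// location, since the
FileSystem is derived from the Path, but that variant is untested.
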
On Dec 10, 2012, at 10:07 AM, Tom Brown <[EMAIL PROTECTED]> wrote:

> Chris,
>
> I really appreciate your detailed fix description!  I've run into
> similar problems (due to old hardware and bad sectors) and could never
> figure out how to fix a broken table. Hbck always seemed to just make
> things worse until I would give up and recreate the table.
>
> Can you publish your utility that you used to create valid/empty HFiles?
>
> --Tom
>
> On Sun, Dec 9, 2012 at 6:08 PM, Kevin O'dell <[EMAIL PROTECTED]> wrote:
>> Chris,
>>
>> Thank you for the very descriptive update.
>>
>> On Sun, Dec 9, 2012 at 6:29 PM, Chris Waterson <[EMAIL PROTECTED]> wrote:
>>
>>> Well, I upgraded to 0.92.2, since the version I was running on (0.92.1)
>>> didn't have those options for "hbck".
>>>
>>> That helped.
>>>
>>> It took me a while to realize that I had to make the root filesystem
>>> writable so that "hbck
>>> -repair" could create itself a directory.  So, once that was done, it at
>>> least ran through to completion.
>>>
>>> But the problem persisted in that there were blocks in META that didn't
>>> exist on the filesystem.  One poor region server was assigned the sad task
>>> of attempting to open the non-existent directory, which it slavishly
>>> reattempted again and again, filling its log with FileNotFoundException
>>> stack traces.
>>>
>>> For example,
>>>
>>> 2012-12-09 00:14:33,315 ERROR
>>> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Failed open
>>> of
>>> region=referrers,com.free-hdwallpapers.www/wallpapers/animals/mici/595718.jpg|com.free-hdwallpapers.www/wallpaper/animals/husky/270579,1354964606745.0c54fe59c58ddd6b34042ec98171bff7.
>>> java.io.FileNotFoundException: File does not exist:
>>> /hbase/referrers/2cb553c74d52ddcbf31940f6c7128c63/main/33f1fd9efb944c4e982ba719cd7dde84
>>> etc., etc.
>>>
>>> In particular, the directory above "/hbase/referrers/2cb553...c63" simply
>>> did not exist at all in HDFS.
>>>
>>> So I took matters into my own hands and created the missing
>>> "/hbase/referrers/2cb553...c63" directory, its subdirectory "main", and
>>> attempted to create a zero-length file "33f1fd9...e84".  This changed the
>>> firehose of exceptions from FileNotFoundException to CorruptHFileException.
>>>
>>> So, I wrote a small program to emit a valid, empty HFile, and proceeded to
>>> place these files at whatever places in HDFS that a FileNotFoundException
>>> was being thrown.  After creating three or four of them, the exceptions
>>> stopped.
>>>
>>> I then ran "hbck -repair" again, and upon completion it declared victory.
>>>
>>> Again, I suspect that I got myself into this problem because I ran a
>>> machine out of disk space.  It's likely that most folks are more clever
>>> than me, and so this problem hasn't arisen before. :)
>>>
>>>
>>>
>>>
>>> On Dec 9, 2012, at 3:00 PM, "Kevin O'dell" <[EMAIL PROTECTED]>
>>> wrote:
>>>
>>>> can you run hbase hbck -fixMeta -fixAssignments
>>>>
>>>> This should assign those region servers and fix the hole.
>>>>
>>>> On Sat, Dec 8, 2012 at 11:30 PM, Chris Waterson <[EMAIL PROTECTED]>
>>> wrote:
>>>>
>>>>> Hello!  I've gotten myself into trouble where I'm missing files on HDFS
>>>>> that HBase thinks ought to be there.  In particular, running "hbase
Kyle McGovern 2012-12-12, 05:26
Kyle McGovern 2012-12-10, 03:09
lars hofhansl 2012-12-11, 05:10