HBASE-1200 and HFOF patch
Hi,

HBASE-1200 adds this patch:

diff --git a/core/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat.java b/core/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat.java
index 2c81723..9c8e53e 100644
--- a/core/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat.java
+++ b/core/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat.java
@@ -112,7 +112,10 @@ public class HFileOutputFormat extends FileOutputFormat<ImmutableBytesWritable,
 
       private void close(final HFile.Writer w) throws IOException {
         if (w != null) {
-          StoreFile.appendMetadata(w, System.currentTimeMillis(), true);
+          w.appendFileInfo(StoreFile.MAX_SEQ_ID_KEY,
+              Bytes.toBytes(System.currentTimeMillis()));
+          w.appendFileInfo(StoreFile.MAJOR_COMPACTION_KEY,
+              Bytes.toBytes(true));
           w.close();
         }
       }
I am wondering why this got lumped into the Bloom filter patch, and more to the point, what it actually does. Why are bulk load files written with the major compaction flag set? They can contain deletes, so this seems counterintuitive, no?
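
For reference, if I am reading StoreFile correctly, the helper the patch replaces boils down to something like the following (paraphrased from memory, so the exact signature may differ), which would make the new inlined calls functionally equivalent:

      // Rough sketch of the old StoreFile.appendMetadata helper (not copied
      // verbatim): it writes the same two file-info entries that the patch
      // now writes inline in HFileOutputFormat.
      static void appendMetadata(final HFile.Writer w, final long maxSequenceId,
          final boolean majorCompaction) throws IOException {
        w.appendFileInfo(StoreFile.MAX_SEQ_ID_KEY, Bytes.toBytes(maxSequenceId));
        w.appendFileInfo(StoreFile.MAJOR_COMPACTION_KEY, Bytes.toBytes(majorCompaction));
      }

So the rewrite itself looks purely mechanical; the question remains why MAJOR_COMPACTION_KEY is set to true for bulk-loaded files at all.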

Lars