HBase, mail # dev - Usage of block encoding in bulk loading


Usage of block encoding in bulk loading
Anoop Sam John 2012-05-11, 17:18
Hi Devs
When data is bulk loaded using HFileOutputFormat, I think we are not using the block encoding and the HBase-handled checksum features. When the writer is created for making the HFile, I do not see any such info being passed to the WriterBuilder.
In HFileOutputFormat.getNewWriter(byte[] family, Configuration conf), we don't have this info and we don't pass it to the writer either, so those HFiles will not have these optimizations.
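Just to make it concrete, the kind of thing I would expect in getNewWriter() is the sketch below. This is a rough sketch only: the builder method names are how I read the 0.94 WriterFactory (please double-check them), and the DataBlockEncoding argument would have to come from a new per-family map serialized into the job conf, similar to the existing per-family compression map, which is a hypothetical addition here.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.KeyValue;
    import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
    import org.apache.hadoop.hbase.io.hfile.CacheConfig;
    import org.apache.hadoop.hbase.io.hfile.HFile;
    import org.apache.hadoop.hbase.io.hfile.HFileDataBlockEncoderImpl;
    import org.apache.hadoop.hbase.regionserver.Store;

    import java.io.IOException;

    class HFileWriterSketch {
      // Rough sketch: create the HFile writer with encoding and checksum settings
      // passed through, instead of only block size, compression and comparator as today.
      // 'encoding' would come from a hypothetical per-family map set up by
      // configureIncrementalLoad(), like the existing compression map.
      static HFile.Writer createWriter(Configuration conf, FileSystem fs, Path hfilePath,
          int blockSize, String compression, DataBlockEncoding encoding) throws IOException {
        return HFile.getWriterFactory(conf, new CacheConfig(conf))
            .withPath(fs, hfilePath)
            .withBlockSize(blockSize)
            .withCompression(compression)
            .withComparator(KeyValue.KEY_COMPARATOR)
            // the three calls below are what seem to be missing in getNewWriter() today
            .withDataBlockEncoder(new HFileDataBlockEncoderImpl(encoding, encoding))
            .withChecksumType(Store.getChecksumType(conf))
            .withBytesPerChecksum(Store.getBytesPerChecksum(conf))
            .create();
      }
    }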

Later, in LoadIncrementalHFiles.copyHFileHalf(), where we physically split an HFile (created by the MR job) if it cannot belong to just one region, I can see that we pass the data block encoding and checksum details to the new HFile writer. But normally this step won't happen, I think.
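For contrast, as far as I remember the 0.94 code, copyHFileHalf() builds the encoder from the table's column family descriptor and hands it to the new writer, roughly like this (paraphrased from memory, the exact calls may differ a little):

    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.io.hfile.HFileDataBlockEncoder;
    import org.apache.hadoop.hbase.io.hfile.HFileDataBlockEncoderImpl;

    class HalfFileEncoderSketch {
      // Paraphrased from my reading of LoadIncrementalHFiles.copyHFileHalf():
      // the encoder comes from the HColumnDescriptor, so the split halves do get
      // the data block encoding even though the original MR output did not.
      static HFileDataBlockEncoder fromFamily(HColumnDescriptor familyDescriptor) {
        return new HFileDataBlockEncoderImpl(
            familyDescriptor.getDataBlockEncodingOnDisk(),  // on-disk encoding
            familyDescriptor.getDataBlockEncoding());       // in-cache encoding
      }
    }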

Please correct me if my understanding is wrong.

Thanks
Anoop
Stack 2012-05-12, 04:38
Anoop Sam John 2012-05-13, 18:50