HBase user mailing list: direct Hfile Read and Writes


RE: direct Hfile Read and Writes
When there is a need to bulk load a huge amount of data into HBase at one time, it is better to go with the direct HFile write.
Here, the HFiles are first written directly into HDFS using the MR framework. For this, HBase provides utility classes, as well as the ImportTsv tool itself.
Then, using LoadIncrementalHFiles, these files are loaded into the regions managed by the RSs.
Once these 2 steps are done, clients can read the data normally.
Loading this much data the normal way, with HTable#put(), would take a lot of time.
-Anoop-
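The second step described above (handing already-written HFiles over to the region servers) can be sketched in Java. This is only a sketch against 0.94-era HBase APIs; the HDFS path `/user/me/hfiles` and the table name `mytable` are hypothetical, and it assumes an MR job with HFileOutputFormat has already produced the HFiles:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;

public class BulkLoadSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();

        // Step 1 (not shown): an MR job using HFileOutputFormat has
        // already written HFiles under this (hypothetical) directory.
        Path hfileDir = new Path("/user/me/hfiles");

        // Step 2: hand the HFiles to the region servers. doBulkLoad
        // moves each file into the right region, splitting any file
        // that spans a region boundary.
        HTable table = new HTable(conf, "mytable");
        try {
            new LoadIncrementalHFiles(conf).doBulkLoad(hfileDir, table);
        } finally {
            table.close();
        }
    }
}
```

This requires a running cluster; the same step can also be driven from the shell via the `completebulkload` tool described in the bulk load section of the HBase book.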
________________________________________
From: Jerry Lam [[EMAIL PROTECTED]]
Sent: Wednesday, June 27, 2012 10:52 PM
To: [EMAIL PROTECTED]
Subject: Re: direct Hfile Read and Writes

Hi Samar:

I have used LoadIncrementalHFiles successfully in the past. Basically, once
you have written the HFiles yourself, you can use LoadIncrementalHFiles to
merge them with the HFiles currently managed by HBase. Once they are loaded
into HBase, the records in the incremental HFiles are accessible to clients.

HTH,

Jerry
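Writing an HFile yourself, as mentioned above, looks roughly like the following. This is a sketch against 0.94-era APIs; the output path and the cell contents are made up, and note that KeyValues must be appended in sorted key order:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.io.hfile.CacheConfig;
import org.apache.hadoop.hbase.io.hfile.HFile;
import org.apache.hadoop.hbase.util.Bytes;

public class HFileWriteSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        FileSystem fs = FileSystem.get(conf);
        Path path = new Path("/tmp/myhfile"); // hypothetical output path

        HFile.Writer writer = HFile.getWriterFactory(conf, new CacheConfig(conf))
            .withPath(fs, path)
            .create();
        try {
            // KeyValues must be appended in sorted order.
            writer.append(new KeyValue(Bytes.toBytes("row1"),
                Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v1")));
            writer.append(new KeyValue(Bytes.toBytes("row2"),
                Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v2")));
        } finally {
            writer.close();
        }
    }
}
```

A file written this way is not visible to HBase until it is bulk loaded into the region servers, as discussed in this thread.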

On Wed, Jun 27, 2012 at 10:33 AM, shixing <[EMAIL PROTECTED]> wrote:

>  1. Since the data we might need would be distributed across regions, how
>  would direct reading of an HFile be helpful?
>
> You can read HFilePrettyPrinter; it shows how to create an HFile.Reader
> and use it to read an HFile.
> Or you can run ./hbase org.apache.hadoop.hbase.io.hfile.HFile -p -f
> hdfs://xxxx/xxx/hfile to print some of its info and have a look.
>
>  2. Is there any use case for direct writes of HFiles? If we write HFiles,
>  will that data be accessible from the hbase shell?
>
> You can read HFileOutputFormat; it shows how to create an HFile.Writer
> and use it to write KeyValues directly to an HFile.
> If you want to read the data from the hbase shell, you should first load the
> HFile into the region servers; details on bulk load:
> http://hbase.apache.org/book.html#arch.bulk.load .
>
>
> On Wed, Jun 27, 2012 at 6:49 PM, samar kumar <[EMAIL PROTECTED]
> >wrote:
>
> > Hi Hbase Users,
> >  I have seen APIs supporting direct HFile reads and writes. I do
> > understand they would create HFiles in the specified location, and it
> > should be much faster since we would skip all the lookups to ZK, the
> > catalog table, and the RS, but can anyone point me to a particular case
> > when we would like to read/write directly.
> >
> >
> >   1. Since the data we might need would be distributed across regions, how
> >   would direct reading of an HFile be helpful?
> >   2. Is there any use case for direct writes of HFiles? If we write HFiles,
> >   will that data be accessible from the hbase shell?
> >
> >
> > Regards,
> > Samar
> >
>
>
>
> --
> Best wishes!
> My Friend~
>
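shixing's pointer above, reading an HFile directly the way HFilePrettyPrinter does internally, can be sketched as follows (again a 0.94-era API sketch; the file path is hypothetical):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.io.hfile.CacheConfig;
import org.apache.hadoop.hbase.io.hfile.HFile;
import org.apache.hadoop.hbase.io.hfile.HFileScanner;

public class HFileReadSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        FileSystem fs = FileSystem.get(conf);
        Path path = new Path("/tmp/myhfile"); // hypothetical HFile path

        HFile.Reader reader = HFile.createReader(fs, path, new CacheConfig(conf));
        reader.loadFileInfo(); // load the file-info (metadata) block

        // Scanner with block caching and positional read both disabled.
        HFileScanner scanner = reader.getScanner(false, false);
        if (scanner.seekTo()) { // position at the first KeyValue, if any
            do {
                System.out.println(scanner.getKeyValue());
            } while (scanner.next());
        }
        reader.close();
    }
}
```

Note that this reads a single file only; it bypasses the RS entirely, so it sees neither memstore contents nor the other HFiles of the region, which is why direct reads are mostly useful for inspection and debugging.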
samar kumar 2012-06-28, 08:40
Stack 2012-06-28, 22:39