HBase user mailing list - understanding the client code


Re: understanding the client code
Suraj Varma 2012-06-01, 22:44
The way Thrift and Avro fit in here is ...

Thrift Client (your code) -> (thrift on the wire) -> Thrift Server
(provided by HBase) -> (uses HTable) -> HBase Cluster.

Same with Avro.

So: use HTable if you want to interact with the cluster using the Java
API; use the Thrift or Avro gateways if you want non-Java clients to access HBase.
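
For example, a basic write through HTable from Java looks roughly like
the sketch below (the table, family, and qualifier names are made up
for illustration):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class PutExample {
  public static void main(String[] args) throws Exception {
    // Picks up hbase-site.xml from the classpath to locate the cluster.
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "mytable");  // "mytable" is a made-up name
    try {
      Put put = new Put(Bytes.toBytes("row1"));
      // family "cf", qualifier "q", value "v" are placeholders for the example
      put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
      table.put(put);  // HTable takes care of the RPC to the right region server
    } finally {
      table.close();
    }
  }
}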

On Tue, May 29, 2012 at 7:33 AM, S Ahmed <[EMAIL PROTECTED]> wrote:
> So how does thrift and avro fit into the picture?  (I believe I saw
> references to that somewhere, are those alternate connection libs?)
>
> I know protobuf is just generating types for various languages...
>
> On Tue, May 29, 2012 at 10:26 AM, N Keywal <[EMAIL PROTECTED]> wrote:
>
>> Hi,
>>
>> If you're speaking about preparing the query, it's in HTable and
>> HConnectionManager.
>> If you're asking about the pure network level then, on trunk, it's now
>> done with a third-party library called protobuf.
>>
>> See HConnectionManager#createCallable for how it's used.
>>
>> Cheers,
>>
>> N.
>>
>> On Tue, May 29, 2012 at 4:15 PM, S Ahmed <[EMAIL PROTECTED]> wrote:
>> > I'm looking at the client code here:
>> >
>> https://github.com/apache/hbase/tree/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client
>> >
>> > Are these the high-level operations, with the actual sending of this
>> > data over the network done somewhere else?
>> >
>> > For example, during a PUT you may want it to write to n nodes; where is
>> > the code that does that? And the actual network connection, etc.?
>>
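
Regarding the question above about where the network part happens: a
quick way to see the region lookup that HTable/HConnectionManager
perform before the RPC is sent is something like this sketch (the
table and row names are made up for illustration):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.util.Bytes;

public class WhereIsMyRow {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "mytable");  // made-up table name
    try {
      // The connection code resolves which region (and region server) holds
      // this row key; a Put for "row1" goes to the server printed here.
      HRegionLocation loc = table.getRegionLocation(Bytes.toBytes("row1"));
      System.out.println("row1 is served by "
          + loc.getHostname() + ":" + loc.getPort());
    } finally {
      table.close();
    }
  }
}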