HBase >> mail # user >> understanding the client code


Re: understanding the client code
So how do Thrift and Avro fit into the picture? (I believe I saw
references to them somewhere; are those alternate connection libraries?)

I know protobuf just generates types for various languages...
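
For context, "generating types" looks roughly like this; a minimal illustrative .proto (the message and field names are made up here, not HBase's actual RPC schema):

```proto
// Illustrative only -- not HBase's real schema. From this one
// definition, protoc emits matching classes for Java, C++, Python,
// and other languages, plus the serialization code for each.
message GetRequest {
  optional bytes row = 1;       // proto2 syntax, as used on trunk
  optional string family = 2;
}
```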

On Tue, May 29, 2012 at 10:26 AM, N Keywal <[EMAIL PROTECTED]> wrote:

> Hi,
>
> If you're speaking about preparing the query, it's in HTable and
> HConnectionManager.
> If you're on the pure network level, then on trunk it's now done
> with a third-party library called protobuf.
>
> See the code from HConnectionManager#createCallable to see how it's used.
>
> Cheers,
>
> N.
>
> On Tue, May 29, 2012 at 4:15 PM, S Ahmed <[EMAIL PROTECTED]> wrote:
> > I'm looking at the client code here:
> >
> > https://github.com/apache/hbase/tree/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client
> >
> > Are these the high-level operations, with the actual sending of the data
> > over the network done somewhere else?
> >
> > For example, during a Put you may want it to write to n nodes; where is
> > the code that does that? And the actual network connection, etc.?
>
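
To make the split N. describes concrete, here is a small self-contained Java sketch. The names are invented for illustration (this is not HBase's real API): it models how a client-side put() prepares the operation while a separate "callable" wrapper owns the connection and retry logic, analogous in spirit to what HConnectionManager#createCallable sets up on trunk.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.Callable;

// Hypothetical sketch, not HBase's real classes. It models the layering
// discussed in the thread: the client object prepares the operation,
// and a callable wrapper performs the "network" call with retries.
public class ClientSketch {
    // Stands in for the remote region server; in HBase this would be
    // a network hop using the protobuf-serialized request.
    static final Map<String, String> fakeServer = new HashMap<>();

    // Analogue of a server callable: wraps one RPC attempt with retries.
    static <T> T callWithRetries(Callable<T> call, int maxRetries) throws Exception {
        Exception last = null;
        for (int i = 0; i < maxRetries; i++) {
            try {
                return call.call();   // the actual "network" invocation
            } catch (Exception e) {
                last = e;             // real client: relocate region, back off
            }
        }
        throw last;
    }

    // Analogue of HTable#put: prepares the mutation, delegates I/O
    // to the retrying callable rather than touching the wire itself.
    static void put(String row, String value) throws Exception {
        callWithRetries(() -> fakeServer.put(row, value), 3);
    }

    public static void main(String[] args) throws Exception {
        put("row1", "hello");
        System.out.println("row1 -> " + fakeServer.get("row1"));
    }
}
```

Note that replicating a write to n nodes is not something this layer does at all in HBase: the client talks to one region server, and replication of the underlying files is HDFS's job.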