HBase >> mail # user >> problem in testing coprocessor function


Re: problem in testing coprocessor endpoint
Endpoint coprocessors can be loaded on a single table.  They are no
different from RegionObservers in this regard.  Both are instantiated per
region by RegionCoprocessorHost.  You should be able to load the
coprocessor by setting it as a table attribute.  If it doesn't seem to be
loading, check the region server logs after you re-enable the table where
you have added it.  Do you see any log messages from RegionCoprocessorHost?
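For reference, the per-table loading Gary describes can be sketched in the HBase shell; the table name and jar path below are hypothetical placeholders, and the attribute format is `jar-path|class|priority|args`:

```
disable 'test_table'
alter 'test_table', METHOD => 'table_att',
  'coprocessor' => 'hdfs:///path/to/coprocessor.jar|ColumnAggregationEndpoint|1001|'
enable 'test_table'
```

After re-enabling, the region server log should show RegionCoprocessorHost loading the class for each region of that table.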
On Fri, Jul 12, 2013 at 4:33 AM, Asaf Mesika <[EMAIL PROTECTED]> wrote:

> You can't register an endpoint just for one table. It's like a stored
> procedure: you choose to run it and pass parameters to it.
>
> On Friday, July 12, 2013, ch huang wrote:
>
> > What you describe is how to load an endpoint coprocessor for every
> > region in HBase; what I want to do is load it only into my test table,
> > i.e. only for that table's regions.
> >
> > On Fri, Jul 12, 2013 at 12:07 PM, Asaf Mesika <[EMAIL PROTECTED]>
> > wrote:
> >
> > > The only way to register endpoint coprocessor jars is by placing them
> > > in HBase's lib dir and modifying hbase-site.xml to point to them under
> > > a property name I forget at the moment.
> > > What you described is a way to register an Observer-type coprocessor.
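The cluster-wide property alluded to here is most likely `hbase.coprocessor.region.classes`, which loads the listed region coprocessor classes on every region of every table; a minimal hbase-site.xml fragment would look like this (the jar must already be on the region servers' classpath, e.g. in HBase's lib/ directory):

```xml
<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>ColumnAggregationEndpoint</value>
</property>
```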
> > >
> > >
> > > On Friday, July 12, 2013, ch huang wrote:
> > >
> > > > I am testing the coprocessor endpoint function. Here is my testing
> > > > process and the error I get; I hope an expert on coprocessors can
> > > > help me out.
> > > >
> > > >
> > > > # vi ColumnAggregationProtocol.java
> > > >
> > > > import java.io.IOException;
> > > > import org.apache.hadoop.hbase.ipc.CoprocessorProtocol;
> > > >
> > > > // A sample protocol for performing aggregation at regions.
> > > > public interface ColumnAggregationProtocol extends CoprocessorProtocol {
> > > >     // Perform aggregation for a given column at the region. The
> > > >     // aggregation will include all the rows inside the region. It can
> > > >     // be extended to allow passing start and end rows for a
> > > >     // fine-grained aggregation.
> > > >     public long sum(byte[] family, byte[] qualifier) throws IOException;
> > > > }
> > > >
> > > >
> > > > # vi ColumnAggregationEndpoint.java
> > > >
> > > > import java.io.IOException;
> > > > import java.util.ArrayList;
> > > > import java.util.List;
> > > > import org.apache.hadoop.hbase.CoprocessorEnvironment;
> > > > import org.apache.hadoop.hbase.KeyValue;
> > > > import org.apache.hadoop.hbase.client.Scan;
> > > > import org.apache.hadoop.hbase.coprocessor.BaseEndpointCoprocessor;
> > > > import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
> > > > import org.apache.hadoop.hbase.regionserver.HRegion;
> > > > import org.apache.hadoop.hbase.regionserver.InternalScanner;
> > > > import org.apache.hadoop.hbase.util.Bytes;
> > > >
> > > > // Aggregation implementation at a region.
> > > > public class ColumnAggregationEndpoint extends BaseEndpointCoprocessor
> > > >     implements ColumnAggregationProtocol {
> > > >
> > > >     @Override
> > > >     public long sum(byte[] family, byte[] qualifier) throws IOException {
> > > >         // Aggregate at each region.
> > > >         Scan scan = new Scan();
> > > >         scan.addColumn(family, qualifier);
> > > >         long sumResult = 0;
> > > >
> > > >         CoprocessorEnvironment ce = getEnvironment();
> > > >         HRegion hr = ((RegionCoprocessorEnvironment) ce).getRegion();
> > > >         InternalScanner scanner = hr.getScanner(scan);
> > > >
> > > >         try {
> > > >             List<KeyValue> curVals = new ArrayList<KeyValue>();
> > > >             boolean hasMore = false;
> > > >             do {
> > > >                 curVals.clear();
> > > >                 hasMore = scanner.next(curVals);
> > > >                 // Iterate the returned cells rather than calling
> > > >                 // curVals.get(0), which throws on an empty batch.
> > > >                 for (KeyValue kv : curVals) {
> > > >                     sumResult +=
> > > >                         Long.parseLong(Bytes.toString(kv.getValue()));
> > > >                 }
> > > >             } while (hasMore);
> > > >         } finally {
> > > >             scanner.close();
> > > >         }
> > > >         return sumResult;
> > > >     }
> > > > }
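On the client side, an 0.94-era endpoint like this would normally be invoked with `HTable.coprocessorExec()`, which runs `sum()` on every region in the given key range and returns a per-region result map for the client to merge. Since that needs a running cluster, here is a dependency-free sketch (all names hypothetical, no HBase involved) of the per-region-sum-then-client-merge pattern the endpoint implements:

```java
import java.util.Arrays;
import java.util.List;

public class SumSketch {
    // Stand-in for ColumnAggregationEndpoint.sum(): each "region" holds the
    // long values of one column, and the endpoint sums them locally.
    static long regionSum(List<Long> regionValues) {
        long sum = 0;
        for (long v : regionValues) {
            sum += v; // per-region aggregation
        }
        return sum;
    }

    public static void main(String[] args) {
        // Two hypothetical regions of the same table.
        List<List<Long>> regions = Arrays.asList(
            Arrays.asList(1L, 2L, 3L),
            Arrays.asList(10L, 20L));

        // Stand-in for the client merging the coprocessorExec() result map.
        long total = 0;
        for (List<Long> r : regions) {
            total += regionSum(r);
        }
        System.out.println(total); // 36
    }
}
```

The key point is that the endpoint only produces a partial result per region; the cross-region merge always happens on the client.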