HBase >> mail # user >> problem in testing coprocessor function
Re: problem in testing coprocessor endpoint
Kim, Asaf,

I don't know where the notion that endpoint coprocessors must be loaded
globally comes from, but it is simply not true.  If you would like to
see how endpoints are registered, see RegionCoprocessorHost.java:

  @Override
  public RegionEnvironment createEnvironment(Class<?> implClass,
      Coprocessor instance, int priority, int seq, Configuration conf) {
    // Check if it's an Endpoint.
    // Due to current dynamic protocol design, Endpoint
    // uses a different way to be registered and executed.
    // It uses a visitor pattern to invoke registered Endpoint
    // method.
    for (Class c : implClass.getInterfaces()) {
      if (CoprocessorProtocol.class.isAssignableFrom(c)) {
        region.registerProtocol(c, (CoprocessorProtocol)instance);
        break;
      }
    }
    // ... (remainder of createEnvironment elided)
  }
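For the record, a table-level load can be done from the hbase shell without touching hbase-site.xml or restarting region servers. The jar path and class name below are placeholders for your own deployment, and 1001 is just a user-level priority; run these inside `hbase shell`:

```shell
# Attach an endpoint coprocessor to ONE table as a table attribute.
# Attribute format: 'coprocessor' => 'jar path|class name|priority|args'
# (hypothetical path and class -- substitute your own)
disable 'mytable'
alter 'mytable', METHOD => 'table_att',
  'coprocessor' => 'hdfs:///coprocessors/agg.jar|com.example.ColumnAggregationEndpoint|1001|'
enable 'mytable'
```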
If you would like some trivial test code that demonstrates invoking an
endpoint coprocessor configured on only a single table (coprocessor jar
loaded from HDFS), just let me know and I will send it to you.
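In the meantime, here is a rough sketch of what such a test looks like. The class names and HDFS jar path are placeholders, and this assumes the 0.92/0.94-era client API (HTableDescriptor.addCoprocessor plus HTable.coprocessorExec); it needs a live cluster, so treat it as a sketch rather than exact code:

```java
import java.io.IOException;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.Coprocessor;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.coprocessor.Batch;
import org.apache.hadoop.hbase.util.Bytes;

public class EndpointOnOneTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    byte[] tableName = Bytes.toBytes("mytable");

    // 1) Attach the endpoint to a SINGLE table as a table attribute.
    //    The jar is fetched from HDFS; no region server restart needed.
    HBaseAdmin admin = new HBaseAdmin(conf);
    admin.disableTable(tableName);
    HTableDescriptor desc = admin.getTableDescriptor(tableName);
    desc.addCoprocessor("com.example.ColumnAggregationEndpoint", // placeholder impl class
        new Path("hdfs:///coprocessors/agg.jar"),                // placeholder jar path
        Coprocessor.PRIORITY_USER, null);
    admin.modifyTable(tableName, desc);
    admin.enableTable(tableName);

    // 2) Invoke it: every region of this table (and only this table)
    //    runs sum(), and the client merges the per-region results.
    HTable table = new HTable(conf, tableName);
    Map<byte[], Long> results = table.coprocessorExec(
        ColumnAggregationProtocol.class, null, null,
        new Batch.Call<ColumnAggregationProtocol, Long>() {
          public Long call(ColumnAggregationProtocol p) throws IOException {
            return p.sum(Bytes.toBytes("cf"), Bytes.toBytes("q"));
          }
        });
    long total = 0;
    for (Long v : results.values()) total += v;
    System.out.println("sum = " + total);
  }
}
```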

--gh
On Fri, Jul 12, 2013 at 10:06 AM, Kim Chew <[EMAIL PROTECTED]> wrote:

> No, an Endpoint coprocessor can be deployed via configuration only.
> In hbase-site.xml, there should be an entry like this,
>
> <property>
>   <name>hbase.coprocessor.region.classes</name>
>   <value>myEndpointImpl</value>
> </property>
>
> Also, you have to let HBase know where to find your class, so in
> hbase-env.sh
>
>     export HBASE_CLASSPATH=${HBASE_HOME}/lib/AggregateCounterEndpoint.jar
>
>
> The trouble is you will need to restart RS. It would be nice to have APIs
> to load the Endpoint coprocessor dynamically.
>
> Kim
>
>
> On Fri, Jul 12, 2013 at 9:18 AM, Gary Helmling <[EMAIL PROTECTED]>
> wrote:
>
> > Endpoint coprocessors can be loaded on a single table.  They are no
> > different from RegionObservers in this regard.  Both are instantiated per
> > region by RegionCoprocessorHost.  You should be able to load the
> > coprocessor by setting it as a table attribute.  If it doesn't seem to be
> > loading, check the region server logs after you re-enable the table where
> > you have added it.  Do you see any log messages from
> > RegionCoprocessorHost?
> >
> >
> > On Fri, Jul 12, 2013 at 4:33 AM, Asaf Mesika <[EMAIL PROTECTED]>
> > wrote:
> >
> > > You can't register an endpoint just for one table. It's like a stored
> > > procedure - you choose to run it and pass parameters to it.
> > >
> > > On Friday, July 12, 2013, ch huang wrote:
> > >
> > > > what you describe is how to load an endpoint coprocessor for every
> > > > region in hbase; what i want to do is load it just into my test
> > > > table, only for the regions of that table
> > > > On Fri, Jul 12, 2013 at 12:07 PM, Asaf Mesika <[EMAIL PROTECTED]>
> > > > wrote:
> > > >
> > > > > The only way to register endpoint coprocessor jars is by placing
> > > > > them in the lib dir of hbase and modifying hbase-site.xml to point
> > > > > to it, under a property name I forget at the moment.
> > > > > What you described is a way to register an Observer type
> > > > > coprocessor.
> > > > >
> > > > >
> > > > > On Friday, July 12, 2013, ch huang wrote:
> > > > >
> > > > > > i am testing the coprocessor endpoint function. here is my
> > > > > > testing process and the error i get; hope an expert on
> > > > > > coprocessors can help me out
> > > > > >
> > > > > >
> > > > > > # vi ColumnAggregationProtocol.java
> > > > > >
> > > > > > import java.io.IOException;
> > > > > > import org.apache.hadoop.hbase.ipc.CoprocessorProtocol;
> > > > > > // A sample protocol for performing aggregation at regions.
> > > > > > public interface ColumnAggregationProtocol
> > > > > > extends CoprocessorProtocol {
> > > > > > // Perform aggregation for a given column at the region. The
> > > > > > // aggregation will include all the rows inside the region. It can
> > > > > > // be extended to allow passing start and end rows for a
> > > > > > // fine-grained aggregation.
> > > > > >    public long sum(byte[] family, byte[] qualifier) throws