HBase, mail # user - Coprocessor POC


Re: Coprocessor POC
Himanshu Vashishtha 2012-07-30, 17:30
We should fix the reference then. Where did you read it?

On Mon, Jul 30, 2012 at 10:43 AM, Cyril Scetbon <[EMAIL PROTECTED]> wrote:
> Thanks, it works much better now!
>
> I'd read that by default it supports only Long values; that's why I was using a null ColumnInterpreter.
>
> Regards.
> Cyril SCETBON
>
> On Jul 30, 2012, at 5:56 PM, Himanshu Vashishtha <[EMAIL PROTECTED]> wrote:
>
>> On Mon, Jul 30, 2012 at 6:55 AM, Cyril Scetbon <[EMAIL PROTECTED]> wrote:
>>
>>> I gave the values returned by the scan 'table' command in the hbase shell in my first email.
>> Somehow I missed the scan result in your first email. So, can you pass
>> a LongColumnInterpreter instance instead of null?
>> See TestAggregateProtocol methods for usage.
>>
>> Thanks
>> Himanshu
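
For reference, here is a minimal, untested sketch of what the suggested change could look like against the 0.94 AggregationClient API, reusing the "ise" table and "core" family from the code quoted below; the class name is made up for illustration:

package coreprocessor;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.coprocessor.AggregationClient;
import org.apache.hadoop.hbase.client.coprocessor.LongColumnInterpreter;
import org.apache.hadoop.hbase.util.Bytes;

public class AggregationClientWithInterpreter {

    public static void main(String[] args) throws Throwable {
        Configuration conf = HBaseConfiguration.create();
        AggregationClient aggregationClient = new AggregationClient(conf);

        Scan scan = new Scan();
        scan.addColumn(Bytes.toBytes("core"), Bytes.toBytes("value"));

        // Pass a LongColumnInterpreter instead of null so the coprocessor
        // knows how to decode the 8-byte long cell values.
        LongColumnInterpreter ci = new LongColumnInterpreter();

        System.out.println("row count is "
                + aggregationClient.rowCount(Bytes.toBytes("ise"), ci, scan));
        System.out.println("avg is "
                + aggregationClient.avg(Bytes.toBytes("ise"), ci, scan));
        System.out.println("sum is "
                + aggregationClient.sum(Bytes.toBytes("ise"), ci, scan));
    }
}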
>>
>>>
>>> Regards
>>> Cyril SCETBON
>>>
>>> On Jul 30, 2012, at 12:50 AM, Himanshu Vashishtha <[EMAIL PROTECTED]> wrote:
>>>
>>>> Also, what do your cell values look like?
>>>>
>>>> Himanshu
>>>>
>>>> On Sun, Jul 29, 2012 at 3:54 PM,  <[EMAIL PROTECTED]> wrote:
>>>>> Can you use 0.94 for your client jar?
>>>>>
>>>>> Please show us the NullPointerException stack.
>>>>>
>>>>> Thanks
>>>>>
>>>>>
>>>>>
>>>>> On Jul 29, 2012, at 2:49 PM, Cyril Scetbon <[EMAIL PROTECTED]> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> I'm testing the AggregationClient functions to see whether we can use coprocessors for mathematical functions.
>>>>>>
>>>>>> The code I use is the following:
>>>>>>
>>>>>> package coreprocessor;
>>>>>>
>>>>>> import org.apache.hadoop.conf.Configuration;
>>>>>> import org.apache.hadoop.hbase.HBaseConfiguration;
>>>>>> import org.apache.hadoop.hbase.client.Scan;
>>>>>> import org.apache.hadoop.hbase.client.coprocessor.AggregationClient;
>>>>>> import org.apache.hadoop.hbase.util.Bytes;
>>>>>>
>>>>>> public class AggregationClientTest {
>>>>>>
>>>>>>     private static final byte[] TABLE_NAME = Bytes.toBytes("ise");
>>>>>>     private static final byte[] CF = Bytes.toBytes("core");
>>>>>>
>>>>>>     public static void main(String[] args) throws Throwable {
>>>>>>
>>>>>>         Configuration configuration = HBaseConfiguration.create();
>>>>>>
>>>>>>         configuration.setLong("hbase.client.scanner.caching", 1000);
>>>>>>         AggregationClient aggregationClient = new AggregationClient(
>>>>>>                 configuration);
>>>>>>         Scan scan = new Scan();
>>>>>>         scan.addColumn(CF, Bytes.toBytes("value"));
>>>>>>         System.out.println("row count is " + aggregationClient.rowCount(TABLE_NAME, null, scan));
>>>>>>         System.out.println("avg is " + aggregationClient.avg(TABLE_NAME, null, scan));
>>>>>>         System.out.println("sum is " + aggregationClient.sum(TABLE_NAME, null, scan));
>>>>>>     }
>>>>>> }
>>>>>>
>>>>>> The only one that works is the rowCount function; for the others I get an NPE!
>>>>>> I've checked that my table uses only Long values for the column I'm working on, and I have only one row in my table:
>>>>>>
>>>>>> ROW                                                  COLUMN+CELL
>>>>>> id-cyr1                                             column=core:value, timestamp=1343596419845, value=\x00\x00\x00\x00\x00\x00\x00\x0A
>>>>>>
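
A quick sanity check (an illustrative snippet, not from the thread): that cell value is the 8-byte big-endian encoding of the long 10, i.e. what Bytes.toBytes(10L) produces, which is the encoding LongColumnInterpreter expects:

    byte[] stored = Bytes.toBytes(10L);   // 00 00 00 00 00 00 00 0A
    long decoded = Bytes.toLong(stored);  // 10
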
>>>>>> The only thing I can add is that my HBase server's version is 0.94.0 and that I use version 0.92.0 of the HBase jar.
>>>>>>
>>>>>> Any idea why it doesn't work?
>>>>>>
>>>>>> Thanks
>>>>>> Cyril SCETBON
>>>>>>
>>>
>