Re: ColumnInterpreter and HbaseObjectWritable Was: HbaseObjectWritable and UnsupportedOperationException
Eclipse debugging is quite different from running HBase for real, I believe.
In a standard HBase deployment, Long will be actively loaded in the RS JVM
by calls such as the Store constructor, for example, well before this
endpoint call; but in the Eclipse debugger we never touch these store files,
etc. I don't trust the debugger when it comes to class-loading scenarios,
especially for HBase, where the master, RS, and client are all running in
the same process.

Himanshu
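
A minimal, self-contained sketch of the class-loading rule at play here: a
class is initialized at its first active use. The Helper class below is made
up for illustration (core classes such as java.lang.Long are loaded by the
bootstrap loader long before any user code runs); run with -verbose:class to
watch the order.

    // Hypothetical demo class, not HBase code.
    class Helper {
        static { System.out.println("Helper initialized"); }
        static long min() { return Long.MIN_VALUE; }
    }

    public class LoadOrderDemo {
        public static void main(String[] args) {
            System.out.println("main started");  // printed before Helper initializes
            System.out.println(Helper.min());    // first active use: Helper initializes here
        }
    }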
On Fri, Apr 1, 2011 at 8:04 AM, Ted Yu <[EMAIL PROTECTED]> wrote:

> If you place a breakpoint at:
>     T temp;
>     InternalScanner scanner = getScanWithColAndQualifier(colFamily,
>         colQualifier, endRow, null);
> in my latest code, you would see what I meant. ColumnInterpreter provides
> all the concrete values.
>
>
> On Thu, Mar 31, 2011 at 11:59 PM, Himanshu Vashishtha <
> [EMAIL PROTECTED]> wrote:
>
>> Really! I think it doesn't make any difference, as the Long class is
>> already loaded (it is used by a number of classes like HRegionInfo,
>> HConstants, HFile, etc.). And since these are static final fields (part of
>> the class description in the method area), they should be there well before
>> this coprocessor loading. I am still learning this stuff; it would be great
>> to hear your/others' opinions.
>>
>> Thanks
>> Himanshu
>>
>>
>>
>>
>> On Thu, Mar 31, 2011 at 10:18 PM, Ted Yu <[EMAIL PROTECTED]> wrote:
>>
>>> I found one more benefit of using the (enhanced) interpreter.
>>> We load AggregateProtocolImpl.class into CoprocessorHost. The interpreter
>>> feeds various values (such as Long.MIN_VALUE) of the concrete type (Long)
>>> into AggregateProtocolImpl. This simplifies class loading for
>>> CoprocessorHost.
>>>
>>> Cheers
>>>
>>>
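
A minimal sketch of the interpreter idea Ted describes above; the interface
shape is an illustrative assumption, not necessarily the HBASE-1512 API. The
point is that the interpreter owns all knowledge of the concrete type,
including special values like Long.MIN_VALUE, so the generic endpoint code
never has to name Long:

    import org.apache.hadoop.hbase.util.Bytes;

    // Illustrative interface; the real HBASE-1512 signatures may differ.
    interface ColumnInterpreter<T> {
        T getValue(byte[] family, byte[] qualifier, byte[] cellValue);
        T getMinValue();  // e.g. Long.MIN_VALUE for the Long case
        T getMaxValue();
    }

    class LongColumnInterpreter implements ColumnInterpreter<Long> {
        public Long getValue(byte[] family, byte[] qualifier, byte[] cellValue) {
            // Assumes the cell holds an 8-byte big-endian long, as Bytes.toLong expects.
            return cellValue == null ? null : Bytes.toLong(cellValue);
        }
        public Long getMinValue() { return Long.MIN_VALUE; }
        public Long getMaxValue() { return Long.MAX_VALUE; }
    }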
>>> On Thu, Mar 31, 2011 at 11:37 AM, Ted Yu <[EMAIL PROTECTED]> wrote:
>>>
>>>> Renaming the subject to better reflect the nature of further discussion.
>>>>
>>>> There are two considerations behind my current implementation attached to
>>>> HBASE-1512.
>>>> 1. Users shouldn't have to modify HbaseObjectWritable directly for a new
>>>> class that is to be executed on the region server.
>>>> 2. The reason for introducing the interpreter is that we (plan to) store
>>>> objects of MeasureWritable, a relatively complex class, in HBase. Using
>>>> an interpreter would give us flexibility in computing aggregates.
>>>>
>>>> Cheers
>>>>
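
To make the "flexibility in computing aggregates" point concrete, here is a
hedged sketch of a generic max built on the ColumnInterpreter sketch above;
with a matching interpreter, the same code would serve a complex type like
MeasureWritable (all names illustrative):

    // Generic max over raw cell values; T could be Long, MeasureWritable, etc.
    static <T extends Comparable<T>> T max(Iterable<byte[]> cellValues,
            ColumnInterpreter<T> ci, byte[] family, byte[] qualifier) {
        T max = null;
        for (byte[] cell : cellValues) {
            T v = ci.getValue(family, qualifier, cell);
            if (v != null && (max == null || v.compareTo(max) > 0)) {
                max = v;
            }
        }
        return max;
    }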
>>>> On Thu, Mar 31, 2011 at 10:01 AM, Himanshu Vashishtha <
>>>> [EMAIL PROTECTED]> wrote:
>>>>
>>>>> Hello Ted,
>>>>> Did you add a new class, LongColumnInterpreter? Is this the new argument
>>>>> type you want to define to pass along RPCs? All such "new" argument
>>>>> types should be supported/backed within the HbaseObjectWritable class so
>>>>> they can be read/written on the wire. Do we really need it? Just
>>>>> wondering.
>>>>>
>>>>> Himanshu
>>>>>
>>>>> On Thu, Mar 31, 2011 at 10:52 AM, Ted Yu <[EMAIL PROTECTED]> wrote:
>>>>>
>>>>> > Hi,
>>>>> > When I experimented with HBASE-1512, I got the following from
>>>>> > HbaseObjectWritable:
>>>>> > java.lang.UnsupportedOperationException: No code for unexpected class
>>>>> >
>>>>> >
>>>>> org.apache.hadoop.hbase.client.coprocessor.AggregationClient$1LongColumnInterpreter
>>>>> >
>>>>> > I think there was an initiative to support dynamic class registration
>>>>> > in HbaseObjectWritable.
>>>>> >
>>>>> > If someone can enlighten me on the above, that would be great.
>>>>> >
>>>>>
>>>>
>>>>
>>>
>>
>
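
The UnsupportedOperationException quoted above stems from HbaseObjectWritable's
design: it maps every supported class to a fixed byte code in a static table,
and anything outside that table cannot be written to the wire. A simplified
sketch of the pattern (illustrative, not the actual HBase source), which also
shows why the local class AggregationClient$1LongColumnInterpreter fails:

    import java.util.HashMap;
    import java.util.Map;

    // Simplified sketch of the class-to-code registry pattern in
    // HbaseObjectWritable; the real table and codes differ.
    class ObjectCodes {
        private static final Map<Class<?>, Byte> CLASS_TO_CODE =
            new HashMap<Class<?>, Byte>();
        static {
            CLASS_TO_CODE.put(Long.class, (byte) 1);
            CLASS_TO_CODE.put(String.class, (byte) 2);
            // ... every class sent over the wire must be registered here ...
        }

        static byte codeFor(Class<?> c) {
            Byte code = CLASS_TO_CODE.get(c);
            if (code == null) {
                // The failure mode hit above: the local class
                // AggregationClient$1LongColumnInterpreter was never registered.
                throw new UnsupportedOperationException("No code for unexpected " + c);
            }
            return code;
        }
    }

The dynamic class registration mentioned above would replace this hard-coded
table with one that can grow at runtime.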