HBase >> mail # user >> AggregateProtocol Help


+ Tom Wilcox 2011-12-22, 16:09
+ Ted Yu 2011-12-22, 17:03
+ Tom Wilcox 2011-12-23, 11:02
+ Ted Yu 2011-12-23, 15:04
+ Gary Helmling 2011-12-23, 18:05
+ Royston Sellman 2012-01-01, 17:35
+ Ted Yu 2012-01-01, 17:58
+ Royston Sellman 2012-01-01, 19:26
+ yuzhihong@... 2012-01-01, 19:53
+ Himanshu Vashishtha 2012-01-02, 01:18
+ Ted Yu 2012-01-02, 01:53
+ Gary Helmling 2012-01-02, 06:23
+ Royston Sellman 2012-01-03, 16:32
+ Ted Yu 2012-01-03, 17:09
+ Royston Sellman 2012-01-03, 17:48
+ Ted Yu 2012-01-03, 18:00
Re: AggregateProtocol Help
On Tue, Jan 3, 2012 at 11:00 AM, Ted Yu <[EMAIL PROTECTED]> wrote:

> My previous email might not be hitting the root cause.
> I think the following method in LCI may be giving you the null:
>
>  public Long getValue(byte[] colFamily, byte[] colQualifier, KeyValue kv)
>      throws IOException {
>    if (kv == null || kv.getValueLength() != Bytes.SIZEOF_LONG)
>      return null;
>    return Bytes.toLong(kv.getBuffer(), kv.getValueOffset());
>  }
>
> Look at the if statement above carefully.
> If it doesn't match how you store values in HBase, feel free to subclass
>

Yeah... and a null is returned from the Region (though the log says 0, since
different variables are printed), resulting in an NPE on the client side.
This is likely to be the root cause.
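The length check quoted above can be reproduced without HBase on the classpath. This standalone sketch (class name and constants are illustrative, not HBase's) mimics why `getValue` returns null: a long written with `Bytes.toBytes(long)` is exactly 8 bytes, while the same number stored as a string is not, so `kv.getValueLength() != Bytes.SIZEOF_LONG` trips:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class ValueLengthCheck {
    // Mirrors Bytes.SIZEOF_LONG in HBase
    static final int SIZEOF_LONG = 8;

    public static void main(String[] args) {
        // Stored the way AggregationClient expects, i.e. Bytes.toBytes(42L):
        byte[] asLong = ByteBuffer.allocate(8).putLong(42L).array();
        // Stored as text, e.g. via a shell put or Bytes.toBytes("42"):
        byte[] asString = "42".getBytes(StandardCharsets.UTF_8);

        System.out.println(asLong.length == SIZEOF_LONG);   // true  -> value is interpreted
        System.out.println(asString.length == SIZEOF_LONG); // false -> getValue() returns null
    }
}
```

If the table was loaded from text (shell puts, TSV import), the values are string-encoded and every row fails this check, which matches the null/NPE symptom in the thread.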
> LongColumnInterpreter and provide the correct interpretation.
>
>

> BTW you don't need to restart cluster just because you need to use your own
> interpreter :-)
>
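The suggested subclassing can be sketched offline. The interface and class names below are stand-ins, not the real org.apache.hadoop.hbase classes (in a real deployment one would extend LongColumnInterpreter and override getValue); the point is the shape of the override when values are stored as decimal strings rather than Bytes.toBytes(long):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;

// Stand-in for the getValue() hook quoted above; illustrative only.
interface LongInterpreter {
    Long getValue(byte[] value) throws IOException;
}

// Hypothetical interpreter for values stored as decimal text (e.g. "1234")
// instead of the 8-byte Bytes.toBytes(long) encoding.
class StringLongInterpreter implements LongInterpreter {
    @Override
    public Long getValue(byte[] value) throws IOException {
        if (value == null || value.length == 0) {
            return null; // mirrors the null guard in the quoted snippet
        }
        try {
            return Long.parseLong(new String(value, StandardCharsets.UTF_8));
        } catch (NumberFormatException e) {
            throw new IOException("value is not a decimal long", e);
        }
    }
}

public class InterpreterDemo {
    public static void main(String[] args) throws IOException {
        LongInterpreter ci = new StringLongInterpreter();
        System.out.println(ci.getValue("1234".getBytes(StandardCharsets.UTF_8))); // 1234
    }
}
```

As Ted notes, a custom interpreter is passed in from the client (it ships with the request), so no cluster restart is needed to use it.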
> On Tue, Jan 3, 2012 at 9:48 AM, Royston Sellman <
> [EMAIL PROTECTED]> wrote:
>
> > Hi Ted,
> >
> > Here is the output. As you can see aClient is not null:
> >
> >  AggregationClient aClient = new AggregationClient(conf);
> >   System.err.println("aClient: "+aClient);
> >
> > <<<    aClient:
> > org.apache.hadoop.hbase.client.coprocessor.AggregationClient@28787c16
> >
> > It will take us a little while to add log code to LCI... we have to edit
> > the
> > source, rebuild 0.92, redistribute round our cluster, restart ;)
> > We'll get back to you when this is done.
> >
> > Royston
> >
> > -----Original Message-----
> > From: Ted Yu [mailto:[EMAIL PROTECTED]]
> > Sent: 03 January 2012 17:10
> > To: [EMAIL PROTECTED]
> > Subject: Re: AggregateProtocol Help
> >
> > Royston:
> > Thanks for your effort trying to hunt down the problem.
> >
> > Can you add a log after this line to see if aClient is null ?
> >               AggregationClient aClient = new AggregationClient(conf);
> >
> > I was looking at LongColumnInterpreter.add(), which is called by
> > aClient.sum().
> > Can you add a few log statements in LongColumnInterpreter.add() to see
> what
> > parameters are passed to it ?
> >
> > Cheers
> >
> > On Tue, Jan 3, 2012 at 8:32 AM, Royston Sellman <
> > [EMAIL PROTECTED]> wrote:
> >
> > > Hi Ted, Himanshu, Gary,
> > >
> > > Thanks again for your attention. I experimented with a shorter table
> > > and it looks like the timeout error was spurious...
> > >
> > > With the shorter table I now get an NPE when I call
> > > AggregationClient.sum().
> > > Here's the code snippet:
> > >
> > >                // Test the table
> > >                HTable table = new HTable(EDRP_TABLE);
> > >                Get get = new Get(Bytes.toBytes("row-aa"));
> > >                get.addColumn(Bytes.toBytes("EDRP"),
> > > Bytes.toBytes("advanceKWh"));
> > >                Result result = table.get(get);
> > >                byte [] val = result.getValue(Bytes.toBytes("EDRP"),
> > > Bytes.toBytes("advanceKWh"));
> > >                System.out.println("Row aa = " + Bytes.toString(val));
> > >
> > >                AggregationClient aClient = new AggregationClient(conf);
> > >                Scan scan = new Scan();
> > >                scan.addColumn(EDRP_FAMILY, EDRP_QUALIFIER);
> > >                scan.setStartRow(Bytes.toBytes("row-ab"));
> > >                scan.setStopRow(Bytes.toBytes("row-az"));
> > >                System.out.println(Bytes.toString(EDRP_FAMILY) + ":" +
> > > Bytes.toString(EDRP_QUALIFIER));
> > >                final ColumnInterpreter<Long, Long> ci = new
> > > LongColumnInterpreter();
> > >                long sum = -1;
> > >                try {
> > >                        sum = aClient.sum(EDRP_TABLE, ci, scan);
> > >                } catch (Throwable e) {
> > >                        // TODO Auto-generated catch block
> > >                        e.printStackTrace();
> > >                }
> > >                System.out.println(sum);
> > >
> > > The first part is just to check that my table is OK. It prints the
+ Royston Sellman 2012-01-03, 18:42
+ Ted Yu 2012-01-03, 18:58
+ Ted Yu 2012-01-03, 21:31
+ Royston Sellman 2012-01-04, 11:43
+ Ted Yu 2012-01-04, 15:01
+ Royston Sellman 2012-01-04, 18:57
+ Himanshu Vashishtha 2012-01-03, 17:11
+ Royston Sellman 2012-01-03, 17:54