HBase, mail # user - AggregateProtocol Help

Re: AggregateProtocol Help
Himanshu Vashishtha 2012-01-02, 01:18
Hello Royston,

Sorry to hear that you are having trouble using the Aggregation
functionality.

557k rows is a small table, and a SocketTimeoutException is not an expected
response. It would be good to know the region distribution: how many regions
are there, and is it a full table scan?

You are using the sum function; how are you using the ColumnInterpreter?
Can you set the log level to DEBUG to see why the region server is taking
that long to respond (more than 113 seconds)?
The 0 return value is the default result.
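
For reference, a minimal sketch of what a sum call with a LongColumnInterpreter
typically looks like against the 0.92-era client API (the table, family and
qualifier names below are placeholders, not taken from your setup):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.coprocessor.AggregationClient;
    import org.apache.hadoop.hbase.client.coprocessor.LongColumnInterpreter;
    import org.apache.hadoop.hbase.util.Bytes;

    public class SumExample {
      public static void main(String[] args) throws Throwable {
        Configuration conf = HBaseConfiguration.create();
        AggregationClient aggregationClient = new AggregationClient(conf);

        // Restrict the scan to the single CF:CQ being aggregated.
        Scan scan = new Scan();
        scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("cq"));

        // LongColumnInterpreter only decodes 8-byte values written with
        // Bytes.toBytes(long) and ignores anything else, which is one way
        // to end up with only the default 0 result.
        Long sum = aggregationClient.sum(Bytes.toBytes("myTable"),
            new LongColumnInterpreter(), scan);
        System.out.println("sum = " + sum);
      }
    }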

Thanks for trying this out.

Thanks,
Himanshu

On Sun, Jan 1, 2012 at 12:26 PM, Royston Sellman <
[EMAIL PROTECTED]> wrote:

> Hi Ted,
>
> I think 0 is the only value we ever see (I'll check tomorrow: the server
> is down right now). Our table has 557,000 rows. I'll try a much shorter
> table tomorrow.
>
> Yes, we have RS running on the NN, but it's a test cluster and we are used
> to it :)
>
> Do you think using AggregationProtocol is the best strategy for the case
> where we want to use basic SQL-style functions like SUM, AVG, STD, MIN,
> MAX? Do you think there is a better strategy?
>
> Many thanks,
> Royston
>
>
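
Regarding the SUM / AVG / STD / MIN / MAX question above: just as a rough
sketch (reusing the placeholder table, scan and interpreter from the earlier
snippet, so not a complete program), the 0.92 AggregationClient should expose
matching client-side calls along these lines:

    // Placeholders: aggregationClient, scan and the column setup are assumed
    // to be the same as in the earlier sketch.
    LongColumnInterpreter ci = new LongColumnInterpreter();
    byte[] table = Bytes.toBytes("myTable");

    Long   min = aggregationClient.min(table, ci, scan);       // MIN
    Long   max = aggregationClient.max(table, ci, scan);       // MAX
    Long   sum = aggregationClient.sum(table, ci, scan);       // SUM
    double avg = aggregationClient.avg(table, ci, scan);       // AVG
    double std = aggregationClient.std(table, ci, scan);       // STD
    long   cnt = aggregationClient.rowCount(table, ci, scan);  // COUNT

These map one-to-one onto the basic SQL-style aggregates mentioned above.
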
> On 1 Jan 2012, at 17:58, Ted Yu wrote:
>
> > Royston:
> > Happy New Year to you too.
> >
> >>> java.net.SocketTimeoutException: Call to namenode/10.0.0.235:60020 failed on
> >
> > It seems the namenode above actually refers to a region server. This is a
> > little bit confusing :-)
> >
> > The sum value below is 0.
> > Have you ever seen a value greater than 0?
> >
> > How many rows are there in this CF:CQ?
> > The timeout was reported earlier by other people when there are many rows
> > in the table.
> >
> > There is a JIRA to provide streaming support for coprocessors, but the
> > development there has stalled.
> >
> > Cheers
> >
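
On the 60000 millis in the stack trace: that matches the default client RPC
timeout (hbase.rpc.timeout). Purely as a sketch of a possible workaround while
debugging, and not something verified in this thread, the timeout can be raised
on the client-side Configuration before the AggregationClient is created:

    // Sketch only: 60000 ms is the hbase.rpc.timeout default, which is the
    // value shown in the trace; 300000 here is an arbitrary larger value.
    Configuration conf = HBaseConfiguration.create();
    conf.setLong("hbase.rpc.timeout", 300000L);
    AggregationClient aggregationClient = new AggregationClient(conf);
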
> > On Sun, Jan 1, 2012 at 9:35 AM, Royston Sellman <
> > [EMAIL PROTECTED]> wrote:
> >
> >> Hi Gary and Ted,
> >>
> >> Royston (Tom's colleague) here. Back onto this after the Christmas/New
> >> Year break.
> >>
> >> Many thanks for your help so far. We enabled our database via your
> >> hbase-site.xml mod and were able to move on to other errors. But I think
> >> we are now actually getting an aggregation partially calculated on our
> >> table (this feels like progress). The details:
> >>
> >> On running our client we now get this exception:
> >> 11/12/31 17:51:09 WARN
> >> client.HConnectionManager$HConnectionImplementation: Error executing for
> >> row
> >>
> >>  java.util.concurrent.ExecutionException:
> >> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after
> >> attempts=10, exceptions:
> >> Sat Dec 31 17:41:30 GMT 2011,
> >> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@1fc4f0f8,
> >> java.net.SocketTimeoutException: Call to namenode/10.0.0.235:60020 failed on
> >> socket timeout exception: java.net.SocketTimeoutException: 60000 millis
> >> timeout while waiting for channel to be ready for read. ch :
> >> java.nio.channels.SocketChannel[connected local=/10.0.0.235:59999
> >> remote=namenode/10.0.0.235:60020]
> >> (8 more of these, making for 10 tries)
> >> Sat Dec 31 17:51:09 GMT 2011,
> >> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@1fc4f0f8,
> >> java.net.SocketTimeoutException: Call to namenode/10.0.0.235:60020 failed on
> >> socket timeout exception: java.net.SocketTimeoutException: 60000 millis
> >> timeout while waiting for channel to be ready for read. ch :
> >> java.nio.channels.SocketChannel[connected local=/10.0.0.235:59364
> >> remote=namenode/10.0.0.235:60020]
> >>
> >>       at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
> >>       at java.util.concurrent.FutureTask.get(FutureTask.java:83)
> >>       at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processExecs(HConnectionManager.java:1465)
> >>       at org.apache.hadoop.hbase.client.HTable.coprocessorExec(HTable.java:1555)
> >>       at org.apache.hadoop.hbase.client.coprocessor.AggregationClient.sum(Aggregation