HBase >> mail # user >> BigDecimalColumnInterpreter


Re: BigDecimalColumnInterpreter
I added one review comment on HBASE-6669
(https://issues.apache.org/jira/browse/HBASE-6669).

Thanks Julian for reminding me.

On Wed, Sep 5, 2012 at 12:49 PM, Julian Wissmann
<[EMAIL PROTECTED]>wrote:

> I get supplied with doubles from sensors, but in the end I lose too much
> precision if I do my aggregations on double; otherwise I'd go for it.
> I use 0.92.1, from Cloudera CDH4.
> I've done some initial testing with LongColumnInterpreter on a dataset that
> I've generated, to do some testing and get accustomed to stuff, but that
> worked like a charm after some initial stupidity on my side.
> So now I'm trying to do some testing with the real data, which comes in as
> double and gets parsed to BigDecimal before writing.
>
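[The precision concern above can be shown with a small, self-contained Java example; no HBase is needed, and the values are purely illustrative:]

```java
import java.math.BigDecimal;

public class PrecisionDemo {
    public static void main(String[] args) {
        // Summing the same reading ten times: double accumulates binary
        // floating-point error, while BigDecimal stays exact.
        double doubleSum = 0.0;
        BigDecimal decimalSum = BigDecimal.ZERO;
        for (int i = 0; i < 10; i++) {
            doubleSum += 0.1;
            decimalSum = decimalSum.add(new BigDecimal("0.1"));
        }
        System.out.println(doubleSum);   // 0.9999999999999999 (not 1.0)
        System.out.println(decimalSum);  // 1.0
    }
}
```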
> 2012/9/5 Ted Yu <[EMAIL PROTECTED]>
>
> > And your HBase version is ?
> >
> > Since you use Double.parseDouble(), looks like it would be more efficient
> > to develop DoubleColumnInterpreter.
> >
> > On Wed, Sep 5, 2012 at 12:07 PM, Julian Wissmann
> > <[EMAIL PROTECTED]>wrote:
> >
> > > Hi,
> > > the schema looks like this:
> > > RowKey: id,timerange_timestamp,offset (String)
> > > Qualifier: Offset (long)
> > > Timestamp: timestamp (long)
> > > Value:number (BigDecimal)
> > >
> > > Or as code when I read data from csv:
> > > byte[] value = Bytes.toBytes(BigDecimal.valueOf(Double.parseDouble(cData[2])));
> > > Cheers,
> > >
> > > Julian
> > >
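[A minimal, HBase-free sketch of the key/value encoding described in the schema above. The sample id, timestamp, and offset values are made up for illustration; only the layout follows the email:]

```java
import java.math.BigDecimal;
import java.nio.charset.StandardCharsets;

public class RowEncoder {
    // Builds the row key "id,timerange_timestamp,offset" from the schema above.
    static byte[] rowKey(String id, long timerangeTimestamp, long offset) {
        return (id + "," + timerangeTimestamp + "," + offset)
                .getBytes(StandardCharsets.UTF_8);
    }

    // Parses a CSV sensor reading into a BigDecimal value, matching the
    // Double.parseDouble(...) line in the email.
    static BigDecimal parseValue(String csvField) {
        return BigDecimal.valueOf(Double.parseDouble(csvField));
    }

    public static void main(String[] args) {
        System.out.println(new String(rowKey("sensor42", 1346800000L, 7L),
                StandardCharsets.UTF_8));        // sensor42,1346800000,7
        System.out.println(parseValue("3.14"));  // 3.14
    }
}
```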
> > > 2012/9/5 Ted Yu <[EMAIL PROTECTED]>
> > >
> > > > You haven't told us the schema of your table yet.
> > > > Your table should have column whose value can be interpreted by
> > > > BigDecimalColumnInterpreter.
> > > >
> > > > Cheers
> > > >
> > > > On Wed, Sep 5, 2012 at 9:17 AM, Julian Wissmann <
> > > [EMAIL PROTECTED]
> > > > >wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > I am currently experimenting with the BigDecimalColumnInterpreter
> > > > > from https://issues.apache.org/jira/browse/HBASE-6669.
> > > > >
> > > > > I was thinking the best way for me to work with it would be to use
> > > > > the Java class and just use that as is.
> > > > >
> > > > > Imported it into my project and tried to work with it as is, by
> > > > > just instantiating the ColumnInterpreter as
> > > > > BigDecimalColumnInterpreter. Okay, threw errors and also complained
> > > > > about not knowing where to find such a class.
> > > > >
> > > > > So I did some reading and found out that I'd need to have an
> > > > > Endpoint for it. So I imported AggregateImplementation and
> > > > > AggregateProtocol into my workspace, renamed them, and refactored
> > > > > them where necessary to take BigDecimal. Re-exported the jar, then
> > > > > had another try.
> > > > >
> > > > > So when I call:
> > > > > ------
> > > > > final Scan scan = new Scan((metricID + "," + basetime_begin).getBytes(),
> > > > >     (metricID + "," + basetime_end).getBytes());
> > > > > scan.addFamily(family.getBytes());
> > > > > final ColumnInterpreter<BigDecimal, BigDecimal> ci =
> > > > >     new BigDecimalColumnInterpreter();
> > > > > Map<byte[], BigDecimal> results =
> > > > >     table.coprocessorExec(BigDecimalProtocol.class, null, null,
> > > > >         new Batch.Call<BigDecimalProtocol, BigDecimal>() {
> > > > >           public BigDecimal call(BigDecimalProtocol instance)
> > > > >               throws IOException {
> > > > >             return instance.getMax(ci, scan);
> > > > >           }
> > > > >         });
> > > > > ------
> > > > > I get errors in the log again that it can't find
> > > > > BigDecimalColumnInterpreter... okay, so I tried
> > > > > ------
> > > > > Scan scan = new Scan((metricID + "," + basetime_begin).getBytes(),
> > > > >     (metricID + "," + basetime_end).getBytes());
> > > > > scan.addFamily(family.getBytes());
> > > > > final ColumnInterpreter<BigDecimal, BigDecimal> ci =
> > > > >     new BigDecimalColumnInterpreter();
> > > > > AggregationClient ag = new AggregationClient(config);
> > > > > BigDecimal max = ag.max(Bytes.toBytes(tableName), ci, scan);
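[Context for the "can't find BigDecimalColumnInterpreter" errors above: in 0.92-era HBase, an aggregation endpoint must be loaded on every region server before either coprocessorExec or AggregationClient can reach it. A sketch of the usual hbase-site.xml entry follows; the value shown is the stock AggregateImplementation class, and a renamed BigDecimal variant (plus the jar containing it on the region servers' classpath) would be listed here instead:]

```xml
<!-- hbase-site.xml on every region server (restart required).
     Replace the class below with the renamed BigDecimal endpoint if one
     is used; multiple classes may be given, comma-separated. -->
<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>org.apache.hadoop.hbase.coprocessor.AggregateImplementation</value>
</property>
```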