HBase >> mail # user >> Writing doubles into HBase table from Java client


Re: Writing doubles into HBase table from Java client
HBase itself sees only bytes.  It is up to the application to do the proper
encoding and decoding.  As to the Pig script, it may be helpful to check how
Pig stores doubles.

Thanks,
Jimmy
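
[Editor's note: a minimal, JDK-only sketch of the binary encoding involved. HBase's `Bytes.toBytes(double)` writes the 8 big-endian bytes of the IEEE-754 bit pattern, which is why the shell shows escaped raw bytes rather than text; the class below mimics that round trip without an HBase dependency (class and method names here are illustrative, the real class is `org.apache.hadoop.hbase.util.Bytes`):]

```java
import java.nio.ByteBuffer;

public class DoubleBytesDemo {
    // Mirrors what HBase's Bytes.toBytes(double) produces: 8 big-endian
    // bytes of the IEEE-754 bit pattern (raw bytes, not printable text).
    static byte[] toBytes(double d) {
        return ByteBuffer.allocate(8)
                .putLong(Double.doubleToRawLongBits(d))
                .array();
    }

    // Mirrors Bytes.toDouble(byte[]): reverse of the encoding above.
    static double toDouble(byte[] b) {
        return Double.longBitsToDouble(ByteBuffer.wrap(b).getLong());
    }

    public static void main(String[] args) {
        byte[] raw = toBytes(1.23);
        System.out.println(raw.length);    // 8
        System.out.println(toDouble(raw)); // 1.23
    }
}
```

Any client that reads the cell must apply the matching decoder; the bytes themselves carry no type information.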
On Tue, Jul 16, 2013 at 8:36 AM, byte array <[EMAIL PROTECTED]> wrote:

> Hello!
>
> A rather rudimentary question. I have noticed that the HBase shell
> sometimes shows double values in a readable format and sometimes as an
> array of 8 (octal, thus unreadable) bytes.
> This happens when I write them into the table from a Java client, e.g.:
>     org.apache.hadoop.hbase.util.Bytes.toBytes(1.23);
> In this case I have problems reading/mapping the value in Pig script and
> CDH beeswax/hue.
>
> I made a temporary workaround by writing the doubles using Pig's class,
> e.g.:
>     org.apache.pig.backend.hadoop.hbase.HBaseBinaryConverter.toBytes(1.23);
>
> By contrast, when I aggregate and store doubles from a Pig script into
> some other table, they are readable in the HBase shell, as if they were
> strings, and also readable by Pig and other programs.
> I wonder what is the proper way to write doubles into an HBase table from
> a Java client?
>
> Thanks.
>
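
[Editor's note: the readable values described above are likely explained by Pig's `HBaseStorage` defaulting to `Utf8StorageConverter`, which writes values as UTF-8 text rather than raw IEEE-754 bytes, so the shell displays them as strings. A JDK-only sketch of that string encoding, under that assumption (class and helper names are illustrative, not Pig's API):]

```java
import java.nio.charset.StandardCharsets;

public class ReadableDouble {
    // The string encoding: the decimal text of the value as UTF-8 bytes,
    // which the HBase shell displays directly as "1.23".
    static byte[] toStringBytes(double d) {
        return Double.toString(d).getBytes(StandardCharsets.UTF_8);
    }

    // Decoding is a parse of the stored text back into a double.
    static double fromStringBytes(byte[] b) {
        return Double.parseDouble(new String(b, StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        byte[] text = toStringBytes(1.23);
        System.out.println(new String(text, StandardCharsets.UTF_8)); // 1.23
        System.out.println(fromStringBytes(text));                    // 1.23
    }
}
```

Whichever encoding is chosen, the key point from the reply holds: writer and all readers (Java client, Pig, Hive/Beeswax) must agree on the same one.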