Re: Read Little Endian Input File Format
On Mon, Jul 9, 2012 at 1:33 PM, Mike S <[EMAIL PROTECTED]> wrote:

> The input file to my M/R job is a file of binary data (a mix of 20
> int, long, float, and double values per record), all saved in little
> endian. I have implemented a custom record reader to read each record,
> and to do so I am currently using a ByteBuffer to convert every entry
> in the file. I am wondering if there is a more efficient way of doing
> this?
>

I would either make a large ByteBuffer and read into it or use:

// read big-endian int
int val = in.readInt();
// byte-swap to little-endian
val = ((val & 0xff) << 24)
    | ((val & 0xff00) << 8)
    | ((val & 0xff0000) >>> 8)
    | (val >>> 24);

-- Owen
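
The first alternative above, reading whole records into a ByteBuffer with an explicit byte order, might look roughly like the sketch below. The record size and the field order are illustrative assumptions, not details from the thread:

import java.io.DataInput;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class LittleEndianRecordReaderSketch {

    // Illustrative record size in bytes; the real value depends on the
    // actual mix of int/long/float/double fields in each record.
    private static final int RECORD_SIZE = 120;

    // Read one fixed-size record and decode its fields as little-endian.
    static void readRecord(DataInput in) throws IOException {
        byte[] raw = new byte[RECORD_SIZE];
        in.readFully(raw);

        // Wrap the raw bytes once; ByteBuffer handles the byte swapping.
        ByteBuffer buf = ByteBuffer.wrap(raw).order(ByteOrder.LITTLE_ENDIAN);
        int i = buf.getInt();       // little-endian int
        long l = buf.getLong();     // little-endian long
        float f = buf.getFloat();   // little-endian float
        double d = buf.getDouble(); // little-endian double
        // ... continue in the record's actual field order
    }
}

For the second alternative, the JDK also provides Integer.reverseBytes() and Long.reverseBytes(), which perform the same swap as the manual shifts shown above.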