MapReduce >> mail # user >> Does libhdfs c/c++ api support read/write compressed file


Re: Does libhdfs c/c++ api support read/write compressed file
Silly question... then what's meant by the native libraries when you talk about compression?

On Jun 3, 2013, at 5:27 AM, Harsh J <[EMAIL PROTECTED]> wrote:

> Hi Xu,
>
> HDFS is data agnostic. It does not currently care about what form the
> data of the files are in - whether they are compressed, encrypted,
> serialized in format-x, etc..
>
> There are hadoop-common APIs that support decompressing of supported
> codecs, but there are no C/C++ level implementations of these (though
> you may use JNI). You will have to write/use your own
> decompress/compress code for files.
>
> On Mon, Jun 3, 2013 at 12:33 PM, Xu Haiti <[EMAIL PROTECTED]> wrote:
>>
>> I found a post from around 2010 saying that libhdfs does not support
>> reading/writing gzip files.
>>
>> I downloaded the newest hadoop-2.0.4 and read hdfs.h. There are no
>> compression-related arguments there either.
>>
>> So I am wondering: does it support reading compressed files now?
>>
>> If not, how can I patch libhdfs to make it work?
>>
>> Thanks in advance.
>>
>> Best Regards
>> Haiti
>
>
>
> --
> Harsh J
>