Packets are chunks of the input you try to pass to the HDFS writer. What problem exactly are you facing (or, why are you trying to raise the client's write packet size)?

Harsh J

On Mon, Apr 28, 2014 at 8:52 AM, <[EMAIL PROTECTED]> wrote:
> Hadoop writes one packet at a time, and a GZIP-compressed file must be written in its entirety, so I think that if the packet size is bigger than the compressed file, I can make sure the compressed file is either not written at all or completely written. Is that right? Thanks a lot.
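For context, the setting under discussion is the client-side property dfs.client-write-packet-size, which defaults to 64 KB. Below is a minimal sketch of raising it with the Hadoop Java client; the class name, the path, and the 16 MB value (echoing the figure quoted further down the thread) are illustrative assumptions, and, as the reply above explains, a larger packet does not by itself make the file write all-or-nothing.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class PacketSizeSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Client-side write packet size; the stock default is 64 KB.
            // 16 MB is used purely for illustration, not as a recommendation.
            conf.setInt("dfs.client-write-packet-size", 16 * 1024 * 1024);

            // Assumes fs.defaultFS in the loaded configuration points at HDFS.
            FileSystem fs = FileSystem.get(conf);
            try (FSDataOutputStream out = fs.create(new Path("/tmp/example.gz"))) {
                // Placeholder GZIP magic bytes; a bigger packet still gives
                // no guarantee that the file appears atomically to readers.
                out.write(new byte[] { 0x1f, (byte) 0x8b });
            }
        }
    }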
You do not need to alter the packet size to write files - why do you think you need larger packets than the default one?
On Mon, Apr 28, 2014 at 4:04 PM, <[EMAIL PROTECTED]> wrote:
> … 16M).
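Since the underlying goal here is all-or-nothing visibility of a finished GZIP file, the usual pattern is to write to a staging path and then rename it into place, because a rename within HDFS is atomic: readers of the final path see either nothing or the complete file. A minimal sketch, with both paths being hypothetical:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class AtomicPublishSketch {
        public static void main(String[] args) throws IOException {
            FileSystem fs = FileSystem.get(new Configuration());
            Path staging = new Path("/tmp/data.gz._COPYING_"); // hypothetical staging path
            Path dest = new Path("/data/data.gz");             // hypothetical final path

            // Write the whole compressed file to the staging path first;
            // partial writes are only ever visible here, never at dest.
            try (FSDataOutputStream out = fs.create(staging)) {
                out.write(new byte[] { 0x1f, (byte) 0x8b }); // placeholder GZIP bytes
            }

            // rename() is atomic within HDFS, so dest flips from absent to
            // complete in one step.
            if (!fs.rename(staging, dest)) {
                throw new IOException("rename failed: " + staging + " -> " + dest);
            }
        }
    }

This is the same trick the HDFS shell uses when copying files in: it stages under a ._COPYING_ suffix and renames on completion.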