Packets are chunks of the input that you pass to the HDFS writer. What problem exactly are you facing (or, why are you trying to raise the client's write packet size)?

-- Harsh J

On Mon, Apr 28, 2014 at 8:52 AM, <[EMAIL PROTECTED]> wrote:
Hadoop writes one packet at a time, and a GZIP-compressed file must be written in its entirety. So I thought that if the packet size is bigger than the compressed file, I could make sure the compressed file is either not written at all or completely written. Is that right? Thanks a lot.
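For context, the client's write packet size is controlled by the `dfs.client-write-packet-size` property (64 KB by default). A minimal sketch of raising it in `hdfs-site.xml`, assuming your Hadoop version honors this property name; note that changing it tunes transport chunking only and is not documented to make writes atomic:

```xml
<!-- hdfs-site.xml (client side): sketch, values are illustrative -->
<configuration>
  <property>
    <name>dfs.client-write-packet-size</name>
    <!-- default is 65536 (64 KB); raising it only changes how the
         client chunks data into packets, not write atomicity -->
    <value>1048576</value>
  </property>
</configuration>
```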