HDFS itself imposes no limit on the number of open files. The 'Too many open files' error comes from the OS, not HDFS. Increase the *system-wide maximum number of open file descriptors and the per-user/group/process file descriptor limits.* On Mon, Jan 27, 2014 at 1:52 AM, Bertrand Dechoux <[EMAIL PROTECTED]> wrote: Regards, ...Sudhakara.st
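As a rough sketch of the OS-level tuning mentioned above (the username and numeric values below are illustrative, not recommendations):

```
# /etc/sysctl.conf — system-wide ceiling on open file descriptors
fs.file-max = 2097152

# /etc/security/limits.conf — per-user descriptor limits
# (illustrative user "hdfs" and illustrative values)
hdfs  soft  nofile  65536
hdfs  hard  nofile  65536
```

Apply the sysctl change with `sysctl -p`; the limits.conf change takes effect on the next login session, and can be checked with `ulimit -n`.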
There is a concurrent-connection limit on the DataNodes, set by default to a maximum of 4096 parallel threaded connections for reading or writing blocks. This too is expandable via configuration, but the default usually suffices even for fairly large workloads, since replicas help spread the read load around.
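For reference, this DataNode ceiling is controlled by the `dfs.datanode.max.transfer.threads` property (named `dfs.datanode.max.xcievers` in older releases). A sketch of raising it in hdfs-site.xml, with an illustrative value:

```xml
<!-- hdfs-site.xml on each DataNode -->
<property>
  <name>dfs.datanode.max.transfer.threads</name>
  <!-- default is 4096; higher value shown for illustration only -->
  <value>8192</value>
</property>
```

DataNodes need a restart to pick up the new value.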
Beyond this you will mostly just run into configurable OS limitations. On Jan 26, 2014 11:03 PM, "John Lilley" <[EMAIL PROTECTED]> wrote: