hdfs unable to create new block with 'Too many open files' exception
Hi Experts,

We failed to run an MR job that accesses Hive, because HDFS was unable to
create a new block during the reduce phase. The exceptions:
  1) In tasklog:
hdfs.DFSClient: DataStreamer Exception: java.io.IOException: Unable to create new block
  2) In HDFS data node log:
DataXceiveServer: IOException due to:java.io.IOException: Too many open files
  ... ...
  at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:96)
  at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:131)
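
The accept() call in the trace fails once the datanode process has exhausted its open-file limit. On Linux, the effective limit of the already-running process can be checked with something like the following (a sketch; it assumes a single DataNode pid, found here via pgrep):

  cat /proc/$(pgrep -f DataNode)/limits | grep 'open files'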

In hdfs-site.xml, we set 'dfs.datanode.max.xcievers' to 8196. At the same
time, we modified /etc/security/limits.conf to increase the nofile limit of
the mapred user to 1048576. But the issue still happens.
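
For reference, the two changes above amount to roughly the following (a sketch using the values quoted earlier; 'xcievers' is the property's historical spelling in Hadoop):

In hdfs-site.xml:

  <property>
    <name>dfs.datanode.max.xcievers</name>
    <value>8196</value>
  </property>

In /etc/security/limits.conf:

  mapred  soft  nofile  1048576
  mapred  hard  nofile  1048576

Note that limits.conf is applied at session start, so a daemon launched before the change keeps its old limit until it is restarted.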

Any suggestions?

Thanks a lot!