Re: hdfs unable to create new block with 'Too many open files' exception
sam liu 2013-12-21, 17:25
In this cluster, the data nodes run as user 'mapred'. Actually, all Hadoop
daemons run as user 'mapred'.
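(A quick way to double-check this, and the open-file limit the process
actually has; the <datanode-pid> placeholder below is only illustrative:
  ps -ef | grep -i datanode             # first column is the owning user
  cat /proc/<datanode-pid>/limits | grep 'open files'
The second command shows the soft and hard 'Max open files' values in
effect for the running process, regardless of what limits.conf says.)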
2013/12/22 Ted Yu <[EMAIL PROTECTED]>
> Are your data nodes running as user 'hdfs' or 'mapred'?
> If the former, you need to increase the file limit for the 'hdfs' user.
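> For example, a /etc/security/limits.conf entry along these lines raises
> the limit for the 'hdfs' account (the 65536 value is only illustrative):
>   hdfs  -  nofile  65536
> The daemon then has to be restarted from a fresh login session of that
> user before the new limit takes effect.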
> On Sat, Dec 21, 2013 at 8:30 AM, sam liu <[EMAIL PROTECTED]> wrote:
>> Hi Experts,
>> We failed to run an MR job that accesses Hive, because HDFS is unable to
>> create a new block during the reduce phase. The exceptions:
>> 1) In tasklog:
>> hdfs.DFSClient: DataStreamer Exception: java.io.IOException: Unable to
>> create new block
>> 2) In HDFS data node log:
>> DataXceiveServer: IOException due to:java.io.IOException: Too many open files
>> ... ...
>> at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:96)
>> In hdfs-site.xml, we set 'dfs.datanode.max.xcievers' to 8196. At the same
>> time, we modified /etc/security/limits.conf to raise the nofile limit for
>> the 'mapred' user to 1048576. But the issue still occurs.
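>> Concretely, the settings we applied look like this (the XML wrapper is
>> just the standard Hadoop property form):
>>   <property>
>>     <name>dfs.datanode.max.xcievers</name>
>>     <value>8196</value>
>>   </property>
>> and in /etc/security/limits.conf:
>>   mapred  -  nofile  1048576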
>> Any suggestions?
>> Thanks a lot!