Re: hdfs unable to create new block with 'Too many open files' exception
In this cluster, the data nodes run as user 'mapred'. Actually, all Hadoop
daemons run as user 'mapred'.
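If it helps, here is a minimal way to confirm what open-file limit the running
DataNode actually has, as opposed to what limits.conf promises. This is only a
sketch and assumes a Linux data node, that the DataNode was started as 'mapred',
and that the JDK's jps is on the PATH (my assumptions, not stated in the thread):

    # Limit a fresh login shell for 'mapred' would get (picks up limits.conf via PAM)
    su - mapred -c 'ulimit -Sn; ulimit -Hn'

    # Limit of the DataNode process that is actually running right now
    DN_PID=$(su - mapred -c 'jps' | awk '/DataNode/ {print $1}')
    grep 'open files' /proc/$DN_PID/limits

    # How many descriptors that process currently holds
    ls /proc/$DN_PID/fd | wc -l

Note that limits.conf is applied by pam_limits at login time, so a DataNode
started before the change, or started through an init script that bypasses PAM,
can still be running with the old limit; restarting the daemon after raising
nofile is usually required.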
2013/12/22 Ted Yu <[EMAIL PROTECTED]>

> Are your data nodes running as user 'hdfs', or 'mapred' ?
>
> If the former, you need to increase file limit for 'hdfs' user.
>
> Cheers
>
>
> On Sat, Dec 21, 2013 at 8:30 AM, sam liu <[EMAIL PROTECTED]> wrote:
>
>> Hi Experts,
>>
>> We failed to run an MR job which accesses Hive, because HDFS is unable to
>> create a new block during the reduce phase. The exceptions:
>>   1) In tasklog:
>> hdfs.DFSClient: DataStreamer Exception: java.io.IOException: Unable to
>> create new block
>>   2) In HDFS data node log:
>> DataXceiveServer: IOException due to:java.io.IOException: Too many open
>> files
>>   ... ...
>>   at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:96)
>>   at
>> org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:131)
>>
>> In hdfs-site.xml, we set 'dfs.datanode.max.xcievers' to 8196. At the same
>> time, we modified /etc/security/limits.conf to increase the nofile limit
>> for the mapred user to 1048576. But this issue still happens.
>>
>> Any suggestions?
>>
>> Thanks a lot!
>>
>>
>
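For reference, and purely as a sketch of the kind of settings the quoted message
describes (the values below simply mirror what was quoted, they are not a
recommendation), the entries would typically look like this.

/etc/security/limits.conf (both soft and hard nofile for the user that runs the
daemons):

    mapred  soft  nofile  1048576
    mapred  hard  nofile  1048576

hdfs-site.xml on the data nodes:

    <property>
      <name>dfs.datanode.max.xcievers</name>
      <value>8196</value>
    </property>

Both changes only take effect for data nodes restarted afterwards, and the
nofile limit must apply to the user that actually launches the DataNode process
(here 'mapred', per the reply above).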