MapReduce >> mail # user >> threads quota is exceeded question


rauljin 2013-04-16, 03:24
Re: threads quota is exceeded question
Hadoop by default limits each DataNode to 5 concurrent block-moving threads
for balancing purposes. That is what causes your problem.
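In older Hadoop releases (such as the one this thread appears to be running), the quota of 5 balancing threads is hard-coded in the DataNode. In Hadoop 2.x and later the limit is exposed as a configuration property. A sketch of how it could be raised in hdfs-site.xml, assuming a Hadoop 2.x+ release (verify the property name and default against your version's hdfs-default.xml):

```xml
<!-- hdfs-site.xml: raise the per-DataNode quota of concurrent block moves
     used by the balancer. This property exists in Hadoop 2.x and later;
     in older releases (as in this thread) the limit of 5 is hard-coded. -->
<property>
  <name>dfs.datanode.balance.max.concurrent.moves</name>
  <value>50</value>
</property>
```

Raising the quota lets more blocks move in parallel during balancing; the balancer's network usage is separately capped by dfs.datanode.balance.bandwidthPerSec.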
On Mon, Apr 15, 2013 at 10:24 PM, rauljin <[EMAIL PROTECTED]> wrote:

>  Hi:
>        The Hadoop cluster is running the balancer.
>
>        And one datanode, 172.16.80.72, logs:
>
> Datanode: Not able to copy block -507744952197054725 to
> /172.16.80.73:51658 because threads quota is exceeded.
>
>
> ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.16.80.72:50010, storageID=DS-1202844662-172.16.80.72-50010-1330656432004, infoPort=50075, ipcPort=50020):DataXceiver
> java.io.IOException: Block blk_8443528692263789109_8159545 is not valid.
>         at org.apache.hadoop.hdfs.server.datanode.FSDataset.getBlockFile(FSDataset.java:734)
>         at org.apache.hadoop.hdfs.server.datanode.FSDataset.getLength(FSDataset.java:722)
>         at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:92)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:172)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:95)
>         at java.lang.Thread.run(Thread.java:636)
>
>
>        And another datanode logs:
>
>
>
> ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.16.80.73:50010, storageID=DS-1771394657-172.16.80.73-50010-1362474580654, infoPort=50075, ipcPort=50020):DataXceiver
> java.io.EOFException
>         at java.io.DataInputStream.readByte(DataInputStream.java:267)
>         at org.apache.hadoop.util.DataChecksum.newDataChecksum(DataChecksum.java:84)
>         at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:92)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.replaceBlock(DataXceiver.java:580)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:115)
>         at java.lang.Thread.run(Thread.java:636)
>
>       At that moment, HDFS was not available.
>
>
> I restarted the 172.16.80.72 datanode service, and the service is OK now.
>
>
>
> What causes this problem?
>
> Any ideas?
>        Thanks!
>
>
> ------------------------------
> rauljin
>