HBase user mailing list: Finishing writing output logs and closing down


jing wang 2013-01-08, 12:37
Jean-Marc Spaggiari 2013-01-08, 12:40
Re: Finishing writing output logs and closing down
Hi JM,

Sorry for that; the HBase version is 0.90.6-cdh3u5.
I finally found the root cause:
2013-01-08 21:39:27,837 WARN org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #0 from primary datanode 192.168.17.104:50010
java.io.IOException: Call to /192.168.17.104:50020 failed on local exception: java.io.IOException: Couldn't set up IO streams
        at org.apache.hadoop.ipc.Client.wrapException(Client.java:1187)
        at org.apache.hadoop.ipc.Client.call(Client.java:1155)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
        at $Proxy11.getProtocolVersion(Unknown Source)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:398)
        at org.apache.hadoop.hdfs.DFSClient.createClientDatanodeProtocolProxy(DFSClient.java:175)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:3281)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2200(DFSClient.java:2792)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2964)
Caused by: java.io.IOException: Couldn't set up IO streams
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:634)
        at org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:212)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1292)
        at org.apache.hadoop.ipc.Client.call(Client.java:1121)
        ... 7 more
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
        at java.lang.Thread.start0(Native Method)
        at java.lang.Thread.start(Thread.java:640)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:627)
        ... 10 more
2013-01-08 21:39:27,838 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for block blk_-8621396941230567731_1305630 failed because recovery from primary datanode 192.168.17.104:50010 failed 1 times. Pipeline was 192.168.17.109:50010, 192.168.17.108:50010, 192.168.17.104:50010. Will retry...
2013-01-08 21:39:27,862 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception: java.lang.OutOfMemoryError: unable to create new native thread
        at java.lang.Thread.start0(Native Method)
        at java.lang.Thread.start(Thread.java:640)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2991)
‘ulimit -a’ showed ‘max user processes’ was only 1024.
I raised it to 60000, which solved my problem.
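
In case it helps anyone hitting the same error, here is a rough sketch of the check and the fix we applied. The 'hbase' user name and the file paths are only examples for illustration; adjust them to whichever user actually runs your RegionServer/DataNode processes:

    # check the limit as seen by the process owner (prints max user processes, 1024 in our case)
    su - hbase -c 'ulimit -u'

    # raise it persistently, e.g. in /etc/security/limits.conf
    # (or a file under /etc/security/limits.d/)
    hbase   soft   nproc   60000
    hbase   hard   nproc   60000

    # log in again (or restart the daemons) and verify
    su - hbase -c 'ulimit -u'

Every thread the JVM starts needs a native thread, and on Linux the per-user process limit counts threads too, so once that limit is reached Thread.start() fails with 'unable to create new native thread' even when there is plenty of heap left.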

Regards,
Jing

2013/1/8 Jean-Marc Spaggiari <[EMAIL PROTECTED]>

> Hi Jing,
>
> I'm not sure 0.20.2-cdh3u5 is your HBase version. Can you try to get
> the version from the shell?
>
> JM
>
> 2013/1/8, jing wang <[EMAIL PROTECTED]>:
> > Hi there,
> >
> >    It always shows 'Finishing writing output logs and closing down.'
> > What's wrong with our cluster?
> >
> >
> > hadoop version:0.20.2-cdh3u5
> > hbase version:0.20.2-cdh3u5
> >
> > Thanks,
> >
>