Hadoop >> mail # general >> Errors in the log


Re: Errors in the log
+[EMAIL PROTECTED]
bcc: [EMAIL PROTECTED]

Hey Pete,

The [EMAIL PROTECTED] list is for high-level discussion of the
Apache Hadoop community (usually votes and governance issues). A question
like this is more appropriate for a *-user list, and since, judging by the
version numbers, you're using CDH3b3, I've added [EMAIL PROTECTED].

Though I can't comment on the errors you're seeing in the DN log, I do
recognize both errors in your 2NN and NN logs. They're due to a known bug
in CDH3b3 wherein the 2NN incorrectly determines its own host name during a
checkpoint and so tells the NN it can be found at 0.0.0.0. (The
"&machine=0.0.0.0" is the giveaway.) This bug will be fixed in the next
release of CDH, but in the meantime the workaround is to set
"dfs.secondary.http.address" to a valid machine name or IP address that
resolves to your 2NN.
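As a rough sketch, the workaround would look something like this in hdfs-site.xml on the secondary NameNode host (the hostname "snn.example.com" is a placeholder; substitute your 2NN's actual hostname or IP):

```xml
<!-- hdfs-site.xml on the 2NN host. "snn.example.com" is a placeholder
     hostname; 50090 is the default secondary NameNode HTTP port. -->
<property>
  <name>dfs.secondary.http.address</name>
  <value>snn.example.com:50090</value>
</property>
```

Restart the 2NN after the change so the next checkpoint advertises the correct address to the NN.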

--
Aaron T. Myers
Software Engineer, Cloudera

On Mon, Feb 7, 2011 at 10:27 AM, Peter Haidinyak <[EMAIL PROTECTED]> wrote:

> HBase 0.89.20100924+28
> Hadoop 0.20.2+737
>
> During my import process I'm starting to see various warnings and errors in
> my Hadoop logs. This just started to happen; the import process has been
> working for a while. I've put some of the errors from the logs on
> various machines here to see if this is a known problem.
>
> Thanks
>
> -Pete
>
> Datanode log
>
> ERROR org.apache.hadoop.hdfs.server.datanode.DataNode:
> DatanodeRegistration(172.16.2.224:50010,
> storageID=DS-118625752-172.16.2.224-50010-1294851626750, infoPort=50075,
> ipcPort=50020):DataXceiver
> java.io.EOFException: while trying to read 65557 bytes
>
> ERROR org.apache.hadoop.hdfs.server.datanode.DataNode:
> DatanodeRegistration(172.16.2.224:50010,
> storageID=DS-118625752-172.16.2.224-50010-1294851626750, infoPort=50075,
> ipcPort=50020):DataXceiver
> java.io.IOException: Interrupted receiveBlock
>
> WARN org.apache.hadoop.hdfs.server.protocol.InterDatanodeProtocol: Failed
> to getBlockMetaDataInfo for block (=blk_2012842016347254862_70849) from
> datanode (=172.16.2.224
> :50010)
> java.io.IOException: Block blk_2012842016347254862_70849 length is 16906240
> does not match block file length 16971264
>
> WARN org.apache.hadoop.hdfs.server.datanode.DataNode: IOException in
> BlockReceiver.run():
> java.io.IOException: Broken pipe
>
> WARN org.apache.hadoop.hdfs.server.datanode.DataNode: checkDiskError:
> exception:
> java.io.IOException: Broken pipe
>
> WARN org.apache.hadoop.hdfs.server.datanode.DataNode: IOException in
> BlockReceiver.run():
> java.io.IOException: The stream is closed
>
>
>
> Namenode log
>
> WARN org.mortbay.log: /getimage: java.io.IOException: GetImage failed.
> java.io.IOException: Content-Length header is not provided by the namenode
> when trying to fetch http://0.0.0.0:50090/getimage?getimage=1
>
>
> Secondary name node log
>
> ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception
> in doCheckpoint:
> 2011-02-07 08:51:15,062
> ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode:
> java.io.FileNotFoundException:
> http://caiss01a:50070/getimage?putimage=1&port=50090&machine=0.0.0.0&token=-19:84946961:0:1297097472000:1297097169213
>