Error writing file (Invalid argument)
Hi,

I'm facing the issue below with Hadoop.

Configuration:
- 1 AWS node;
- Replication factor set to 1;
- Short-circuit reads enabled (the relevant hdfs-site.xml entries are sketched below).
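
For reference, here is roughly how that looks in my hdfs-site.xml. The property names are the standard HDFS ones (dfs.replication, dfs.client.read.shortcircuit, dfs.domain.socket.path); the socket path shown is the packaged CDH default, so treat the exact values as approximate:

    <!-- hdfs-site.xml (sketch; values approximate) -->
    <property>
      <name>dfs.replication</name>
      <value>1</value>
    </property>
    <property>
      <name>dfs.client.read.shortcircuit</name>
      <value>true</value>
    </property>
    <property>
      <name>dfs.domain.socket.path</name>
      <value>/var/run/hadoop-hdfs/dn._PORT</value>
    </property>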

Exception:
2013-05-02 14:02:41,063 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-1179773663-10.238.38.193-1363960970263:blk_7082931589039745816_1955950 received exception java.io.FileNotFoundException: /mnt/dfs/dn/current/BP-1179773663-10.238.38.193-1363960970263/current/rbw/blk_7082931589039745816_1955950.meta (Invalid argument)
2013-05-02 14:02:41,063 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: ip-10-238-38-193.eu-west-1.compute.internal:50010:DataXceiver error processing WRITE_BLOCK operation  src: /10.238.38.193:39831 dest: /10.238.38.193:50010
java.io.FileNotFoundException: /mnt/dfs/dn/current/BP-1179773663-10.238.38.193-1363960970263/current/rbw/blk_7082931589039745816_1955950.meta (Invalid argument)
        at java.io.RandomAccessFile.open(Native Method)
        at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
        at org.apache.hadoop.hdfs.server.datanode.ReplicaInPipeline.createStreams(ReplicaInPipeline.java:187)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:199)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:457)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:103)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:67)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
        at java.lang.Thread.run(Thread.java:662)
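
As I understand it, java.io.RandomAccessFile surfaces any failure of the underlying open(2) call as a FileNotFoundException with the OS errno text in parentheses, so the "(Invalid argument)" above should be EINVAL from the local filesystem rather than a genuinely missing file. A minimal sketch of that wrapping behaviour (using a directory to provoke a different errno, since EINVAL is hard to trigger portably):

    import java.io.FileNotFoundException;
    import java.io.RandomAccessFile;

    public class OpenErrnoDemo {
        public static void main(String[] args) {
            try {
                // Opening a directory read-write fails in the native open();
                // Java wraps the errno text in a FileNotFoundException, the
                // same way EINVAL shows up as "(Invalid argument)".
                new RandomAccessFile("/tmp", "rw");
            } catch (FileNotFoundException e) {
                System.out.println(e.getMessage()); // "/tmp (Is a directory)" on Linux
            }
        }
    }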

I tried to touch the file, and I can create it as the hdfs user. I can
also delete it. Roughly what I ran is sketched below.
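
Something like this (assuming sudo access to the hdfs account; the path is the .meta file from the log above):

    sudo -u hdfs touch /mnt/dfs/dn/current/BP-1179773663-10.238.38.193-1363960970263/current/rbw/blk_7082931589039745816_1955950.meta
    sudo -u hdfs rm /mnt/dfs/dn/current/BP-1179773663-10.238.38.193-1363960970263/current/rbw/blk_7082931589039745816_1955950.meta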

It's a recurring problem: if I fix the files and clean everything up,
Hadoop is consistent again. But as soon as I start writing into Hadoop
again, I quickly hit the same issue.

Any idea what I should look at?

hbase@ip-10-238-38-193:/mnt/log/hadoop-hdfs$ hadoop version
Hadoop 2.0.0-cdh4.2.0
Subversion file:///var/lib/jenkins/workspace/generic-package-ubuntu64-12-04/CDH4.2.0-Packaging-Hadoop-2013-02-15_10-38-54/hadoop-2.0.0+922-1.cdh4.2.0.p0.12~precise/src/hadoop-common-project/hadoop-common -r 8bce4bd28a464e0a92950c50ba01a9deb1d85686
Compiled by jenkins on Fri Feb 15 11:13:37 PST 2013
From source with checksum 3eefc211a14ac7b6e764d6ded2eeeb26

Thanks,

JM