Hadoop >> mail # user >> store file gives exception


RE: store file gives exception
Hi all,

I thought the issue below was occurring because of a lack of available disk space. I therefore replaced the datanodes with other nodes that have more space, and it worked.

Now I have a working HDFS cluster. I am thinking about my application, where I need to execute a set of similar instructions (a job) over a large number of files. I plan to run these jobs in parallel on different machines, and I would like to schedule each job on the datanode that already holds its input file. First, I will store the files in HDFS. Now, to complete my task: is there a scheduler available in the Hadoop framework that, given the input file required for a job, can return the name of the datanode where the file is actually stored? Am I making sense here?
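On the locality question above: the MapReduce framework already tries to schedule each map task on a node that holds a replica of its input block (data locality), so an explicit lookup is often unnecessary. If the block-to-datanode mapping is needed directly, the namenode can report it, either programmatically via `FileSystem#getFileBlockLocations` or from the command line with `fsck`. A minimal sketch, assuming a Hadoop 1.x-era installation and a hypothetical file path:

```shell
# Ask the namenode which datanodes hold the blocks of a file.
# /user/bala/input.txt is a hypothetical example path.
hadoop fsck /user/bala/input.txt -files -blocks -locations
```

The `-locations` flag prints, for each block, the datanodes storing its replicas.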

Regards
Bala

From: AMARNATH, Balachandar [mailto:[EMAIL PROTECTED]]
Sent: 06 March 2013 16:49
To: [EMAIL PROTECTED]
Subject: RE: store file gives exception

Hi,

I was able to successfully install a Hadoop cluster with three nodes (2 datanodes and 1 namenode). However, when I tried to store a file, I got the following error.

13/03/06 16:45:56 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
13/03/06 16:45:56 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/bala/kumki/hosts" - Aborting...
put: java.io.IOException: File /user/bala/kumki/hosts could only be replicated to 0 nodes, instead of 1
13/03/06 16:45:56 ERROR hdfs.DFSClient: Exception closing file /user/bala/kumki/hosts : org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/bala/kumki/hosts could only be replicated to 0 nodes, instead of 1
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
            at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
            at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
            at java.security.AccessController.doPrivileged(Native Method)
            at javax.security.auth.Subject.doAs(Subject.java:396)
            at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
            at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

Any hint to fix this?
Regards
Bala
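The "could only be replicated to 0 nodes, instead of 1" error above generally means the namenode sees no live datanode able to accept a block: the datanodes have not registered, are out of disk space, or cannot reach the namenode. A couple of checks worth running on the namenode (a sketch; command names match Hadoop 1.x, and the log path assumes the default layout):

```shell
# Show how many datanodes the namenode considers live,
# and how much capacity each one reports.
hadoop dfsadmin -report

# Inspect a datanode log for registration or disk errors
# (assumes the default log directory; adjust for your install).
tail -n 50 $HADOOP_HOME/logs/hadoop-*-datanode-*.log
```

If `dfsadmin -report` shows zero live datanodes or zero remaining capacity, the replication error follows directly.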
From: AMARNATH, Balachandar [mailto:[EMAIL PROTECTED]]
Sent: 06 March 2013 15:29
To: [EMAIL PROTECTED]
Subject: store file gives exception

Now I came out of safe mode through the admin command. I tried to put a file into HDFS and encountered this error.

org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/hosts could only be replicated to 0 nodes, instead of 1

Any hint to fix this?

This happens when the namenode is not also a datanode. Am I making sense?

With thanks and regards
Balachandar
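For reference, the safe-mode state mentioned above can be inspected and left with `dfsadmin` (Hadoop 1.x command names). Note that safe mode and the replication error are separate issues; the latter points at missing or full datanodes, not at the namenode's safe-mode state:

```shell
# Check whether the namenode is currently in safe mode.
hadoop dfsadmin -safemode get

# Leave safe mode manually (the namenode also leaves it
# automatically once enough blocks have been reported).
hadoop dfsadmin -safemode leave
```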
The information in this e-mail is confidential. The contents may not be disclosed or used by anyone other than the addressee. Access to this e-mail by anyone else is unauthorised.

If you are not the intended recipient, please notify Airbus immediately and delete this e-mail.

Airbus cannot accept any responsibility for the accuracy or completeness of this e-mail as it has been sent over public networks. If you have any concerns over the content of this message or its Accuracy or Integrity, please contact Airbus immediately.

All outgoing e-mails from Airbus are checked using regularly updated virus scanning software but you should take whatever measures you deem to be appropriate to ensure that this message and any attachments are virus free.
