Hadoop, mail # user - store file gives exception


AMARNATH, Balachandar 2013-03-06, 09:58
AMARNATH, Balachandar 2013-03-06, 11:18
AMARNATH, Balachandar 2013-03-06, 12:22
Re: store file gives exception
Nitin Pawar 2013-03-06, 12:35
In Hadoop you don't have to worry about data locality. The job tracker will by default try to schedule tasks on the nodes where their input data is located, as long as those nodes have enough compute capacity. Also note that a datanode just stores blocks of a file, and different datanodes will hold different blocks of the same file.
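For reference, if you do want to look up which datanodes hold the blocks of a particular file, the HDFS client API exposes this through FileSystem.getFileBlockLocations. Below is a minimal sketch; the class name and path are placeholders, and it assumes the cluster configuration (core-site.xml / hdfs-site.xml) is on the classpath.

    import java.util.Arrays;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BlockHosts {
        public static void main(String[] args) throws Exception {
            // Placeholder path; point this at a file already stored in HDFS.
            Path file = new Path("/user/bala/kumki/hosts");

            // Picks up the cluster configuration from the classpath.
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            // Ask the namenode which hosts hold each block of the file.
            FileStatus status = fs.getFileStatus(file);
            BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());

            for (BlockLocation block : blocks) {
                System.out.println("offset=" + block.getOffset()
                        + " length=" + block.getLength()
                        + " hosts=" + Arrays.toString(block.getHosts()));
            }
            fs.close();
        }
    }

This only reports where the blocks live; MapReduce already uses the same block-location information when the job tracker assigns map tasks, so a custom locality-aware scheduler is usually unnecessary.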
On Wed, Mar 6, 2013 at 5:52 PM, AMARNATH, Balachandar <
[EMAIL PROTECTED]> wrote:

> Hi all,
>
> I thought the issue below was occurring because there was not enough
> space available. I therefore replaced the datanodes with other nodes that
> have more space, and it worked.
>
> Now I have a working HDFS cluster. I am thinking about my application,
> where I need to execute ‘a set of similar instructions’ (a job) over a
> large number of files. I plan to run this in parallel on different
> machines, and I would like to schedule the job on the datanode that
> already holds the input file. First I will store the files in HDFS. Now,
> to complete my task, is there a scheduler available in the Hadoop
> framework that, given the input file required for a job, can return the
> name of the datanode where the file is actually stored? Am I making sense
> here?
>
> Regards
> Bala
>
> From: AMARNATH, Balachandar [mailto:[EMAIL PROTECTED]]
> Sent: 06 March 2013 16:49
> To: [EMAIL PROTECTED]
> Subject: RE: store file gives exception
>
> Hi,
>
> I could successfully install a Hadoop cluster with three nodes (2 datanodes
> and 1 namenode). However, when I tried to store a file, I got the following
> error.
>
> 13/03/06 16:45:56 WARN hdfs.DFSClient: Error Recovery for block null bad
> datanode[0] nodes == null
>
> 13/03/06 16:45:56 WARN hdfs.DFSClient: Could not get block locations.
> Source file "/user/bala/kumki/hosts" - Aborting...
>
> put: java.io.IOException: File /user/bala/kumki/hosts could only be
> replicated to 0 nodes, instead of 1
>
> 13/03/06 16:45:56 ERROR hdfs.DFSClient: Exception closing file
> /user/bala/kumki/hosts : org.apache.hadoop.ipc.RemoteException:
> java.io.IOException: File /user/bala/kumki/hosts could only be replicated
> to 0 nodes, instead of 1
>
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:396)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
>
> Any hint to fix this?
>
> Regards
> Bala
>
> From: AMARNATH, Balachandar [mailto:[EMAIL PROTECTED]]
> Sent: 06 March 2013 15:29
> To: [EMAIL PROTECTED]
> Subject: store file gives exception
>
> Now I came out of safe mode through the admin command. I tried to put a
> file into HDFS and encountered this error.
>
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> /user/hadoop/hosts could only be replicated to 0 nodes, instead of 1
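For what it's worth, the "could only be replicated to 0 nodes, instead of 1" error generally means the namenode does not see any live datanodes with usable space, which matches the lack-of-space explanation given earlier in the thread. `hadoop dfsadmin -report` lists the live datanodes from the command line; the rough sketch below (an illustration only, assuming the default filesystem in the configuration is HDFS) queries the same information programmatically.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.hdfs.DistributedFileSystem;
    import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

    public class LiveDatanodes {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            // Only valid when fs.default.name points at an HDFS cluster.
            DistributedFileSystem dfs = (DistributedFileSystem) fs;

            // One entry per datanode currently registered with the namenode.
            // If this list is empty, or every node reports ~0 bytes remaining,
            // writes fail with "could only be replicated to 0 nodes".
            for (DatanodeInfo node : dfs.getDataNodeStats()) {
                System.out.println(node.getHostName()
                        + " remaining=" + node.getRemaining() + " bytes");
            }
            dfs.close();
        }
    }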