MapReduce >> mail # user >> Can not generate a result


Astie Darmayantie 2012-08-13, 04:36
Re: Can not generate a result
Hello Astie,

   Please make sure your datanode is up. It looks like you have not set the
"hadoop.tmp.dir", "dfs.name.dir", and "dfs.data.dir" properties. These
properties default to the /tmp directory, which is emptied on each restart,
so you lose all of your data and metadata.
Regards,
    Mohammad Tariq
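
For reference, a minimal sketch of the configuration the reply describes, moving those three directories off /tmp. The property names are the ones named above; the paths are example values, not taken from the thread:

```xml
<!-- hdfs-site.xml: keep HDFS metadata and block data out of /tmp,
     which many systems clear on reboot. Paths are example values. -->
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/home/astie/hadoop/dfs/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/home/astie/hadoop/dfs/data</value>
  </property>
</configuration>
```

```xml
<!-- core-site.xml: hadoop.tmp.dir lives here, not in hdfs-site.xml -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/astie/hadoop/tmp</value>
</property>
```

After changing these, you would typically re-run bin/hadoop namenode -format and start-all.sh, then check with jps that a DataNode process is actually running.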

On Mon, Aug 13, 2012 at 10:06 AM, Astie Darmayantie <
[EMAIL PROTECTED]> wrote:

> Hi, I am new to Hadoop.
> I have already done the preliminary steps, such as configuring Hadoop for
> pseudo-distributed operation and running namenode -format, before
> running start-all.sh.
>
> I then tried to execute the sample WordCount program with:
> ./bin/hadoop jar /home/astie/thesis/project_eclipse/WordCount.jar
> WordCount /home/astie/thesis/project_eclipse/input/
> /home/astie/thesis/project_eclipse/output/
>
> It does not generate any result, and I get this in the log file:
>
> 2012-08-13 11:28:27,053 WARN org.apache.hadoop.hdfs.DFSClient: Error
> Recovery for block null bad datanode[0] nodes == null
> 2012-08-13 11:28:27,053 WARN org.apache.hadoop.hdfs.DFSClient: Could not
> get block locations. Source file "/tmp/mapred/system/jobtracker.info" -
> Aborting...
> 2012-08-13 11:28:27,053 WARN org.apache.hadoop.mapred.JobTracker: Writing
> to file hdfs://localhost:9000/tmp/mapred/system/jobtracker.info failed!
> 2012-08-13 11:28:27,054 WARN org.apache.hadoop.mapred.JobTracker:
> FileSystem is not ready yet!
> 2012-08-13 11:28:27,059 WARN org.apache.hadoop.mapred.JobTracker: Failed
> to initialize recovery manager.
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> /tmp/mapred/system/jobtracker.info could only be replicated to 0 nodes,
> instead of 1
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
>         at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
>         at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:616)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:416)
>         at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
>
>         at org.apache.hadoop.ipc.Client.call(Client.java:1070)
>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
>         at $Proxy5.addBlock(Unknown Source)
>         at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
>         at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:616)
>         at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>         at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>         at $Proxy5.addBlock(Unknown Source)
>         at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3510)
>         at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3373)
>         at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2589)
>         at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2829)
>
> I am using openSUSE and hadoop-1.0.3, and I am using Eclipse to write the
> program.
> The log says the datanode was null. Yes, I am still running everything on
> a single computer. Is that the problem?
> Can you tell me how to fix this? Thank you.
>
Harsh J 2012-08-13, 14:50
Harsh J 2012-08-13, 16:25
Harsh J 2012-08-13, 15:39