Hadoop >> mail # dev >> Re: Issue on running examples


Re: Issue on running examples
Figured out the issue. It was caused by incorrectly passing the parameters "-Dmapreduce.job.user.name=$USER -Dmapreduce.clientfactory.class.name=org.apache.hadoop.mapred.YarnClientFactory -Dmapreduce.randomwriter.bytespermap=10000 -Ddfs.blocksize=536870912 -Ddfs.block.size=536870912 -libjars ./share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-0.24.0-SNAPSHOT.jar "
 
If I drop them, the example runs, but some exceptions are still thrown in the output, for example:
 
"
2012-02-07 20:05:25,202 WARN  mapreduce.Job (Job.java:getTaskLogs(1460)) - Error reading task output Server returned HTTP response code: 400 for URL: http://localhost:8080/tasklog?plaintext=true&attemptid=attempt_1328672895560_0002_m_000003_1&filter=stdout

The link above displayed page with "Required param job, map and reduce"

I am going to look into the remaining ones.
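For reference, a sketch of where such generic options are normally placed, assuming the example uses ToolRunner/GenericOptionsParser: the `-D` and `-libjars` options go on the `hadoop jar` command line, after the program name and before its arguments, rather than being appended to the daemon start command (jar paths and the version are taken from the commands quoted in this thread):

```shell
# Sketch only: pass generic options to the job client, not to the daemon.
./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-0.24.0-SNAPSHOT.jar \
    randomwriter \
    -Dmapreduce.randomwriter.bytespermap=10000 \
    -Ddfs.blocksize=536870912 \
    -libjars ./share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-0.24.0-SNAPSHOT.jar \
    output
```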

Hai
 
----- Original Message -----
From: Hai Huang <[EMAIL PROTECTED]>
To: "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
Cc:
Sent: Sunday, February 5, 2012 9:52:21 PM
Subject: Re: Issue on running examples

I am following the steps below to run an example, randomwriter:
 
1.     sbin/hadoop-daemon.sh start namenode -Dmapreduce.job.user.name=$USER -Dmapreduce.clientfactory.class.name=org.apache.hadoop.mapred.YarnClientFactory -Dmapreduce.randomwriter.bytespermap=10000 -Ddfs.blocksize=536870912 -Ddfs.block.size=536870912 -libjars ./share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-0.24.0-SNAPSHOT.jar 
 
2.     sbin/hadoop-daemon.sh start datanode
 
3.    bin/yarn-daemon.sh start resourcemanager
 
4.    bin/yarn-daemon.sh start nodemanager
 
5.  ./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-0.24.0-SNAPSHOT.jar randomwriter output
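The five steps above as a single script, for reference. This is a sketch only; it assumes the fix noted earlier in the thread (the namenode is started without the job parameters appended), that it is run from the Hadoop install directory, and that HDFS has already been formatted:

```shell
#!/bin/sh
# Sketch of the startup sequence from the steps above.
sbin/hadoop-daemon.sh start namenode      # step 1, without the -D job options
sbin/hadoop-daemon.sh start datanode      # step 2
bin/yarn-daemon.sh start resourcemanager  # step 3
bin/yarn-daemon.sh start nodemanager      # step 4
# step 5: submit the example job
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-0.24.0-SNAPSHOT.jar randomwriter output
```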
 
Step 5 reported the error message below:
 
=================================================================== 
2012-02-05 18:44:21,905 WARN  conf.Configuration (Configuration.java:set(639)) - mapred.used.genericoptionsparser is deprecated. Instead, use mapreduce.client.genericoptionsparser.used
Running 10 maps.
Job started: Sun Feb 05 18:44:22 PST 2012
2012-02-05 18:44:22,512 WARN  conf.Configuration (Configuration.java:handleDeprecation(326)) - fs.default.name is deprecated. Instead, use fs.defaultFS
2012-02-05 18:44:22,618 WARN  hdfs.DFSClient (DFSOutputStream.java:run(549)) - DataStreamer Exception
java.io.IOException: java.io.IOException: File /tmp/hadoop-yarn/staging/hai/.staging/job_1328468416955_0007/libjars/hadoop-mapreduce-client-jobclient-0.24.0-SNAPSHOT.jar could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and no node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1145)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1540)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:477)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:346)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:42602)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:439)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:862)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1608)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1604)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:416)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1177)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1602)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:203)
        at $Proxy10.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:616)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:127)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:81)
        at $Proxy10.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:355)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1097)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:973)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:455)
2012-02-05 18:44:22,620 INFO  mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(388)) - Cleaning up the staging area /tmp/hadoop-yarn/staging/hai/.staging/job_1328468416955_0007
2012-02-05 18:44:22,626 ERROR security.UserGroupInformation (UserGroupInformation.java:doAs(1180)) - PriviledgedActionException as:hai (auth:SIMPLE) cause:java.io.IOException: java.io.IOException: File /tmp/hadoop-yarn/staging/hai/.staging/job_1328468416955_0007/libjars/hadoop-mapreduce-client-jobclient-0.24.0-SNAPSHOT.jar could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and no node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1145)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1540)
        at org.apache.hadoop.hdfs.server.namenode.N
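As an aside, "could only be replicated to 0 nodes instead of minReplication (=1)" typically means the datanode is not able to accept blocks, for instance because it has not registered with the namenode or is out of disk space. Two standard checks (paths assumed relative to the Hadoop install directory; log file name is a guess based on the default naming scheme):

```shell
# Confirm the datanode has registered and reports non-zero capacity.
bin/hdfs dfsadmin -report

# Look for registration or namespace-ID errors in the datanode log.
tail -n 50 logs/hadoop-*-datanode-*.log
```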