Re: Why my tests shows Yarn is worse than MRv1 for terasort?
How many map and reduce slots are you using per tasktracker in MR1?  How do
the average map times compare? (MR2 reports this directly on the web UI,
but you can also get a sense in MR1 by scrolling through the map tasks
page).  Can you share the counters for MR1?

-Sandy
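
For reference, the per-tasktracker slot counts Sandy asks about, and the JVM reuse setting Jian mentions, are both set in mapred-site.xml on MR1. A minimal sketch, assuming Hadoop 1.x property names; the values shown are illustrative examples, not recommendations:

```xml
<!-- mapred-site.xml (MR1 / Hadoop 1.x); values are illustrative only -->
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>8</value>   <!-- map slots per tasktracker -->
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>4</value>   <!-- reduce slots per tasktracker -->
</property>
<property>
  <name>mapred.job.reuse.jvm.num.tasks</name>
  <value>1</value>   <!-- 1 = no JVM reuse; -1 = reuse the JVM indefinitely -->
</property>
```

In MR2/YARN there are no fixed slots; per-task container sizes (e.g. mapreduce.map.memory.mb) and the NodeManager's total resources determine concurrency instead, which is one reason slot counts and average task times need to be compared explicitly when benchmarking the two.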
On Wed, Oct 23, 2013 at 12:23 AM, Jian Fang
<[EMAIL PROTECTED]> wrote:

> Unfortunately, turning off JVM reuse still gave the same result, i.e.,
> about 90 minutes for MR2. I don't think the killed reducers could account
> for a 2x slowdown. There must be something very wrong in either the
> configuration or the code. Any hints?
>
>
>
> On Tue, Oct 22, 2013 at 5:50 PM, Jian Fang <[EMAIL PROTECTED]> wrote:
>
>> Thanks Sandy. I will try to turn JVM reuse off and see what happens.
>>
>> Yes, I saw quite a few exceptions in the task attempts. For instance:
>>
>>
>> 2013-10-20 03:13:58,751 ERROR [main]
>> org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
>> as:hadoop (auth:SIMPLE) cause:java.nio.channels.ClosedChannelException
>> 2013-10-20 03:13:58,752 ERROR [Thread-6]
>> org.apache.hadoop.hdfs.DFSClient: Failed to close file
>> /1-tb-data/_temporary/1/_temporary/attempt_1382237301855_0001_m_000200_1/part-m-00200
>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException):
>> No lease on
>> /1-tb-data/_temporary/1/_temporary/attempt_1382237301855_0001_m_000200_1/part-m-00200:
>> File does not exist. Holder
>> DFSClient_attempt_1382237301855_0001_m_000200_1_872378586_1 does not have
>> any open files.
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2737)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:2801)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:2783)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:611)
>>         at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:429)
>>         at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:48077)
>>         at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:582)
>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
>> --
>>         at com.sun.proxy.$Proxy10.complete(Unknown Source)
>>         at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.complete(ClientNamenodeProtocolTranslatorPB.java:371)
>>         at
>> org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:1910)
>>         at
>> org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:1896)
>>         at
>> org.apache.hadoop.hdfs.DFSClient.closeAllFilesBeingWritten(DFSClient.java:773)
>>         at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:790)
>>         at
>> org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:847)
>>         at
>> org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2526)
>>         at
>> org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2551)
>>         at
>> org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
>> 2013-10-20 03:13:58,753 WARN [main] org.apache.hadoop.mapred.YarnChild:
>> Exception running child : java.nio.channels.ClosedChannelException
>>         at
>> org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:1325)
>>         at
>> org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:98)
>>         at
>> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:61)