Re: Yarn HDFS and Yarn Exceptions when processing "larger" datasets.
Hi

Just a quick reply (tomorrow is my prototype presentation).

@Omkar Joshi
- The RM scheduler port 8030 is already up and running when I start my AM
(see the sketch after this list)
- I'll adjust the client thread size for the AM
- Only the AM communicates with the RM
- RM/NM: no exceptions there (as far as I remember; I will check later [sorry])
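
A minimal sketch related to the port 8030 point above (an illustration only,
not the poster's actual code; the class name SchedulerAddressCheck is made
up): it prints the scheduler address the AM's YarnConfiguration actually
resolves. The destination "0.0.0.0":8030 in the stack trace below is the
built-in default for yarn.resourcemanager.scheduler.address, which may mean
the AM is falling back to defaults instead of the cluster's yarn-site.xml.

    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class SchedulerAddressCheck {
        public static void main(String[] args) {
            // YarnConfiguration loads yarn-default.xml and yarn-site.xml from the classpath.
            YarnConfiguration conf = new YarnConfiguration();
            // Falls back to the built-in default ("0.0.0.0:8030") when yarn-site.xml
            // does not override yarn.resourcemanager.scheduler.address.
            String schedulerAddress = conf.get(
                    YarnConfiguration.RM_SCHEDULER_ADDRESS,
                    YarnConfiguration.DEFAULT_RM_SCHEDULER_ADDRESS);
            System.out.println("AM will contact the RM scheduler at: " + schedulerAddress);
        }
    }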

Furthermore, in fully distributed mode the AM doesn't throw exceptions
anymore, only the Containers do.

@John Lilley
Yes, the problem is with my code (I don't want to imply that it is YARN's
problem). I have successfully run the DistributedShell and YARN's MapReduce
jobs with much bigger datasets than 1 MB ;). I just don't know where to
start looking for the problem, especially for the Container exceptions, as
they occur after my containers are "done" with HDFS (until they store the
final output).

The only "idea" I have is that these exceptions occur during Containers
communication. Instead of sending multiple messages my containers aggregate
all messages per container into one "big" message (the biggest around
8k-10k chars), thus each container sends only 1 message to other container
(which includes multiple messages). I don't know if this information is
important, but I am planning to see what will happen if I partition the
messages (1024). I got this "idea" from the Containers exception "
org.apache.hadoop.hdfs.SocketCache", I am using SocketChannels to send
these "big" messages, so maybe I am creating some Socket "conflict" .

regards
tmp

2013/7/2 John Lilley <[EMAIL PROTECTED]>

>  Blah blah,
>
> Can you build and run the DistributedShell example?  If it does not run
> correctly, this would tend to implicate your configuration.  If it runs
> correctly, then your code is suspect.
>
> John
>
> *From:* blah blah [mailto:[EMAIL PROTECTED]]
> *Sent:* Tuesday, June 25, 2013 6:09 PM
>
> *To:* [EMAIL PROTECTED]
> *Subject:* Yarn HDFS and Yarn Exceptions when processing "larger"
> datasets.
>
>
> Hi All
>
> First let me apologize for the poor thread title, but I have no idea how to
> express the problem in one sentence.
>
> I have implemented a new Application Master with the use of Yarn. I am using
> an old Yarn development version: revision 1437315, from 2013-01-23 (SNAPSHOT
> 3.0.0). I cannot update to the current trunk version, as the prototype
> deadline is soon, and I don't have time to incorporate the Yarn API changes.
>
> Currently I execute experiments in pseudo-distributed mode, using guava
> version 14.0-rc1. I have a problem with Yarn and HDFS exceptions for
> "larger" datasets. My AM works fine and I can execute it without a problem
> for a debug dataset (1 MB in size). But when I increase the size of the
> input to 6.8 MB, I am getting the following exceptions:
>
> AM_Exceptions_Stack
>
> Exception in thread "Thread-3"
> java.lang.reflect.UndeclaredThrowableException
>     at
> org.apache.hadoop.yarn.exceptions.impl.pb.YarnRemoteExceptionPBImpl.unwrapAndThrowException(YarnRemoteExceptionPBImpl.java:135)
>     at
> org.apache.hadoop.yarn.api.impl.pb.client.AMRMProtocolPBClientImpl.allocate(AMRMProtocolPBClientImpl.java:77)
>     at
> org.apache.hadoop.yarn.client.AMRMClientImpl.allocate(AMRMClientImpl.java:194)
>     at
> org.tudelft.ludograph.app.AppMasterContainerRequester.sendContainerAskToRM(AppMasterContainerRequester.java:219)
>     at
> org.tudelft.ludograph.app.AppMasterContainerRequester.run(AppMasterContainerRequester.java:315)
>     at java.lang.Thread.run(Thread.java:662)
> Caused by: com.google.protobuf.ServiceException: java.io.IOException:
> Failed on local exception: java.io.IOException: Response is null.; Host
> Details : local host is: "linux-ljc5.site/127.0.0.1"; destination host
> is: "0.0.0.0":8030;
>     at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:212)
>     at $Proxy10.allocate(Unknown Source)
>     at
> org.apache.hadoop.yarn.api.impl.pb.client.AMRMProtocolPBClientImpl.allocate(AMRMProtocolPBClientImpl.java:75)
>     ... 4 more
> Caused by: java.io.IOException
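
For context on where the trace above originates, a minimal sketch of an AM
heartbeat loop that calls allocate(), written against the released 2.x
AMRMClient API (org.apache.hadoop.yarn.client.api); the poster's January 2013
snapshot uses an older package and slightly different signatures, so treat
this purely as an illustration of the calling pattern in which the "Response
is null" error surfaces:

    import java.util.List;

    import org.apache.hadoop.yarn.api.records.Container;
    import org.apache.hadoop.yarn.api.records.FinalApplicationStatus;
    import org.apache.hadoop.yarn.api.records.Priority;
    import org.apache.hadoop.yarn.api.records.Resource;
    import org.apache.hadoop.yarn.client.api.AMRMClient;
    import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class AllocateHeartbeatSketch {
        public static void main(String[] args) throws Exception {
            YarnConfiguration conf = new YarnConfiguration();
            AMRMClient<ContainerRequest> rmClient = AMRMClient.createAMRMClient();
            rmClient.init(conf);
            rmClient.start();

            // Register with the RM, ask for one container, then heartbeat with allocate().
            rmClient.registerApplicationMaster("", 0, "");
            rmClient.addContainerRequest(new ContainerRequest(
                    Resource.newInstance(1024, 1), null, null, Priority.newInstance(0)));

            boolean done = false;
            while (!done) {
                // This is the call that fails with "Response is null" in the trace above.
                List<Container> allocated = rmClient.allocate(0.1f).getAllocatedContainers();
                if (!allocated.isEmpty()) {
                    done = true; // a real AM would launch its containers here
                }
                Thread.sleep(1000);
            }
            rmClient.unregisterApplicationMaster(FinalApplicationStatus.SUCCEEDED, "", "");
        }
    }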