Hadoop >> mail # user >> Cleanup Attempt in Map Task


Re: Cleanup Attempt in Map Task
One easy way is to increase the timeout by setting mapred.task.timeout in
mapred-site.xml.
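For reference, the setting the reply points to looks like this in mapred-site.xml. The value is in milliseconds; the 600000 (10 minutes) shown here is only an illustrative choice, not a recommendation from the thread:

```xml
<!-- mapred-site.xml: per-task timeout, in milliseconds.
     A task that reports no progress for this long is killed.
     600000 (10 min) is illustrative; 0 disables the timeout entirely. -->
<property>
  <name>mapred.task.timeout</name>
  <value>600000</value>
</property>
```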

On Thu, Jan 28, 2010 at 5:59 PM, #YONG YONG CHENG# <
[EMAIL PROTECTED]> wrote:

> Good Day,
>
> Is there any way to control the cleanup attempt of a failed map task
> without changing the Hadoop platform? I mean doing it in my MapReduce
> application.
>
> I discovered that FileSystem.copyFromLocalFile() sometimes takes a long
> time. Is there any other method in the Hadoop API that I can use to
> transfer my file to HDFS more quickly?
>
> Situation: Each map task in my job finishes very quickly, in under 5
> secs, but it often stalls in FileSystem.copyFromLocalFile(), which can
> take more than 55 secs. Since the machine timeout is 5 secs and the task
> timeout is 1 min, the task fails, and every subsequent attempt fails at
> the same FileSystem.copyFromLocalFile() call.
>
> Thanks. I welcome any solutions.
>
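An alternative to raising the timeout is to keep reporting progress while the long copy runs, so the framework never sees the task as idle. In Hadoop's old API that would mean calling reporter.progress() (context.progress() in the new API) from a background thread while FileSystem.copyFromLocalFile() blocks. The sketch below shows that pattern generically, with a plain Runnable standing in for the Hadoop progress callback; it is an illustration of the technique, not code from this thread:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: keep a task alive during a long blocking call by firing a
// progress callback from a background "heartbeat" thread. In Hadoop the
// callback would be reporter.progress(); here a plain Runnable stands in
// so the example is self-contained.
public class ProgressHeartbeat {

    // Runs blockingWork while invoking reportProgress every intervalMs
    // milliseconds, then stops the heartbeat when the work completes.
    static void runWithHeartbeat(Runnable blockingWork,
                                 Runnable reportProgress,
                                 long intervalMs) {
        Thread heartbeat = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                reportProgress.run();
                try {
                    Thread.sleep(intervalMs);
                } catch (InterruptedException e) {
                    return; // normal shutdown path
                }
            }
        });
        heartbeat.setDaemon(true);
        heartbeat.start();
        try {
            blockingWork.run(); // e.g. fs.copyFromLocalFile(src, dst)
        } finally {
            heartbeat.interrupt();
            try {
                heartbeat.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    public static void main(String[] args) {
        AtomicInteger beats = new AtomicInteger();
        // Simulate a slow copy (300 ms) with a 50 ms heartbeat interval.
        runWithHeartbeat(
            () -> { try { Thread.sleep(300); } catch (InterruptedException e) {} },
            beats::incrementAndGet,
            50);
        System.out.println("heartbeats=" + beats.get());
    }
}
```

The same shape works for any blocking HDFS call that outlasts mapred.task.timeout: start the heartbeat, do the copy, stop the heartbeat in a finally block.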

--
Best Regards

Jeff Zhang