One easy way is to increase the timeout by setting mapred.task.timeout in your job configuration.
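For reference, a minimal sketch of that setting, assuming it is placed in a config file such as mapred-site.xml (the value is milliseconds; 600000 is the usual default, and the number below is just an illustrative bump, not a recommendation):

```xml
<!-- Sketch: raise the per-task timeout so long copyFromLocal() calls
     are not killed. Value is in milliseconds; 0 disables the timeout. -->
<property>
  <name>mapred.task.timeout</name>
  <value>1800000</value> <!-- hypothetical 30-minute timeout -->
</property>
```

The same property can also be set programmatically on the job's Configuration object before submission, if you'd rather not touch cluster-wide config files.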
On Thu, Jan 28, 2010 at 5:59 PM, #YONG YONG CHENG# <
[EMAIL PROTECTED]> wrote:
> Good Day,
> Is there any way to control the cleanup attempt of a failed map task
> without changing the Hadoop platform? I mean doing it in my MapReduce code.
> I discovered that FileSystem.copyFromLocal() sometimes takes a long time.
> Is there any other method in the Hadoop API that I can use to transfer my
> file to HDFS more quickly?
> Situation: Each map task in my job executes very fast, in under 5 secs.
> But it normally hangs at FileSystem.copyFromLocal(), which can take more
> than 55 secs. Since the map itself finishes in 5 secs and the task timeout
> is 1 min, the task fails. Subsequent attempts also fail at the same call.
> Thanks. I welcome any solutions.