Re: common error in map tasks
I'm not aware of any Hadoop-specific meaning for exit code 126.  Typically,
this is a standard Unix exit code used to indicate that a command couldn't
be executed.  Some reasons for this might be that the command is not an
executable file, or the command is an executable file but the user doesn't
have execute permissions.  (See below for an example of each of these.)

Does your job code attempt to exec an external command?  Also, are the task
failures consistently happening on the same set of nodes in your cluster?
If so, then I recommend checking that the command has been deployed and
has the correct permissions on those nodes.
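
If it helps, here is a minimal sketch of that check as a standalone Java
class; the path "/opt/myjob/bin/mycommand" is only a placeholder for whatever
external command your job actually invokes:

import java.io.File;

public class CheckCommand {
    public static void main(String[] args) {
        // Placeholder path -- substitute the command your job execs.
        File cmd = new File("/opt/myjob/bin/mycommand");
        System.out.println("exists:     " + cmd.exists());
        System.out.println("executable: " + cmd.canExecute());
    }
}

Running it on a suspect node as the same user the task JVMs run under should
show whether the file is missing or just not executable.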

Even if your code doesn't exec an external command, various parts of the
Hadoop code do this internally, so you still might have a case of a
misconfigured node.
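
Just to illustrate how that exit status propagates (this is only a sketch,
not Hadoop's actual code path), a parent JVM that launches a non-executable
command through a shell sees the shell's 126 status via Process.waitFor():

import java.io.IOException;

public class ExitCodeDemo {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Placeholder command: a file that exists but has no execute permission.
        Process p = new ProcessBuilder("/bin/sh", "-c", "./exec").start();
        int status = p.waitFor();
        // A POSIX shell reports "found but not executable" as 126, which is
        // the value reported in the "nonzero status of 126" message.
        System.out.println("child exited with status " + status);
    }
}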

Hope this helps,
--Chris

[chris@Chriss-MacBook-Pro:ttys000] hadoop-common
> ./BUILDING.txt
-bash: ./BUILDING.txt: Permission denied
[chris@Chriss-MacBook-Pro:ttys000] hadoop-common
> echo $?
126

[chris@Chriss-MacBook-Pro:ttys000] test
> ls -lrt exec
-rwx------  1 root  staff     0B Apr 22 14:43 exec*
[chris@Chriss-MacBook-Pro:ttys000] test
> whoami
chris
[chris@Chriss-MacBook-Pro:ttys000] test
> ./exec
bash: ./exec: Permission denied
[chris@Chriss-MacBook-Pro:ttys000] test
> echo $?
126

On Mon, Apr 22, 2013 at 2:09 PM, kaveh minooie <[EMAIL PROTECTED]> wrote:

> thanks. That is the issue: there are no other log files. When I go to the
> attempt directory of that failed map task (e.g.
> userlogs/job_201304191712_0015/attempt_201304191712_0015_m_000019_0) it is
> empty; there are no other log files. Though based on the counter value, I
> can say that it happens right at the beginning of the map task (the counter
> is only 1).
>
>
>
>
> On 04/22/2013 02:12 AM, 姚吉龙 wrote:
>
>> Hi
>>
>>
>> I have had the same problem before.
>> I think this is caused by a memory shortage for the map task.
>> It is just a suggestion; you can post your log.
>>
>>
>> BRs
>> Geelong
>> —
>> Sent from Mailbox <https://bit.ly/SZvoJe> for iPhone
>>
>>
>>
>> On Mon, Apr 22, 2013 at 4:34 PM, kaveh minooie <[EMAIL PROTECTED]
>> <mailto:[EMAIL PROTECTED]>> wrote:
>>
>>     Hi
>>
>>     regardless of what job I run, there are always a few map tasks that
>>     fail with the following, very unhelpful, message (that is the
>>     entire error message):
>>
>>     java.lang.Throwable: Child Error
>>         at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
>>     Caused by: java.io.IOException: Task process exit with nonzero status of 126.
>>         at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
>>
>>
>>     I would appreciate it if someone could show me how I could figure
>>     out why this error keeps happening.
>>
>>     thanks,
>>
>>
>>
> --
> Kaveh Minooie
>