Hadoop, mail # general - Lost tasktracker errors


Royston Sellman 2013-01-04, 11:52
Robert Evans 2013-01-04, 14:34
Royston Sellman 2013-01-04, 15:02

Re: Lost tasktracker errors
Robert Evans 2013-01-04, 15:16
This really should be on the user list so I am moving it over there.

It is probably something in the OS that is killing it.  The only thing
that I know of on stock Linux that would do this is the Out of Memory
(OOM) Killer.  Do you have swap enabled on these boxes?  You should check
the OOM killer logs, and if that is the case, reset the box.
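
A quick sanity check, assuming a stock RHEL-style setup like SL6 where
kernel messages end up in dmesg and /var/log/messages (exact locations
may differ on your boxes):

  dmesg | grep -i "out of memory"
  grep -i "killed process" /var/log/messages
  free -m    # shows whether any swap is configured at all

If the OOM killer fired, those log lines name the killed process and its
memory usage at the time.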

--Bobby

On 1/4/13 9:02 AM, "Royston Sellman" <[EMAIL PROTECTED]>
wrote:

>Hi Bobby,
>
>Thanks for the response, Bobby.
>
>The tasktracker logs such as "hadoop-hdfs-tasktracker-hd-37-03.log"
>contained the log messages included in our previous message. They seem to
>show a series of successful map attempts with a few reduce attempts
>interspersed; then, at a certain point, they show only a series of reduce
>attempts that appear to be stuck at the same level of progress, before
>outputting the 143 exit code and the interrupted sleep message at the end.
>
>There is nothing in the tasktracker~.out files...
>
>The machines did not go down, but the affected TTs did not log anything
>until I got up in the morning, saw the job had frozen, and ran stop-all.sh.
>Then the stalled TTs logged the shutdown.
>
>The disks are not full (67% usage across 12 disks per worker).
>
>It seems that the 143 exit code indicates that an external process has
>terminated our tasktracker JVM. Is this correct?
>
>If so, what would the likely suspects be that would terminate our
>tasktrackers? Is it possible this is related to our operating system
>(Scientific Linux 6) and PAM limits?
>
>We had already increased our hard limit on the number of open files for
>the "hdfs" user (that launches hdfs and mapred daemons) to 32768 in the
>hope that this would solve the issue. Can you see anything wrong with our
>security limits:
>
>[hdfs@hd-37-03 hdfs]$ ulimit -aH
>core file size          (blocks, -c) 0
>data seg size           (kbytes, -d) unlimited
>scheduling priority             (-e) 0
>file size               (blocks, -f) unlimited
>pending signals                 (-i) 191988
>max locked memory       (kbytes, -l) 64
>max memory size         (kbytes, -m) unlimited
>open files                      (-n) 32768
>pipe size            (512 bytes, -p) 8
>POSIX message queues     (bytes, -q) 819200
>real-time priority              (-r) 0
>stack size              (kbytes, -s) unlimited
>cpu time               (seconds, -t) unlimited
>max user processes              (-u) unlimited
>virtual memory          (kbytes, -v) unlimited
>file locks                      (-x) unlimited
>
>Thanks for your help.
>
>Royston
>
>On 4 Jan 2013, at 14:34, Robert Evans <[EMAIL PROTECTED]> wrote:
>
>> Is there anything in the task tracker's logs?  Did the machines go down?
>> Are there full disks on those nodes?
>>
>> --Bobby
>>
>> On 1/4/13 5:52 AM, "Royston Sellman" <[EMAIL PROTECTED]>
>> wrote:
>>
>>> I'm running a job over a 380 billion row 20 TB dataset which is
>>> computing sum(), max() etc. The job runs fine at around 3 million rows
>>> per second for several hours, then grinds to a halt as it loses the
>>> tasktrackers one after another.  We see a healthy mix of successful
>>> map and reduce attempts on the tasktracker...
>>>
>>>
>>>
>>> 2013-01-03 23:41:40,249 INFO org.apache.hadoop.mapred.TaskTracker:
>>> attempt_201301031813_0001_m_041109_0 1.0%
>>>
>>> 2013-01-03 23:41:40,256 INFO org.apache.hadoop.mapred.TaskTracker:
>>> attempt_201301031813_0001_m_041105_0 1.0%
>>>
>>> 2013-01-03 23:41:40,260 INFO org.apache.hadoop.mapred.TaskTracker:
>>> attempt_201301031813_0001_m_041105_0 1.0%
>>>
>>> 2013-01-03 23:41:40,261 INFO org.apache.hadoop.mapred.TaskTracker: Task
>>> attempt_201301031813_0001_m_041105_0 is done.
>>>
>>> 2013-01-03 23:41:40,261 INFO org.apache.hadoop.mapred.TaskTracker:
>>> reported
>>> output size for attempt_201301031813_0001_m_041105_0  was 111
>>>
>>> 2013-01-03 23:41:40,261 INFO org.apache.hadoop.mapred.TaskTracker:
>>> addFreeSlot : current free slots : 8
>>>
>>> 2013-01-03 23:41:40,374 INFO org.apache.hadoop.mapred.TaskTracker:
Royston Sellman 2013-01-04, 18:04
Jeff Bean 2013-01-07, 21:03