Re: "java.lang.Throwable: Child Error " And " Task process exit with nonzero status of 1."
We have seen that limit reached when we ran a large number of jobs (3000
strikes me as the figure, but the real number may be higher). It has to do
with the number of files created in a 24-hour period. My colleague who had
the issue created a cron job to clear out the logs every hour.
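Something along those lines (the log path assumes a default
/usr/local/hadoop install, so adjust it for your cluster):

    # crontab entry: every hour, remove task logs older than 60 minutes
    0 * * * * find /usr/local/hadoop/logs/userlogs -mindepth 1 -mmin +60 -delete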

How many jobs are you running in a 24 hour period?

On Mon, Jul 11, 2011 at 1:17 AM, Sudharsan Sampath <[EMAIL PROTECTED]> wrote:

> Hi,
>
> The issue could be attributed to many causes. A few of them are:
>
> 1) Unable to create logs due to insufficient space in the logs directory,
> or a permissions issue.
> 2) A ulimit threshold that causes insufficient allocation of memory.
> 3) OOM on the child or unable to allocate the configured memory while
> spawning the child
> 4) Bug in the child args configuration in the mapred-site
> 5) Unable to write the temp outputs (due to space or permission issue)
>
> The log that you mentioned points to a limit in the file system spec and
> usually occurs in a complex environment. It is highly unlikely to be the
> issue when running the wordcount example.
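>
> Quick ways to check 1) and 2) above (paths assume a default
> /usr/local/hadoop layout; adjust for your cluster):
>
>     ulimit -a                      # per-user limits (open files, max memory)
>     df -h /usr/local/hadoop/logs   # free space in the log directory
>     ls -ld /usr/local/hadoop/logs  # ownership and permissions on it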
>
> Thanks
> Sudhan S
>
>
> On Mon, Jul 11, 2011 at 11:50 AM, Devaraj Das <[EMAIL PROTECTED]> wrote:
>
>> Moving this to mapreduce-user (this is the right list)..
>>
>> Could you please look at the TaskTracker logs around the time you see the
>> task failure? They might have something more useful for debugging..
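>>
>> For example (assuming the default log locations, which vary by install):
>>
>>     # on the node that ran the failed attempt:
>>     less /usr/local/hadoop/logs/hadoop-*-tasktracker-*.log
>>     # per-attempt stdout/stderr/syslog land under userlogs:
>>     ls /usr/local/hadoop/logs/userlogs/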
>>
>>
>> On Jul 10, 2011, at 8:14 PM, Michael Hu wrote:
>>
>> > Hi, all,
>> >    Hadoop is set up. Whenever I run a job, I always get the same error.
>> > Error is:
>> >
>> >    micah29@nc2:/usr/local/hadoop/hadoop$ ./bin/hadoop jar hadoop-mapred-examples-0.21.0.jar wordcount test testout
>> >
>> > 11/07/11 10:48:59 INFO mapreduce.Job: Running job: job_201107111031_0003
>> > 11/07/11 10:49:00 INFO mapreduce.Job:  map 0% reduce 0%
>> > 11/07/11 10:49:11 INFO mapreduce.Job: Task Id : attempt_201107111031_0003_m_000002_0, Status : FAILED
>> > java.lang.Throwable: Child Error
>> >        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:249)
>> > Caused by: java.io.IOException: Task process exit with nonzero status of 1.
>> >        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:236)
>> >
>> > 11/07/11 10:49:11 WARN mapreduce.Job: Error reading task output http://nc2:50060/tasklog?plaintext=true&attemptid=attempt_201107111031_0003_m_000002_0&filter=stdout
>> > 11/07/11 10:49:11 WARN mapreduce.Job: Error reading task output http://nc2:50060/tasklog?plaintext=true&attemptid=attempt_201107111031_0003_m_000002_0&filter=stderr
>> >
>> >    I googled "Task process exit with nonzero status of 1." People say
>> > 'it's an OS limit on the number of sub-directories that can be created
>> > inside another directory.' But I can create sub-directories inside
>> > another directory without any problem.
>> >
>> >    Please, could anybody help me solve this problem? Thanks
>> > --
>> > Yours sincerely
>> > Hu Shengqiu
>>
>>
>
--
Steven M. Lewis PhD
4221 105th Ave NE
Kirkland, WA 98033
206-384-1340 (cell)
Skype lordjoe_com