Given your post in the other thread, I'd say you may have an inconsistency in
the JDK deployed on the cluster. We generally recommend the
Oracle JDK 6 (for Hadoop 1.0.4). More details on which version to pick can be
found at http://wiki.apache.org/hadoop/HadoopJavaVersions. In any
case, the JDK installation has to be consistent across the cluster
(you can check via a package tool such as yum, or by running
"java -version" on each node).
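One quick way to do that check is to collect the version banner from every node and see whether they all match. A minimal sketch, assuming a nodes.txt file listing one hostname per line and passwordless SSH to each node (both are assumptions, not part of this thread):

```shell
# check_consistent: exits 0 only if every line on stdin is identical,
# i.e. every node reported the same "java -version" banner.
check_consistent() {
  [ "$(sort -u | wc -l)" -eq 1 ]
}

# Hypothetical usage (nodes.txt and passwordless SSH are assumed):
#   for n in $(cat nodes.txt); do
#     ssh "$n" 'java -version 2>&1 | head -n 1'
#   done | check_consistent && echo "JDK consistent" || echo "JDK mismatch"
```

If the check fails, `sort -u` run on the same input will show you exactly which differing versions are in play.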
On Mon, Feb 25, 2013 at 7:16 AM, Jean-Marc Spaggiari
<[EMAIL PROTECTED]> wrote:
> Hi Fatih,
> Have you looked in the log files? Anything there?
> 2013/2/24 Fatih Haltas <[EMAIL PROTECTED]>
>> I keep getting the Child Error. I googled it but could not solve the
>> problem; has anyone encountered the same problem before?
>> [hadoop@ADUAE042-LAP-V conf]$ hadoop jar
>> aggregatewordcount /home/hadoop/project/hadoop-data/NetFlow test1614.out
>> Warning: $HADOOP_HOME is deprecated.
>> 13/02/24 15:53:15 INFO mapred.FileInputFormat: Total input paths to
>> process : 1
>> 13/02/24 15:53:15 INFO mapred.JobClient: Running job:
>> 13/02/24 15:53:16 INFO mapred.JobClient: map 0% reduce 0%
>> 13/02/24 15:53:23 INFO mapred.JobClient: Task Id :
>> attempt_201301141457_0048_m_000002_0, Status : FAILED
>> java.lang.Throwable: Child Error
>> at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
>> Caused by: java.io.IOException: Task process exit with nonzero status of
>> at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)