Hadoop, mail # user - RE: Error for Pseudo-distributed Mode


RE: Error for Pseudo-distributed Mode
Vijay Thakorlal 2013-02-12, 14:34
Hi,

 

Could you first try running the example:

$ /usr/bin/hadoop jar /usr/lib/hadoop-0.20-mapreduce/hadoop-examples.jar grep input output 'dfs[a-z.]+'
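As a side note, the 'dfs[a-z.]+' pattern that the example job takes is an ordinary regular expression, so you can preview what it will match with plain grep before running the full job (the sample.txt file and its contents below are made up for illustration):

```shell
# Stand-in for the XML config files that the tutorial copies into "input"
printf '%s\n' 'dfs.replication' 'dfs.namenode.name.dir' 'unrelated.key' > sample.txt

# -o prints each match on its own line, -E enables extended regex syntax
grep -oE 'dfs[a-z.]+' sample.txt
# prints:
# dfs.replication
# dfs.namenode.name.dir
```

The MapReduce grep example applies the same pattern per line of input and writes the match counts to the output directory.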

 

Do you receive the same error?

 

I'm not sure it's related to a lack of RAM; the stack trace shows a network
timeout (even though you're running in pseudo-distributed mode, so all traffic
stays on the loopback interface):

 

Caused by: com.google.protobuf.ServiceException:
java.net.SocketTimeoutException: Call From localhost.localdomain/127.0.0.1
to localhost.localdomain:54113 failed on socket timeout exception:
java.net.SocketTimeoutException: 60000 millis timeout while waiting for
channel to be ready for read. ch : java.nio.channels.SocketChannel[connected
local=/127.0.0.1:60976 remote=localhost.localdomain/127.0.0.1:54113]; For
more details see:  http://wiki.apache.org/hadoop/SocketTimeout
 

Your best bet is probably to start by checking the items mentioned in the
wiki page linked above. While the default firewall rules on CentOS usually
allow pretty much all traffic on the lo interface, it might be worth
temporarily turning off iptables (assuming it is on).
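In case it helps, here is a minimal sketch of those two checks. It assumes a CentOS/RHEL-style SysV init layout (the `service` command); remember to re-enable iptables afterwards if you stop it:

```shell
# The hostname in the stack trace (localhost.localdomain) should resolve to
# loopback; a bad /etc/hosts entry is a classic cause of these timeouts
grep '127.0.0.1' /etc/hosts

# See whether the firewall is running at all before blaming it
service iptables status || echo "iptables not running (or no service command)"

# To rule it out entirely, stop it temporarily (SysV syntax):
#   sudo service iptables stop
```

If the job succeeds with iptables off, add an explicit ACCEPT rule for lo rather than leaving the firewall down.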

 

Vijay

 

 

 

From: yeyu1899 [mailto:[EMAIL PROTECTED]]
Sent: 12 February 2013 12:58
To: [EMAIL PROTECTED]
Subject: Error for Pseudo-distributed Mode

 

Hi all,

I installed Red Hat Enterprise Linux (x86) in VMware Workstation and gave the
virtual machine 1 GB of memory.

 

Then I followed the steps in "Installing CDH4 on a Single Linux Node in
Pseudo-distributed Mode" --
https://ccp.cloudera.com/display/CDH4DOC/Installing+CDH4+on+a+Single+Linux+Node+in+Pseudo-distributed+Mode.

 

Finally, I ran an example Hadoop job with the command "$ hadoop jar
/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar grep input output23
'dfs[a-z.]+'",

 

and the screen showed the output below, reporting
"AttemptID:attempt_1360528029309_0001_r_000000_0 Timed out after 600 secs".
I wonder, is that because my virtual machine has too little memory?
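For what it's worth, that "Timed out after 600 secs" message corresponds to mapreduce.task.timeout, which defaults to 600000 ms; on a memory-starved VM a task can simply fail to report progress within that window. Raising it in mapred-site.xml is a sketch of a workaround only, not a fix for the underlying slowness:

```xml
<!-- mapred-site.xml: raise the task progress timeout from 10 to 30 minutes -->
<property>
  <name>mapreduce.task.timeout</name>
  <value>1800000</value>
</property>
```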

 

[hadoop@localhost hadoop-mapreduce]$ hadoop jar
/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar grep input output23
'dfs[a-z]+'
13/02/11 04:30:44 WARN mapreduce.JobSubmitter: No job jar file set.  User
classes may not be found. See Job or Job#setJar(String).
13/02/11 04:30:44 INFO input.FileInputFormat: Total input paths to process : 4
13/02/11 04:30:45 INFO mapreduce.JobSubmitter: number of splits:4
13/02/11 04:30:45 WARN conf.Configuration: mapred.output.value.class is
deprecated. Instead, use mapreduce.job.output.value.class
13/02/11 04:30:45 WARN conf.Configuration: mapreduce.combine.class is
deprecated. Instead, use mapreduce.job.combine.class
13/02/11 04:30:45 WARN conf.Configuration: mapreduce.map.class is
deprecated. Instead, use mapreduce.job.map.class
13/02/11 04:30:45 WARN conf.Configuration: mapred.job.name is deprecated.
Instead, use mapreduce.job.name        

13/02/11 04:30:45 WARN conf.Configuration: mapreduce.reduce.class is
deprecated. Instead, use mapreduce.job.reduce.class
13/02/11 04:30:45 WARN conf.Configuration: mapred.input.dir is deprecated.
Instead, use mapreduce.input.fileinputformat.inputdir
13/02/11 04:30:45 WARN conf.Configuration: mapred.output.dir is deprecated.
Instead, use mapreduce.output.fileoutputformat.outputdir
13/02/11 04:30:45 WARN conf.Configuration: mapreduce.outputformat.class is
deprecated. Instead, use mapreduce.job.outputformat.class
13/02/11 04:30:45 WARN conf.Configuration: mapred.map.tasks is deprecated.
Instead, use mapreduce.job.maps      

13/02/11 04:30:45 WARN conf.Configuration: mapred.output.key.class is
deprecated. Instead, use mapreduce.job.output.key.class
13/02/11 04:30:45 WARN conf.Configuration: mapred.working.dir is deprecated.
Instead, use mapreduce.job.working.dir
13/02/11 04:30:46 INFO mapred.YARNRunner: Job jar is not present. Not adding
any jar to the list of resources.  

13/02/11 04:30:46 INFO mapred.ResourceMgrDelegate: Submitted application
application_1360528029309_0001 to ResourceManager at /0.0.0.0:8032
13/02/11 04:30:46 INFO mapreduce.Job: The url to track the job:
http://localhost.localdomain:8088/proxy/application_1360528029309_0001/
13/02/11 04:30:46 INFO mapreduce.Job: Running job: job_1360528029309_0001
13/02/11 04:31:01 INFO mapreduce.Job: Job job_1360528029309_0001 running in
uber mode : false                    

13/02/11 04:31:01 INFO mapreduce.Job:  map 0% reduce 0%
13/02/11 04:47:22 INFO mapreduce.Job: Task Id :
attempt_1360528029309_0001_r_000000_0, Status : FAILED          

AttemptID:attempt_1360528029309_0001_r_000000_0 Timed out after 600 secs
cleanup failed for container container_1360528029309_0001_01_000006 :
java.lang.reflect.UndeclaredThrowableException
        at org.apache.hadoop.yarn.exceptions.impl.pb.YarnRemoteExceptionPBImpl.unwrapAndThrowException(YarnRemoteExceptionPBImpl.java:135)
        at org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagerPBClientImpl.stopContainer(ContainerManagerPBClientImpl.java:114)
        at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.kill(ContainerLauncherImpl.java:209)
        at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:394)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
        at java.lang.Thread.run(Thread.java:722)
Caused by: com.google.protobuf.ServiceException:
java.net.SocketTimeoutException: Call From localhost.localdomain/127.0.0.1
to localhost.localdomain:54113 failed on socket timeout exception:
java.net.SocketTimeoutException: 60000 millis timeout while waiting for
channel to be ready for read. ch : java.nio.channels.SocketChannel[connected
local=/127.0.0.1:60976 remote=localhost.localdomain/127.0.0.1:54113]; For