Hadoop user mailing list: Re: Problems


Hi Ke,
            We are still looking at possible complications of the VM environment. I will post whatever we discover.

Thanks for your interest,

Sean

From: ke yuan
Sent: Friday, January 25, 2013 2:45 AM
To: [EMAIL PROTECTED]
Subject: Re: Problems

Is there anything hardware-related going on? I used a ThinkPad T430 and this problem occurs, but on about 100 other machines there is no problem at all. All the machines run Red Hat 6.0, and the JDKs range from 1.5 to 1.6, so I think it has something to do with the hardware. Any ideas?
2013/1/22 Jean-Marc Spaggiari <[EMAIL PROTECTED]>

  Hi Sean,

  Will you be able to run the memtest86 on this VM? Maybe it's an issue
  with the way the VM is managing the memory?

  I ran HBase+Hadoop on a desktop with only 1.5G. So you should not have
  any issue with 6GB.

  I don't think the issue you are facing is related to Hadoop. Can you
  try to run a simple Java application in your JVM, something which will
  use a lot of memory, and see if it works?

  JM
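
A minimal sketch of the kind of memory-stress test suggested above; this is an illustrative addition, not part of the original thread. It assumes nothing beyond the standard library, and the class name MemStress is made up for the example:

    // MemStress.java: allocates ~1 GB in 1 MB chunks and touches each block,
    // so a flaky JVM or memory subsystem has a chance to fail visibly.
    // (Hypothetical example class; Java 1.6 compatible.)
    import java.util.ArrayList;
    import java.util.List;

    public class MemStress {
        public static void main(String[] args) {
            List<byte[]> blocks = new ArrayList<byte[]>();
            for (int i = 1; i <= 1024; i++) {
                byte[] block = new byte[1024 * 1024]; // 1 MB
                block[0] = 1;                         // touch it so it is really committed
                blocks.add(block);
                if (i % 128 == 0) {
                    System.out.println("Allocated " + i + " MB");
                }
            }
            System.out.println("Done: " + blocks.size() + " MB held");
        }
    }

Compile and run it with a large heap, e.g. java -Xmx2g MemStress. A clean OutOfMemoryError would be acceptable; a native crash like the one reported below would point at the JVM or the VM's memory handling rather than at Hadoop.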

  2013/1/22, Sean Hudson <[EMAIL PROTECTED]>:

  > Hi Jean-Marc,
  >                         The Linux machine on which I am attempting to get
  > Hadoop running is actually Linux running in a VM partition. This VM
  > partition had 2 Gigs of RAM when I first encountered the problem. The RAM
  > allocation has since been bumped up to 6 Gigs, but the problem still
  > persists, i.e.
  >
  > bin/hadoop jar hadoop-*-examples.jar grep input output 'dfs[a-z.]+'  still
  > crashes out as before.
  >
  > Is there a minimum RAM size requirement?
  > Will Hadoop run correctly on Linux in a VM partition?
  >
  >                         I had attempted to run Hadoop in Pseudo-Distributed
  > Operation mode, and this included modifying the conf/core-site.xml,
  > conf/hdfs-site.xml and conf/mapred-site.xml files as per the Quick Start
  > instructions. I also formatted a new distributed filesystem as per the
  > instructions. To re-test in Standalone mode with 6 Gigs of RAM, I reversed
  > the changes to the above three .xml files in /conf. However, I don't see a
  > way to back out the distributed filesystem. Will the existence of this
  > distributed filesystem interfere with my Standalone tests?
  >
  > Regards,
  >
  > Sean Hudson
  >
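
For reference, the pseudo-distributed settings Sean describes above are the ones from the Hadoop 1.0 Quick Start. A typical minimal configuration looks like this (reconstructed from the 1.0.4 documentation, not quoted from the thread):

    <!-- conf/core-site.xml -->
    <configuration>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
      </property>
    </configuration>

    <!-- conf/hdfs-site.xml -->
    <configuration>
      <property>
        <name>dfs.replication</name>
        <value>1</value>
      </property>
    </configuration>

    <!-- conf/mapred-site.xml -->
    <configuration>
      <property>
        <name>mapred.job.tracker</name>
        <value>localhost:9001</value>
      </property>
    </configuration>

Reverting these three files to their empty <configuration/> defaults is what returns Hadoop to Standalone (local) mode.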
  > -----Original Message-----
  > From: Jean-Marc Spaggiari
  > Sent: Friday, January 18, 2013 3:24 PM
  > To: [EMAIL PROTECTED]
  > Subject: Re: Problems
  >
  > Hi Sean,
  >
  > It's strange; you should not be facing that. I faced the same kind of
  > issues on a desktop with memory errors. Can you install memtest86 and
  > fully test your memory (one pass is enough) to make sure you don't have
  > issues on that side?
  >
  > 2013/1/18, Sean Hudson <[EMAIL PROTECTED]>:
  >> Leo,
  >>         I downloaded the suggested 1.6.0_32 Java version to my home
  >> directory, but I am still experiencing the same problem (See error
  >> below).
  >> The only thing that I have set in my hadoop-env.sh file is the JAVA_HOME
  >> environment variable. I have also tried it with the Java bin directory
  >> added to PATH.
  >>
  >> export JAVA_HOME=/home/shu/jre1.6.0_32
  >> export PATH=$PATH:/home/shu/jre1.6.0_32/bin
  >>
  >> Every other environment variable is defaulted.
  >>
  >> Just to clarify, I have tried this in Local Standalone mode and also in
  >> Pseudo-Distributed Mode with the same result.
  >>
  >> Frustrating to say the least,
  >>
  >> Sean Hudson
  >>
  >>
  >> shu@meath-nua:~/hadoop-1.0.4> bin/hadoop jar hadoop-examples-1.0.4.jar
  >> grep input output 'dfs[a-z.]+'
  >> #
  >> # A fatal error has been detected by the Java Runtime Environment:
  >> #
  >> #  SIGFPE (0x8) at pc=0xb7fc51fb, pid=23112, tid=3075554208
  >> #
  >> # JRE version: 6.0_32-b05
  >> # Java VM: Java HotSpot(TM) Client VM (20.7-b02 mixed mode, sharing linux-x86 )
  >> # Problematic frame:
  >> # C  [ld-linux.so.2+0x91fb]  double+0xab
  >> #
  >> # An error report file with more information is saved as:
  >> # /home/shu/hadoop-1.0.4/hs_err_pid23112.log
  >> #
  >> # If you would like to submit a bug report, please visit:
  >> #   http://java.sun.com/webapps/bugreport/crash.jsp
  >> # The crash happened outside the Java Virtual Machine in native code.
  >> # See problematic frame for where to report the bug.
  >> #
  >> Aborted
  >>
  >> -----Original Message-----
  >> From: Leo Leung
  >> Sent: Thursday, January 17, 2013 6:46 PM
  >> To: [EMAIL PROTECTED]
  >> Subject: RE: Problems
  >>
  >> Use Sun/Oracle 1.6.0_32 or later; the build should be 20.7-b02 or later.
  >>
  >> 1.7 causes failures and, AFAIK, is not supported, but you are free to
  >> try the latest version and report back.
  >>
  >>
  >>
  >> -----Original Message-----
  >> From: Sean Hudson [mailto:[EMAIL PROTECTED]]
  >> Sent: Thursday, January 17, 2013 6:57 AM
  >> To: [EMAIL PROTECTED]
  >> Subject: Re: Problems
  >>
  >> Hi,
  >>       My Java version is
  >>
  >> java version "1.6.0_25"
  >> Java(TM) SE Runtime Environment (build 1.6.0_25-b06)
  >> Java HotSpot(TM) Client VM (build 20.0-b11, mixed mode, sharing)
  >>
  >> Would you advise obtaining a later Java version?
  >>
  >> Sean
  >>
  >> -----Original Message-----
  >> From: Jean-Marc Spaggiari
  >> Sent: Thursday, January 17, 2013 2:52 PM
  >> To: [EMAIL PROTECTED]
  >> Subject: Re: Problems
  >>
  >> Hi Sean,
  >>
  >> This is an issue with your JVM, not related to Hadoop.
  >>
  >> Which JVM are you using, and can you try with the latest from Sun?
  >>
  >> JM
  >>
  >> 2013/1/17, Sean Hudson <[EMAIL PROTECTED]>:
  >>> Hi,
  >>>       I have recently installed hadoop-1.0.4 on a Linux machine.
  >>> Whilst working through the post-install instructions contained in the
  >>> “Quick Start”
  >>> guide, I incurred the following catastrophic Java runtime error (See
  >>> below).
  >>> I have attached the error report file “hs_err_pid24928.log”. I have
  >>> submitted a Java bug report, but per