Re: Problems
Prashant Sharma 2013-01-25, 12:02
Can you remove native Snappy and try?
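One way to try that (a sketch, assuming Hadoop 1.x, where the hadoop.native.lib property from core-default.xml controls whether the bundled native code, including the Snappy bindings, is loaded; verify the property name against your version) is to switch the native libraries off in conf/core-site.xml:

  <!-- conf/core-site.xml: skip loading the native hadoop library and its codecs -->
  <property>
    <name>hadoop.native.lib</name>
    <value>false</value>
  </property>

Moving the lib/native directory out of the Hadoop installation tree is a cruder way to get the same effect.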
On Fri, Jan 25, 2013 at 5:24 PM, Sean Hudson <[EMAIL PROTECTED]> wrote:

>   Hi Ke,
>             We are still looking at possible complications of the VM
> environment. I will post whatever we discover.
>
> Thanks for your interest,
>
> Sean
>
> From: ke yuan <[EMAIL PROTECTED]>
> Sent: Friday, January 25, 2013 2:45 AM
> To: [EMAIL PROTECTED]
> Subject: Re: Problems
>
> Could this be hardware-related? I used a ThinkPad T430 and hit this
> problem, but on about 100 other machines there was no issue at all. All
> the machines run Red Hat 6.0, with JDK versions from 1.5 to 1.6, so I
> think it has something to do with the hardware. Any ideas?
>
> 2013/1/22 Jean-Marc Spaggiari <[EMAIL PROTECTED]>
>
>> Hi Sean,
>>
>> Will you be able to run memtest86 on this VM? Maybe it's an issue
>> with the way the VM is managing the memory?
>>
>> I ran HBase+Hadoop on a desktop with only 1.5 GB, so you should not have
>> any issue with 6 GB.
>>
>> I don't think the issue you are facing is related to Hadoop. Can you
>> try to run a simple Java application in your JVM? Something which will
>> use a lot of memory, and see if it works? (See the sketch below.)
>>
>> JM
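A minimal sketch of such a memory-stress test (the class name, chunk size, and defaults are illustrative choices, not anything prescribed in the thread):

  import java.util.ArrayList;
  import java.util.List;

  // Allocates memory in 64 MB chunks up to the requested total, touching one
  // byte per 4 KB page so the pages are actually committed. A JVM or VM with
  // broken memory handling should fail here long before Hadoop gets involved.
  public class MemoryStress {
      public static void main(String[] args) {
          int totalMb = args.length > 0 ? Integer.parseInt(args[0]) : 1024;
          final int chunkMb = 64;
          List<byte[]> chunks = new ArrayList<byte[]>();
          for (int allocated = 0; allocated < totalMb; allocated += chunkMb) {
              byte[] chunk = new byte[chunkMb * 1024 * 1024];
              for (int i = 0; i < chunk.length; i += 4096) {
                  chunk[i] = 1;
              }
              chunks.add(chunk);
              System.out.println("Allocated " + (allocated + chunkMb) + " MB");
          }
          System.out.println("OK: held " + (chunks.size() * chunkMb) + " MB without errors");
      }
  }

Compile with javac MemoryStress.java and run with, e.g., java -Xmx2g MemoryStress 1536. If this completes cleanly but the Hadoop grep example still dies, memory pressure is probably not the culprit.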
>>
>> 2013/1/22, Sean Hudson <[EMAIL PROTECTED]>:
>>  > Hi Jean-Marc,
>> >                         The Linux machine on which I am attempting to get
>> > Hadoop running is actually Linux running in a VM partition. This VM
>> > partition had 2 Gigs of RAM when I first encountered the problem. This RAM
>> > allocation has been bumped up to 6 Gigs, but the problem still persists,
>> > i.e.
>> >
>> > bin/hadoop jar hadoop-*-examples.jar grep input output 'dfs[a-z.]+' still
>> > crashes out as before.
>> >
>> > Is there a minimum RAM size requirement?
>> > Will Hadoop run correctly on Linux in a VM partition?
>> >
>> >                         I had attempted to run Hadoop in Pseudo-Distributed
>> > Operation mode and this included modifying the conf/core-site.xml,
>> > conf/hdfs-site.xml and conf/mapred-site.xml files as per the Quick Start
>> > instructions. I also formatted a new distributed filesystem as per the
>> > instructions. To re-test in Standalone mode with 6 Gigs of RAM, I reversed
>> > the changes to the above three .xml files in /conf. However, I don't see a
>> > way to back out the distributed filesystem. Will the existence of this
>> > distributed filesystem interfere with my Standalone tests?
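A sketch of one way to back it out, assuming Hadoop 1.x defaults, where everything "hadoop namenode -format" creates lives under hadoop.tmp.dir (/tmp/hadoop-<username>, unless dfs.name.dir or dfs.data.dir were overridden):

  # stop any daemons left over from the pseudo-distributed test,
  # then delete the directories created by the format
  bin/stop-all.sh
  rm -rf /tmp/hadoop-$USER

Either way, Standalone mode runs entirely against the local filesystem and never opens the formatted HDFS directories, so a leftover format should not interfere with those tests.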
>> >
>> > Regards,
>> >
>> > Sean Hudson
>> >
>> > -----Original Message-----
>> > From: Jean-Marc Spaggiari
>> > Sent: Friday, January 18, 2013 3:24 PM
>> > To: [EMAIL PROTECTED]
>> > Subject: Re: Problems
>> >
>> > Hi Sean,
>> >
>> > It's strange; you should not be facing that. I faced the same kind of
>> > issue on a desktop with memory errors. Can you install memtest86 and
>> > fully test your memory (one pass is enough) to make sure you don't have
>> > issues on that side?
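Since memtest86 normally boots from its own image, which can be awkward to arrange inside a VM partition, a userspace alternative (a substitute suggestion, not something raised in the thread) is the memtester utility:

  # lock and test 1 GB of RAM for one pass; run as root so the pages can be locked
  sudo memtester 1024M 1

Errors reported here would point at the RAM or at the hypervisor's memory handling rather than at Hadoop.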
>> >
>> > 2013/1/18, Sean Hudson <[EMAIL PROTECTED]>:
>> >> Leo,
>> >>         I downloaded the suggested 1.6.0_32 Java version to my home
>> >> directory, but I am still experiencing the same problem (see error below).
>> >> The only thing that I have set in my hadoop-env.sh file is the JAVA_HOME
>> >> environment variable. I have also tried it with the Java directory added
>> >> to PATH.
>> >>
>> >> export JAVA_HOME=/home/shu/jre1.6.0_32
>> >> export PATH=$PATH:/home/shu/jre1.6.0_32
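One caveat worth noting (an aside, not from the thread): for the shell to resolve this JRE's java binary through PATH, it is normally the bin subdirectory that needs appending, e.g.

  export PATH=$PATH:/home/shu/jre1.6.0_32/bin

That said, bin/hadoop launches $JAVA_HOME/bin/java directly, so the JAVA_HOME line above should already be the one that matters.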
>> >>
>> >> Every other environment variable is defaulted.
>> >>
>> >> Just to clarify, I have tried this in Local Standalone mode and also in
>> >> Pseudo-Distributed Mode with the same result.
>> >>
>> >> Frustrating to say the least,
>> >>
>> >> Sean Hudson
>> >>
>> >>
>> >> shu@meath-nua:~/hadoop-1.0.4> bin/hadoop jar hadoop-examples-1.0.4.jar
>> >> grep input output 'dfs[a-z.]+'
>> >> #
>> >> # A fatal error has been detected by the Java Runtime Environment:
>> >> #
>> >> #  SIGFPE (0x8) at pc=0xb7fc51fb, pid=23112, tid=3075554208
