Re: Premature EOF: no length prefix available
> OS is Ubuntu 12.04 and instance type is c1.medium

Eeek!

You shouldn't use less than c1.xlarge for running Hadoop+HBase on EC2. A
c1.medium has only 1.7 GB of RAM in total.
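If memory is that tight, one mitigation (a sketch only; the values are assumptions to tune for your hardware, and under Cloudera Manager the heap is normally set from the manager UI rather than by editing files directly) is to cap the region server heap in hbase-env.sh so the JVM, the DataNode, and the OS all fit in physical RAM:

```shell
# hbase-env.sh (illustrative values for a small instance)
# Keep the HBase heap well under physical RAM so the DataNode
# and the OS page cache still have room.
export HBASE_HEAPSIZE=1000   # MB of JVM heap for HBase daemons
# Restart on OOM rather than limping along with an exhausted heap.
export HBASE_OPTS="$HBASE_OPTS -XX:OnOutOfMemoryError=\"kill -9 %p\""
```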
On Thu, May 2, 2013 at 1:53 PM, Loic Talon <[EMAIL PROTECTED]> wrote:

> Hi Andrew,
> Thanks for those responses.
>
> The server has been deployed by Cloudera Manager.
> OS is Ubuntu 12.04 and instance type is c1.medium.
> Instance store are used, not EBS.
>
> Is it possible that this problem is a memory problem?
> Because when the region server started, I had this in stdout.log:
>
> Thu May  2 17:01:10 UTC 2013
> using /usr/lib/jvm/j2sdk1.6-oracle as JAVA_HOME
> using 4 as CDH_VERSION
> using  as HBASE_HOME
> using /run/cloudera-scm-agent/process/381-hbase-REGIONSERVER as HBASE_CONF_DIR
> using /run/cloudera-scm-agent/process/381-hbase-REGIONSERVER as HADOOP_CONF_DIR
> using  as HADOOP_HOME
>
> But when the problem occurs, I have this in stdout.log:
> Thu May  2 17:01:10 UTC 2013
> using /usr/lib/jvm/j2sdk1.6-oracle as JAVA_HOME
> using 4 as CDH_VERSION
> using  as HBASE_HOME
> using /run/cloudera-scm-agent/process/381-hbase-REGIONSERVER as HBASE_CONF_DIR
> using /run/cloudera-scm-agent/process/381-hbase-REGIONSERVER as HADOOP_CONF_DIR
> using  as HADOOP_HOME
> #
> # java.lang.OutOfMemoryError: Java heap space
> # -XX:OnOutOfMemoryError="kill -9 %p"
> #   Executing /bin/sh -c "kill -9 20140"...
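Those last lines are the JVM's OnOutOfMemoryError handler firing: the heap was exhausted, so the JVM ran the configured command with its own pid substituted for %p and killed itself. Illustratively (a hypothetical invocation, not the exact Cloudera-generated command line), the flag is passed like this:

```shell
# When the heap fills, the JVM runs the quoted command with %p
# replaced by its own pid, killing the process outright instead of
# letting it thrash in garbage collection.
java -Xmx2g \
     -XX:OnOutOfMemoryError="kill -9 %p" \
     org.apache.hadoop.hbase.regionserver.HRegionServer start
```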
>
> Thanks
>
> Loic
>
>  Loïc TALON
>
>
> [EMAIL PROTECTED] <http://teads.tv/>
> Video Ads Solutions
>
>
>
> 2013/5/2 Andrew Purtell <[EMAIL PROTECTED]>
>
> > Every instance type except t1.micro has a certain number of instance
> > storage (locally attached disk) volumes available, 1, 2, or 4 depending
> > on type.
> >
> > As you probably know, you can use or create AMIs backed by
> > instance-store, in which the OS image is constructed on locally attached
> > disk by a parallel fetch process from slices of the root volume image
> > stored in S3, or backed by EBS, in which case the OS image is an EBS
> > volume and attached over the network, like a SAN.
> >
> > If you launch an Amazon Linux instance-store backed instance, the first
> > "ephemeral" local volume will be automatically attached on
> > /media/ephemeral0. That's where that term comes from; it's a synonym for
> > instance-store. (You can, by the way, tell CloudInit via directives sent
> > over instance data to mount all of them.)
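A user-data fragment along these lines (an illustrative sketch; how many ephemeralN volumes exist depends on the instance type) asks cloud-init to mount the extra instance-store volumes:

```yaml
#cloud-config
# Mount the first two instance-store ("ephemeral") volumes.
# cloud-init maps names like "ephemeral0" to the real block device
# for this instance type; entries for devices that don't exist are skipped.
mounts:
  - [ ephemeral0, /media/ephemeral0 ]
  - [ ephemeral1, /media/ephemeral1 ]
```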
> >
> > If you have an EBS backed instance the default is to NOT attach any of
> > these volumes.
> >
> > If you are launching your instance with the Amazon Web console, in the
> > volume configuration part you can set up instance-store aka "ephemeral"
> > mounts whether it is instance-store backed or EBS backed.
> >
> > Sorry I can't get into more background on this. Hope it helps.
> >
> >
> >
> > On Thu, May 2, 2013 at 1:23 PM, Jean-Marc Spaggiari <
> > [EMAIL PROTECTED]> wrote:
> >
> > > Hi Andrew,
> > >
> > > No, this AWS instance is configured with instance stores too.
> > >
> > > What do you mean by "ephemeral"?
> > >
> > > JM
> > >
> > > 2013/5/2 Andrew Purtell <[EMAIL PROTECTED]>
> > >
> > > > Oh, I have faced issues with Hadoop on AWS personally. :-) But not
> > > > this one. I use instance-store aka "ephemeral" volumes for DataNode
> > > > block storage. Are you by chance using EBS?
> > > >
> > > >
> > > > On Thu, May 2, 2013 at 1:10 PM, Jean-Marc Spaggiari <
> > > > [EMAIL PROTECTED]> wrote:
> > > >
> > > > > But that's weird. This instance is running on AWS. If there were
> > > > > issues with Hadoop and AWS, I think some other people would have
> > > > > faced them before me.
> > > > >
> > > > > OK, I will move the discussion to the Hadoop mailing list since it
> > > > > seems to be more related to Hadoop vs. the OS.
> > > > >
> > > > > Thanks,
> > > > >
> > > > > JM
> > > > >
> > > > > 2013/5/2 Andrew Purtell <[EMAIL PROTECTED]>
> > > > >
> > > > > > > 2013-05-02 14:02:41,063 INFO org.apache.hadoop.hdfs.DFSClient:

Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein
(via Tom White)