HBase >> mail # user >> Premature EOF: no length prefix available


Re: Premature EOF: no length prefix available
Just to give some feedback to the list: fsck.ext4 reported some size
estimation errors for the filesystem. Because of that, file creation
failed even though the disk was not full (per df -h), and that was
causing issues for Hadoop.

Everything recovered well after that.
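For anyone hitting something similar, a minimal diagnostic sketch. The device and mount point in the comment (/dev/xvdb, /data) are placeholders, not taken from this thread; run fsck only on an unmounted filesystem:

```shell
# A filesystem can refuse new files even when 'df -h' shows free space,
# e.g. on inode exhaustion or size-accounting errors like fsck found here.
# Compare block usage vs. inode usage on the DataNode's data directory
# (using /tmp here just so the commands run anywhere):
df -h /tmp
df -i /tmp

# If file creation fails with space apparently free, unmount and check:
#   umount /data && fsck.ext4 -f /dev/xvdb && mount /data
# (/dev/xvdb and /data stand in for the real device and mount point.)
```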

JM
On May 2, 2013, at 17:47, "Andrew Purtell" <[EMAIL PROTECTED]> wrote:

> By "expensive" I mean "seriously?"
>
>
> On Thu, May 2, 2013 at 2:32 PM, Michael Segel <[EMAIL PROTECTED]
> >wrote:
>
> >
> > On May 2, 2013, at 4:18 PM, Andrew Purtell <[EMAIL PROTECTED]> wrote:
> >
> > > Sorry, hit send too soon. I would recommend the following instance
> types:
> > >
> > >    hi1.4xlarge: Expensive but it has a comfortable level of resources
> and
> > > will perform
> >
> > Yeah, at a spot price of $3.00 an hour per server?
> > It's expensive and fast. Note that you will want to up the number of slots
> > from the default 2 that are set up. ;-)
> > More tuning is recommended. (Oops! That's for EMR, not just EC2.)
> >
> >
> > >    hs1.8xlarge: This is what you might see in a typical data center
> > Hadoop
> > > deployment, also expensive
> > >    m2.2xlarge/m2.4xlarge: Getting up to the amount of RAM you want for
> > > caching in "big data" workloads
> > >    m1.xlarge: Less CPU but more RAM than c1.xlarge, so safer
> > >    c1.xlarge: Only if you really know what you are doing and need to be
> > > cheap
> > >    Anything lesser endowed: Never
> > >
> > > You may find that, relative to AWS charges for a hi1.4xlarge, some
> other
> > > hosting option for the equivalent would be more economical.
> > >
> > >
> > > On Thu, May 2, 2013 at 2:12 PM, Andrew Purtell <[EMAIL PROTECTED]>
> > wrote:
> > >
> > >>> OS is Ubuntu 12.04 and instance type is c1.medium
> > >>
> > >> Eeek!
> > >>
> > >> You shouldn't use less than c1.xlarge for running Hadoop+HBase on
> EC2. A
> > >> c1.medium has only 7 GB of RAM in total.
> > >>
> > >>
> > >> On Thu, May 2, 2013 at 1:53 PM, Loic Talon <[EMAIL PROTECTED]> wrote:
> > >>
> > >>> Hi Andrew,
> > >>> Thanks for those responses.
> > >>>
> > >>> The server has been deployed by Cloudera Manager.
> > >>> OS is Ubuntu 12.04 and instance type is c1.medium.
> > >>> Instance store are used, not EBS.
> > >>>
> > >>> Is it possible that this is a memory problem?
> > >>> Because when the region server had been started I have in stdout.log:
> > >>>
> > >>> Thu May  2 17:01:10 UTC 2013
> > >>> using /usr/lib/jvm/j2sdk1.6-oracle as JAVA_HOME
> > >>> using 4 as CDH_VERSION
> > >>> using  as HBASE_HOME
> > >>> using /run/cloudera-scm-agent/process/381-hbase-REGIONSERVER as
> > >>> HBASE_CONF_DIR
> > >>> using /run/cloudera-scm-agent/process/381-hbase-REGIONSERVER as
> > >>> HADOOP_CONF_DIR
> > >>> using  as HADOOP_HOME
> > >>>
> > >>> But when the problem occurs, I have in stdout.log:
> > >>> Thu May  2 17:01:10 UTC 2013
> > >>> using /usr/lib/jvm/j2sdk1.6-oracle as JAVA_HOME
> > >>> using 4 as CDH_VERSION
> > >>> using  as HBASE_HOME
> > >>> using /run/cloudera-scm-agent/process/381-hbase-REGIONSERVER as
> > >>> HBASE_CONF_DIR
> > >>> using /run/cloudera-scm-agent/process/381-hbase-REGIONSERVER as
> > >>> HADOOP_CONF_DIR
> > >>> using  as HADOOP_HOME
> > >>> #
> > >>> # java.lang.OutOfMemoryError: Java heap space
> > >>> # -XX:OnOutOfMemoryError="kill -9 %p"
> > >>> #   Executing /bin/sh -c "kill -9 20140"...
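(Editor's note for the archive: that last log block is the JVM's -XX:OnOutOfMemoryError hook firing, i.e. the region server killed its own process after exhausting the heap. As a hedged sketch, on a plain packages/tarball install these knobs live in hbase-env.sh; the 4096 MB figure is purely illustrative, and a Cloudera Manager deployment like the one in this thread sets the heap through CM's own configuration instead:)

```shell
# hbase-env.sh fragment (illustrative values, not a recommendation):
# give the region server a heap that fits the instance's RAM, and keep
# the OOM hook so a half-dead JVM is killed rather than left limping.
export HBASE_HEAPSIZE=4096   # max heap in MB
export HBASE_OPTS="$HBASE_OPTS -XX:OnOutOfMemoryError=\"kill -9 %p\""
```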
> > >>>
> > >>> Thanks
> > >>>
> > >>> Loic
> > >>>
> > >>>
> > >>>
> > >>>
> > >>>
> > >>>
> > >>> Loïc TALON
> > >>>
> > >>>
> > >>> [EMAIL PROTECTED] <http://teads.tv/>
> > >>> Video Ads Solutions
> > >>>
> > >>>
> > >>>
> > >>> 2013/5/2 Andrew Purtell <[EMAIL PROTECTED]>
> > >>>
> > >>>> Every instance type except t1.micro has a certain number of instance
> > >>>> storage (locally attached disk) volumes available, 1, 2, or 4
> > depending
> > >>> on
> > >>>> type.
> > >>>>
> > >>>> As you probably know, you can use or create AMIs backed by
> > >>> instance-store,
> > >>>> in which the OS image is constructed on locally attached disk by a
> > >>> parallel