HBase >> mail # user >> Why does HBase reduce the size of data when writing to the disk


Re: Why does HBase reduce the size of data when writing to the disk
Also, can you please share your table description?

Thanks,

JM
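For anyone following along: both pieces of information requested in this thread can be pulled from the command line. This is a sketch only; `mytable` is a placeholder, not the poster's actual table name, and the commands need a running HBase cluster:

```shell
# 'mytable' is a placeholder -- substitute the actual table name.

# Table description, including COMPRESSION and other column-family
# settings (the description JM asks for above):
echo "describe 'mytable'" | hbase shell

# Count the rows actually stored, to check whether all writes landed
# (the rowcounter run suggested further down the thread):
hbase org.apache.hadoop.hbase.mapreduce.RowCounter 'mytable'
```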

On Sunday, October 13, 2013, Ted Yu wrote:

> Have you run rowcounter and seen 50,000 being reported ?
>
> Cheers
>
>
> On Sun, Oct 13, 2013 at 9:02 AM, Farrokh Shahriari <[EMAIL PROTECTED]> wrote:
>
> > No, it's the default value.
> >
> >
> > On Sun, Oct 13, 2013 at 2:25 AM, lars hofhansl <[EMAIL PROTECTED]> wrote:
> >
> > > Are you setting the timestamp yourself?
> > >
> > >
> > >
> > >
> > > ----- Original Message -----
> > > From: Farrokh Shahriari <[EMAIL PROTECTED]>
> > > To: [EMAIL PROTECTED]; lars hofhansl <[EMAIL PROTECTED]>
> > > Cc: "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
> > > Sent: Saturday, October 12, 2013 12:25 AM
> > > Subject: Re: Why does HBase reduce the size of data when writing to the disk
> > >
> > > @Lars: With simple Put calls in HBase.
> > >
> > >
> > >
> > > On Fri, Oct 11, 2013 at 9:30 PM, lars hofhansl <[EMAIL PROTECTED]> wrote:
> > >
> > > > How do you write the records?
> > > >
> > > >
> > > >
> > > > ________________________________
> > > >  From: Farrokh Shahriari <[EMAIL PROTECTED]>
> > > > To: [EMAIL PROTECTED]
> > > > Cc: "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
> > > > Sent: Friday, October 11, 2013 2:35 AM
> > > > Subject: Re: Why does HBase reduce the size of data when writing to the disk
> > > >
> > > >
> > > > Thanks for your answer.
> > > > I didn't set any specific compression for my table or column family;
> > > > it has the default values.
> > > >
> > > > Best wishes
> > > >
> > > >
> > > >
> > > > On Fri, Oct 11, 2013 at 12:25 PM, Ted Yu <[EMAIL PROTECTED]> wrote:
> > > >
> > > > > Can you tell us the compression settings for the table ?
> > > > >
> > > > > See http://hbase.apache.org/book/compression.html
> > > > >
> > > > > Thanks
> > > > >
> > > > > On Oct 10, 2013, at 11:58 PM, Farrokh Shahriari <[EMAIL PROTECTED]> wrote:
> > > > >
> > > > > > Hi,
> > > > > > There is a record whose size is 30 KB in a txt file. When I
> > > > > > wrote 50,000 such records into an HBase table, the UI showed
> > > > > > that about 400 MB of disk was used (not 1,500,000 KB).
> > > > > > I don't know why! Is this related to compaction or something else?
> > > > > > Thanks for your help.
> > > > > >
> > > > > > Best Regards
> > > > > >
> > > > > > Mohandes Zebeleh
> > > > >
> > > >
> > >
> > >
> >
>
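As a sanity check on the numbers in the original question: 50,000 records at roughly 30 KB each should occupy about 1.46 GB raw, yet the UI reports only ~400 MB. A quick sketch of the expected raw footprint (the record count and size are taken from the question; nothing here queries HBase itself):

```shell
# Expected raw size if all 50,000 records (~30 KB each) reached disk.
records=50000
record_kb=30
total_kb=$((records * record_kb))   # 1500000 KB
total_mb=$((total_kb / 1024))       # ~1464 MB, i.e. about 1.43 GB
echo "${total_kb} KB = ~${total_mb} MB"
```

The gap between ~1.43 GB expected and ~400 MB observed is what the thread is probing: either not every row actually reached disk (hence the rowcounter suggestion), or the on-disk representation is smaller than the raw text size.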