HBase >> mail # user >> Storing extremely large size file


Re: Storing extremely large size file
Thank you all. Practice makes perfect :)

On Tue, Apr 17, 2012 at 5:46 PM, Michael Segel <[EMAIL PROTECTED]> wrote:

> In theory, you could go as large as a region size minus the key and
> overhead. (rows can't span regions)
>
> Realistically you'd want to go much smaller.
>
>
> Sent from my iPhone
>
> On Apr 17, 2012, at 1:49 PM, "Wei Shung Chung" <[EMAIL PROTECTED]> wrote:
>
> > What would be the max affordable size one could have ?
> >
> > Sent from my iPhone
> >
> > On Apr 17, 2012, at 1:42 PM, Dave Revell <[EMAIL PROTECTED]> wrote:
> >
> >> +1 Jack :)
> >>
> >> On Tue, Apr 17, 2012 at 11:38 AM, Stack <[EMAIL PROTECTED]> wrote:
> >>
> >>> On Tue, Apr 17, 2012 at 11:18 AM, Dave Revell <[EMAIL PROTECTED]>
> >>> wrote:
> >>>> I think this is a popular topic that might deserve a section in The
> Book.
> >>>>
> >>>> By "this topic" I mean storing big binary chunks.
> >>>>
> >>>
> >>> Get Jack Levin to write it (smile).
> >>>
> >>> And make sure the values you send over from the
> >>> client are compressed....
> >>>
> >>> St.Ack
> >>>
>
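The compression advice above can be sketched in plain Java. This is a minimal sketch, not code from the thread: it gzips a value client-side before it would be written to HBase. The row key, column family, and qualifier names in the comments are hypothetical, and the actual HBase `Put` is left commented out since it needs a live cluster.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class CompressedValue {

    // Compress a raw value before sending it from the client.
    static byte[] gzip(byte[] raw) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(raw);
        }
        return bos.toByteArray();
    }

    // Decompress a value read back from the table.
    static byte[] gunzip(byte[] compressed) throws IOException {
        try (GZIPInputStream gz =
                new GZIPInputStream(new ByteArrayInputStream(compressed))) {
            return gz.readAllBytes();
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] blob = new byte[1 << 20]; // 1 MB placeholder payload
        byte[] packed = gzip(blob);
        System.out.println("raw=" + blob.length + " compressed=" + packed.length);

        // With an open HBase connection, the compressed bytes would then
        // be stored (names below are hypothetical, not from the thread):
        // Put put = new Put(rowKey);
        // put.addColumn(family, qualifier, packed);
        // table.put(put);
    }
}
```

As the thread notes, the value still has to fit inside a single region (rows cannot span regions), so compression only buys headroom; it does not remove the limit.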