HBase user mailing list — thread: Re: Storing images in Hbase


Michael Segel 2013-01-11, 15:00
Mohammad Tariq 2013-01-11, 15:27
Mohit Anchlia 2013-01-11, 17:40
Jack Levin 2013-01-11, 17:47
Jack Levin 2013-01-11, 17:51
Mohit Anchlia 2013-01-13, 15:47
kavishahuja 2013-01-05, 10:11
谢良 2013-01-06, 03:58
Mohit Anchlia 2013-01-06, 05:45
谢良 2013-01-06, 06:14
Damien Hardy 2013-01-06, 09:35
Yusup Ashrap 2013-01-06, 11:58
Andrew Purtell 2013-01-06, 20:12
Asaf Mesika 2013-01-06, 20:28
Andrew Purtell 2013-01-06, 20:49
Andrew Purtell 2013-01-06, 20:52
Mohit Anchlia 2013-01-06, 21:09
Amandeep Khurana 2013-01-06, 20:33
Marcos Ortiz 2013-01-11, 18:01
Jack Levin 2013-01-13, 16:17
Re: Storing images in Hbase
Hey Jack,

Thanks for the useful information. By flush size being 15%, do you mean
the memstore flush size? 15% would mean close to 1G; have you seen any
issues with flushes taking too long?

Thanks
Varun

On Sun, Jan 13, 2013 at 8:17 AM, Jack Levin <[EMAIL PROTECTED]> wrote:

> That's right, Memstore size, not flush size, is increased.  Filesize is
> 10G. Overall write cache is 60% of heap and read cache is 20%.  Flush size
> is 15%.  64 maxlogs at 128MB. One namenode server, one secondary that can
> be promoted.  On the way to HBase, images are written to a queue, so that we
> can take HBase down for maintenance and still do inserts later.  ImageShack
> has ‘perma cache’ servers that allow writes and serving of data even when
> HBase is down for hours; consider it a 4th replica 😉 outside of Hadoop.
>
> Jack
>
>  *From:* Mohit Anchlia <[EMAIL PROTECTED]>
> *Sent:* January 13, 2013 7:48 AM
> *To:* [EMAIL PROTECTED]
> *Subject:* Re: Storing images in Hbase
>
> Thanks Jack for sharing this information. This definitely makes sense when
> using that type of caching layer. You mentioned increasing the write
> cache; I am assuming you had to increase the following parameters in
> addition to increasing the memstore size:
>
> hbase.hregion.max.filesize
> hbase.hregion.memstore.flush.size
>
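As a rough sketch, the settings discussed in this thread might look as follows in hbase-site.xml. The property names assume an HBase 0.92/0.94-era configuration, and the values are inferred from Jack's description (10G regions, 60% write cache, 20% read cache, 64 maxlogs); treat this as illustrative, not a verified tuning:

```xml
<!-- Illustrative only: values inferred from the thread, not a tested config -->
<property>
  <name>hbase.hregion.max.filesize</name>
  <value>10737418240</value> <!-- 10 GB region split size -->
</property>
<property>
  <name>hbase.regionserver.global.memstore.upperLimit</name>
  <value>0.6</value> <!-- overall write cache: 60% of heap -->
</property>
<property>
  <name>hfile.block.cache.size</name>
  <value>0.2</value> <!-- read (block) cache: 20% of heap -->
</property>
<property>
  <name>hbase.hregion.memstore.flush.size</name>
  <value>1073741824</value> <!-- roughly 15% of heap per the thread; adjust to your heap size -->
</property>
<property>
  <name>hbase.regionserver.maxlogs</name>
  <value>64</value> <!-- 64 WALs at a 128 MB block size -->
</property>
```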
> On Fri, Jan 11, 2013 at 9:47 AM, Jack Levin <[EMAIL PROTECTED]> wrote:
>
> > We buffer all accesses to HBase with a Varnish SSD-based caching layer,
> > so the impact on reads is negligible.  We have a 70-node cluster, 8 GB
> > of RAM per node, relatively weak nodes (Intel Core 2 Duo), with
> > 10-12 TB of disk per server.  We insert 600,000 images per day.  We
> > have relatively little compaction activity because we made our write
> > cache much larger than the read cache, so we don't experience region file
> > fragmentation as much.
> >
> > -Jack
> >
> > On Fri, Jan 11, 2013 at 9:40 AM, Mohit Anchlia <[EMAIL PROTECTED]> wrote:
> > > I think it really depends on the volume of traffic, data distribution per
> > > region, how and when file compaction occurs, and the number of nodes in
> > > the cluster. In my experience, when it comes to blob data where you are
> > > serving tens of thousands of requests/sec, writes and reads, it's very
> > > difficult to manage HBase without very hard operations and maintenance in
> > > play. Jack earlier mentioned they have 1 billion images; it would be
> > > interesting to know what they see in terms of compaction and number of
> > > requests per second. I'd be surprised if a high-volume site could do this
> > > without a caching layer on top to alleviate the I/O spikes that occur
> > > because of GC and compactions.
> > >
> > > On Fri, Jan 11, 2013 at 7:27 AM, Mohammad Tariq <[EMAIL PROTECTED]> wrote:
> > >
> > >> IMHO, if the image files are not too huge, HBase can serve the purpose
> > >> efficiently. You can store some additional info along with the file,
> > >> depending upon your search criteria, to make searches faster. Say you
> > >> want to fetch images by type: you can store the image in one column and
> > >> its extension (jpg, tiff, etc.) in another column.
> > >>
> > >> BTW, what exactly is the problem you are facing? You have written
> > >> "But I still cant do it".
> > >> Warm Regards,
> > >> Tariq
> > >> https://mtariq.jux.com/
> > >>
> > >>
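Tariq's two-column layout above (image bytes in one column, file extension in another, keyed by image id) can be sketched minimally in Python. The table shape and the column names `d:data` and `d:ext` are hypothetical, and a plain dict stands in for the HBase table; real code would issue a Put/Scan against the cluster:

```python
# Sketch of the two-column row layout: one cell for image bytes, one
# for the extension. A dict stands in for an HBase table here.
table = {}  # row key (bytes) -> {column qualifier: value}

def put_image(image_id: str, data: bytes, ext: str) -> None:
    """Store image bytes and extension in separate columns of one row."""
    table[image_id.encode()] = {
        b"d:data": data,          # the raw image bytes
        b"d:ext": ext.encode(),   # e.g. b"jpg", b"tiff"
    }

def images_by_ext(ext: str):
    """Scan-style lookup: row keys whose extension column matches."""
    want = ext.encode()
    return [row for row, cols in table.items() if cols[b"d:ext"] == want]

put_image("img-001", b"\xff\xd8\xff", "jpg")   # JPEG magic bytes as dummy data
put_image("img-002", b"II*\x00", "tiff")       # TIFF magic bytes as dummy data
print(images_by_ext("jpg"))  # [b'img-001']
```

In a real deployment the extension column would typically be paired with a server-side filter or a secondary index rather than a full scan.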
> > >> On Fri, Jan 11, 2013 at 8:30 PM, Michael Segel <[EMAIL PROTECTED]> wrote:
> > >>
> > >> > That's a viable option.
> > >> > HDFS reads are faster than HBase reads, but it would require first
> > >> > hitting the index in HBase, which points to the file, and then fetching
> > >> > the file. It could be faster... we found storing binary data in a
> > >> > sequence file and indexing it in HBase to be faster than HBase alone;
> > >> > however, YMMV, and HBase has been improved since we did that project....
> > >> >
> > >> >
> > >> > On Jan 10, 2013, at 10:56 PM, shashwat shriparv <
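The pattern Segel describes above (binary blobs appended to a sequence-style file on HDFS, with HBase holding only a small index of where each blob lives) can be sketched roughly as follows. This is an assumption-laden illustration: a local byte buffer stands in for the HDFS sequence file and a dict stands in for the HBase index table:

```python
# Rough sketch of "blobs in a sequence file, index in HBase": the big
# binary payloads live in one append-only file; HBase stores only the
# (offset, length) needed to fetch each blob back.
import io

blob_store = io.BytesIO()   # stand-in for an append-only HDFS file
index = {}                  # stand-in for HBase: image id -> (offset, length)

def write_blob(image_id: str, data: bytes) -> None:
    """Append the blob and record its (offset, length) in the index."""
    offset = blob_store.seek(0, io.SEEK_END)  # current end = append position
    blob_store.write(data)
    index[image_id] = (offset, len(data))

def read_blob(image_id: str) -> bytes:
    """Point lookup in the index, then a ranged read from the blob file."""
    offset, length = index[image_id]
    blob_store.seek(offset)
    return blob_store.read(length)

write_blob("a", b"first-image-bytes")
write_blob("b", b"second-image-bytes")
assert read_blob("a") == b"first-image-bytes"
assert read_blob("b") == b"second-image-bytes"
```

The appeal of this design, as the thread notes, is that HBase handles only tiny index cells (cheap to cache and compact) while the sequential blob file avoids region-file fragmentation from large values.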
Jack Levin 2013-01-20, 19:49
Varun Sharma 2013-01-22, 01:10
Varun Sharma 2013-01-22, 01:12
Jack Levin 2013-01-24, 04:53
S Ahmed 2013-01-24, 22:13
Jack Levin 2013-01-25, 07:41
S Ahmed 2013-01-27, 02:00
Jack Levin 2013-01-27, 02:56
yiyu jia 2013-01-27, 15:37
Jack Levin 2013-01-27, 16:56
yiyu jia 2013-01-27, 21:58
Jack Levin 2013-01-28, 04:06
Jack Levin 2013-01-28, 04:16
Andrew Purtell 2013-01-28, 18:58
yiyu jia 2013-01-28, 20:23
Andrew Purtell 2013-01-28, 21:13
yiyu jia 2013-01-28, 21:44
Andrew Purtell 2013-01-28, 21:49
Adrien Mogenet 2013-01-28, 10:01
Jack Levin 2013-01-28, 18:08