Accumulo >> mail # user >> Large Data Size in Row or Value?

David Medinets 2013-04-01, 14:33
Re: Large Data Size in Row or Value?
"What is the largest size that seems to work?"

Tablet servers have been run in 64M JVMs without a problem, so long as
there isn't any other pressure to swap that memory out (such as large
map/reduce jobs).  Since we've been keeping the New Generation size
down ("-XX:NewSize=500m -XX:MaxNewSize=500m"), we haven't seen any
problems with long pauses in the garbage collector.
We may have run them at larger sizes, but not for very long.  The example
configurations are there for setting up a single node in your personal
development space, so the emphasis was on smaller memory footprints.
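The flags quoted above would typically be set in accumulo-env.sh. A sketch only: the ACCUMULO_TSERVER_OPTS variable name matches 1.x-era example configs, and the heap values here (taken from the question's -Xmx3g and the NewSize flags above) are illustrative assumptions, not recommendations:

```shell
# accumulo-env.sh (sketch) -- tserver JVM options.
# -Xmx/-Xms sizes are illustrative; pin the New Generation size to keep
# GC pauses short, as described above.
ACCUMULO_TSERVER_OPTS="-Xmx3g -Xms3g -XX:NewSize=500m -XX:MaxNewSize=500m"
```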

On Mon, Apr 1, 2013 at 10:33 AM, David Medinets <[EMAIL PROTECTED]> wrote:

> I have a chunk of data (let's say 400M) that I want to store in Accumulo.
> I can store the chunk in the ColumnFamily or in the Value. Does it make any
> difference to Accumulo which is used?
> My tserver is set up to use -Xmx3g. What is the largest size that seems to
> work? I have much more that I can allocate.
> Or should I focus on breaking the data into smaller pieces ... say 128M
> each?
> Thanks.
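One way to keep individual Values small, as the question itself suggests, is to split the blob into fixed-size chunks and store each chunk under its own column qualifier (for example, a zero-padded chunk index), writing one put per chunk through a BatchWriter. A minimal, dependency-free sketch of just the splitting step, with the class and method names invented for illustration; the 128M chunk size from the question is replaced by a tiny size for the demo:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: split a large byte array into chunks of at most chunkSize bytes,
// so that no single Accumulo Value holds the whole blob. Each chunk would
// then be written under its own column qualifier (e.g. "chunk-0000").
public class ChunkSplitter {

    // Returns the chunks in order; the last chunk may be shorter.
    static List<byte[]> split(byte[] data, int chunkSize) {
        List<byte[]> chunks = new ArrayList<>();
        for (int off = 0; off < data.length; off += chunkSize) {
            int len = Math.min(chunkSize, data.length - off);
            byte[] chunk = new byte[len];
            System.arraycopy(data, off, chunk, 0, len);
            chunks.add(chunk);
        }
        return chunks;
    }

    public static void main(String[] args) {
        byte[] data = new byte[10];           // stand-in for a 400M blob
        List<byte[]> chunks = split(data, 4); // 4-byte chunks for the demo
        // 10 bytes in 4-byte chunks -> sizes 4, 4, 2
        System.out.println(chunks.size());
    }
}
```

Readers reassemble the value by scanning the row and concatenating the chunks in qualifier order, which keeps any single Key/Value well under the tserver's memory limits.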