Large Data Size in Row or Value?
I have a chunk of data (let's say 400M) that I want to store in Accumulo. I
can store the chunk in the ColumnFamily or in the Value. Does it make any
difference to Accumulo which is used?
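For concreteness, here is a minimal sketch of the two placements being asked about, using the standard Accumulo client API. The table layout, row id, and column names are made up for illustration, not taken from this thread.

```java
import org.apache.accumulo.core.data.Mutation;
import org.apache.accumulo.core.data.Value;
import org.apache.hadoop.io.Text;

public class PlacementSketch {
    // Option 1: the blob carried in the Value, under small fixed row/family/qualifier.
    static Mutation blobInValue(byte[] blob) {
        Mutation m = new Mutation(new Text("doc-0001"));   // made-up row id
        m.put(new Text("data"), new Text("raw"), new Value(blob));
        return m;
    }

    // Option 2: the blob carried in the column family itself;
    // qualifier and value are left empty.
    static Mutation blobInColumnFamily(byte[] blob) {
        Mutation m = new Mutation(new Text("doc-0001"));
        m.put(new Text(blob), new Text(), new Value());
        return m;
    }
}
```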

My tserver is set up with -Xmx3g. What is the largest size that seems to
work? I have much more memory that I could allocate.

Or should I focus on breaking the data into smaller pieces ... say 128M
each?
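A minimal sketch of that chunking approach, assuming the newer (2.x-style) AccumuloClient builder: it splits one large byte[] into fixed-size chunks and writes each chunk as its own entry under the same row, with a zero-padded sequence number in the column qualifier so a scan returns the pieces in order. The table name "blobs", the 1 MB chunk size, and the connection details are placeholders, not anything from this thread.

```java
import org.apache.accumulo.core.client.Accumulo;
import org.apache.accumulo.core.client.AccumuloClient;
import org.apache.accumulo.core.client.BatchWriter;
import org.apache.accumulo.core.data.Mutation;
import org.apache.accumulo.core.data.Value;

import java.util.Arrays;

public class ChunkedIngest {
    static final int CHUNK_SIZE = 1 << 20;   // 1 MB per entry; an arbitrary choice

    public static void writeChunked(AccumuloClient client, String table,
                                    String rowId, byte[] blob) throws Exception {
        try (BatchWriter writer = client.createBatchWriter(table)) {
            int numChunks = (blob.length + CHUNK_SIZE - 1) / CHUNK_SIZE;
            for (int i = 0; i < numChunks; i++) {
                int start = i * CHUNK_SIZE;
                int end = Math.min(start + CHUNK_SIZE, blob.length);
                byte[] chunk = Arrays.copyOfRange(blob, start, end);

                Mutation m = new Mutation(rowId);
                // Zero-padded chunk index in the qualifier keeps lexicographic
                // order equal to numeric order when scanning the row back.
                m.put("data", String.format("chunk-%08d", i), new Value(chunk));
                writer.addMutation(m);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // Connection details are placeholders.
        try (AccumuloClient client = Accumulo.newClient()
                .to("myInstance", "zkhost:2181")
                .as("user", "password").build()) {
            writeChunked(client, "blobs", "doc-0001", new byte[3 * CHUNK_SIZE + 123]);
        }
    }
}
```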

Thanks.