HBase user mailing list: 2 different hbase.hregion.max.filesize at the same time?


Jean-Marc Spaggiari 2012-11-19, 12:29
Re: 2 different hbase.hregion.max.filesize at the same time?
JM,

  You can go into the shell -> disable the table -> run the alter command and
change MAX_FILESIZE (I think that is the right attribute name); this will set
it on a per-table basis.

On Mon, Nov 19, 2012 at 4:29 AM, Jean-Marc Spaggiari <
[EMAIL PROTECTED]> wrote:

> Hi,
>
> I have a 400M-line table that I merged yesterday into a single
> region. I had previously split it incorrectly, so I would like HBase
> to split it its own way.
>
> The issue is that the keys in this table are very small, and the 400M
> rows are stored in a <10G HFile.
>
> I can still use the split option in the HTML interface, but I was
> wondering if there was a way to tell HBase that the max filesize for
> this specific table is 1G while it remains 10G for the other tables?
>
> My goal is to split this table into at least 8 pieces. Worst case,
> since I know the number of lines, I can "simply" scan to every x/8th
> line, note the key, and continue, then do the splits. But is there a
> more "automatic" way to do it?
>
> Thanks,
>
> JM
>

--
Kevin O'Dell
Customer Operations Engineer, Cloudera
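
For the manual route described in the quoted message (note a key roughly
every x/8 lines, then split there), the shell also accepts an explicit split
point, if the version in use supports it; a rough sketch with the same
hypothetical table name and made-up row keys:

  hbase> split 'mytable', 'rowkey_at_1_8th'   # splits the region holding this key at this key
  hbase> split 'mytable', 'rowkey_at_2_8th'   # repeat once per chosen key

Repeating the call for each chosen key yields the 8 pieces without waiting
for size-based splits to kick in.
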
Jean-Marc Spaggiari 2012-11-19, 21:35
Jean-Marc Spaggiari 2012-11-19, 21:47