HBase >> mail # user >> Efficient way to use different storage medium
Re: Efficient way to use different storage medium
Hi,

Interesting topic. There are already JIRAs raised for such a feature, but the work is still in progress:
https://issues.apache.org/jira/browse/HBASE-6572
https://issues.apache.org/jira/browse/HDFS-2832

Regards
Ram
On Tue, Apr 9, 2013 at 10:07 PM, Stack <[EMAIL PROTECTED]> wrote:

> On Tue, Apr 9, 2013 at 4:41 AM, Bing Jiang <[EMAIL PROTECTED]>
> wrote:
>
> > hi,
> >
> > There are some physical machines which each one contains a large ssd(2T)
> > and general disk(4T),
> > and we want to build our hdfs and hbase environment.
> >
>
> What kind of workload do you intend to run on these machines?  Do you have
> enough space running all of your work load on SSD?  At an extreme, you
> could have two clusters -- one running on SSDs for low latency workloads
> and the other on spinning disk -- and perhaps your segregation is such that
> having to copy between the two systems is rare, etc., etc.
>
> St.Ack
>
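For context, the HDFS-2832 work referenced above is about letting datanodes expose multiple storage types so that SSD and spinning disk can coexist in one cluster instead of requiring the two-cluster split Stack describes. A minimal sketch of that kind of configuration, with hypothetical mount paths, tags each data directory with its medium in hdfs-site.xml:

```xml
<!-- hdfs-site.xml on each datanode: tag each directory with its
     storage medium. The mount paths here are hypothetical. -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>[SSD]/mnt/ssd/dfs/data,[DISK]/mnt/hdd/dfs/data</value>
</property>
```

A storage policy can then pin latency-sensitive paths to SSD, e.g. `hdfs storagepolicies -setStoragePolicy -path /hbase/lowlatency -policy ALL_SSD`, while everything else falls through to spinning disk. Note this reflects where the feature ended up, not what was available at the time of this thread, when the work was still in progress.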