HBase >> mail # user >> Deployment Best Practices


Re: Deployment Best Practices
I've used Puppet for the job.  Adobe posted a set of Puppet scripts a while
back that would deploy a tar.gz, and they were my starting point (here:
http://hstack.org/hstack-automated-deployment-using-puppet/).  Those
configs and values are pretty old, so make sure you check the docs for
whatever version you are deploying. Even if you deploy through rpm/deb, I
would highly recommend some sort of configuration management, be it Puppet,
Chef, or CFEngine.  The time taken to learn and set one up will pay itself
back in the long term. There are a lot of moving pieces in a Hadoop MR /
HDFS / HBase cluster, and they will take some tuning.
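
The tarball-plus-Puppet approach described above might look something like the
following minimal manifest sketch. The class name, version number, paths, and
template name are all illustrative assumptions, not taken from the Adobe
scripts; check the docs for your HBase release before reusing any values.

```puppet
# Hypothetical module sketch: deploy an HBase release tarball and manage
# its config. Version, paths, and file names are illustrative assumptions.
class hbase (
  $version     = '0.94.0',
  $install_dir = '/opt',
) {
  $tarball = "hbase-${version}.tar.gz"

  # Stage the release tarball (assumes it is served from the puppet master's
  # hbase module files directory).
  file { "/tmp/${tarball}":
    source => "puppet:///modules/hbase/${tarball}",
  }

  # Unpack once; 'creates' makes the exec idempotent across runs.
  exec { 'unpack-hbase':
    command => "tar -xzf /tmp/${tarball} -C ${install_dir}",
    path    => ['/bin', '/usr/bin'],
    creates => "${install_dir}/hbase-${version}",
    require => File["/tmp/${tarball}"],
  }

  # Version-specific settings belong in a template checked against the docs
  # for the release you are actually deploying.
  file { "${install_dir}/hbase-${version}/conf/hbase-site.xml":
    content => template('hbase/hbase-site.xml.erb'),
    require => Exec['unpack-hbase'],
  }
}
```

The same shape carries over to Chef or CFEngine; the point is that the unpack
step and the config files are described declaratively rather than re-typed per
node.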

On Wed, May 30, 2012 at 2:08 PM, Peter Naudus <[EMAIL PROTECTED]> wrote:

> Hello All,
>
> Is there a "community standard" / "best" way to deploy HBase to a cluster?
> We're in the process of setting up a ~15 node cluster and I'm curious how
> you all go about your deployments. Do you package the code into an RPM,
> place it into a central YUM repository, and then drive the install via
> Puppet? Or do you rsync the code and use shell scripts? Or all the above?
>
> Thanks so much for your input!
>
> ~ Peter
>
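The rsync-and-shell-script option raised in the question can be sketched as a
short loop over a hosts file. This is a hedged sketch, not a recommended
production script: it assumes passwordless SSH to every node, and the
function name, hosts-file format (one hostname per line), and target paths
are all made up for illustration.

```shell
#!/bin/sh
# Sketch: push a release tarball to every node listed in a hosts file
# and unpack it in place. Assumes passwordless SSH; paths are illustrative.
deploy_hbase() {
  tarball=$1      # e.g. hbase-X.Y.Z.tar.gz
  hosts_file=$2   # one hostname per line

  while read -r host; do
    # Copy the tarball, then unpack it on the remote node.
    rsync -az "$tarball" "$host:/opt/"
    ssh "$host" "tar -xzf /opt/$(basename "$tarball") -C /opt"
  done < "$hosts_file"
}
```

This works for a ~15 node cluster, but it has no notion of desired state: a
node that was down during the loop silently stays on the old version, which
is the gap the configuration-management tools in the reply above are meant to
close.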