Theoretically it's possible. But as Edward pointed out, resource management
and configuration become tricky. Also, when you run MapReduce jobs over
tables in the HBase instances, you won't leverage locality, since your data
would not be distributed over the entire cluster (assuming that you run
tasks across all 100 nodes).
Computer Science Graduate Student
University of California, Santa Cruz
On Thu, Apr 29, 2010 at 1:39 PM, Edward Capriolo <[EMAIL PROTECTED]> wrote:
> On Thu, Apr 29, 2010 at 4:31 PM, Michael Segel <[EMAIL PROTECTED]
> > Imagine you have a cloud of 100 hadoop nodes.
> > In theory you could create multiple instances of HBase on the cloud.
> > Obviously I don't think you could have multiple region servers running on
> > the same node.
> > The use case I was thinking about if you have a centralized hadoop cloud
> > and you wanted to have multiple developer groups sharing the cloud as a
> > resource rather than building their own clouds.
> > The reason for the multiple HBase instances is that you don't have a way
> > of setting up multiple instances like different Informix or Oracle
> > databases/schemas on the same infrastructure.
> > Thx
> > -Mike
> HOD (Hadoop on Demand) works like this. You can do this type of thing a few
> ways. You can do virtualization at the OS level. If you look carefully, the
> tools take a --config <confdir> argument. You could also set up all the
> configuration files so that there are no port conflicts (essentially what HOD
> does). This is akin to running multiple instances of Apache or MySQL on your
> nodes. Resource management gets tricky, as do the configuration files, but
> there's nothing technically stopping anyone from doing this.