The option you have enumerated at the end is the current way to set up the
environment. That is, all the client-side configurations will include:
- Logical service names (either for federation or HA)
- The corresponding physical NameNode addresses
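As a concrete illustration, a client-side hdfs-site.xml would carry properties along these lines (the nameservice IDs and host names here are hypothetical, a sketch of the standard HA keys rather than a complete config):

```xml
<!-- Logical nameservices the client knows about -->
<property>
  <name>dfs.nameservices</name>
  <value>hadoop-cluster1,hadoop-cluster2</value>
</property>
<!-- NameNode IDs for the hadoop-cluster2 nameservice -->
<property>
  <name>dfs.ha.namenodes.hadoop-cluster2</name>
  <value>nn1,nn2</value>
</property>
<!-- Physical RPC addresses for each NameNode ID (example hosts) -->
<property>
  <name>dfs.namenode.rpc-address.hadoop-cluster2.nn1</name>
  <value>nn1.cluster2.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.hadoop-cluster2.nn2</name>
  <value>nn2.cluster2.example.com:8020</value>
</property>
```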
For simpler management, one could use XInclude to pull in a shared XML
document that defines all the nameservices and NameNodes.
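For example, each cluster's hdfs-site.xml could include a shared file (the file name here is just an illustration) that holds the nameservice definitions for every cluster, so they are maintained in one place:

```xml
<?xml version="1.0"?>
<configuration xmlns:xi="http://www.w3.org/2001/XInclude">
  <!-- Shared definitions of all nameservices and their NameNodes;
       "all-nameservices.xml" is a hypothetical file name -->
  <xi:include href="all-nameservices.xml"/>
  <!-- cluster-local properties follow -->
</configuration>
```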
On Mon, Nov 4, 2013 at 2:02 PM, lohit <[EMAIL PROTECTED]> wrote:
> Hello Devs,
> With Hadoop 1.0, when there was a single namespace, one could access any
> HDFS cluster using any other Hadoop config. Something like this:
> hadoop --config /path/to/hadoop-cluster1 hdfs://hadoop-cluster2:8020/
> Since the NameNode host and port were passed directly as part of the URI,
> a client with a matching HDFS version could talk to different clusters
> without needing access to each cluster's specific configuration.
> With Hadoop 2.0 or HA mode, we only specify a logical name for the
> NameNode and rely on hdfs-site.xml to resolve that logical name to the two
> underlying NameNodes.
> So, you cannot do something like
> hadoop --config /path/to/hadoop-cluster1
> since /path/to/hadoop-cluster1/hdfs-site.xml does not have information
> about hadoop-cluster2-logicalname's NameNodes.
> One option is to add hadoop-cluster2-logicalname's namenodes to
> /path/to/hadoop-cluster1/hdfs-site.xml. But with many clusters, this
> becomes a problem.
> Is there any other cleaner approach to solving this?
> Have a Nice Day!