HDFS >> mail # dev >> Question regarding access to different hadoop 2.0 cluster


Question regarding access to different hadoop 2.0 cluster
Hello Devs,

With Hadoop 1.0, where there was a single namespace, one could access any
HDFS cluster while using any other Hadoop config, something like this:

hadoop --config /path/to/hadoop-cluster1 hdfs://hadoop-cluster2:8020/

Since NameNode host and port were passed directly as part of URI, if hdfs
client version matched, one could talk to different clusters without
needing to have access to cluster specific configuration.

With Hadoop 2.0 in HA mode, we only specify a logical name for the namenode
and rely on hdfs-site.xml to resolve that logical name to the two underlying
namenode hosts.
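
For reference, the client-side HA resolution in hdfs-site.xml looks roughly
like this (a sketch: nn1/nn2 and the hosts are placeholders, and
hadoop-cluster2-logicalname stands in for the real nameservice ID):

```xml
<!-- Declare the logical nameservice the client will use in URIs -->
<property>
  <name>dfs.nameservices</name>
  <value>hadoop-cluster2-logicalname</value>
</property>
<!-- The two NameNodes behind that logical name -->
<property>
  <name>dfs.ha.namenodes.hadoop-cluster2-logicalname</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.hadoop-cluster2-logicalname.nn1</name>
  <value>nn1-host.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.hadoop-cluster2-logicalname.nn2</name>
  <value>nn2-host.example.com:8020</value>
</property>
<!-- Tells the client how to pick the active NameNode -->
<property>
  <name>dfs.client.failover.proxy.provider.hadoop-cluster2-logicalname</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```

Without these entries in the config directory the client is pointed at, the
logical name in the URI cannot be resolved to any host.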

So, you cannot do something like

hadoop --config /path/to/hadoop-cluster1 hdfs://hadoop-cluster2-logicalname/

since /path/to/hadoop-cluster1/hdfs-site.xml does not have information about
hadoop-cluster2-logicalname's namenodes.
One option is to add hadoop-cluster2-logicalname's namenodes to
/path/to/hadoop-cluster1/hdfs-site.xml, but with many clusters this becomes
a problem.
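
A variant of that option which avoids permanently editing the file is to pass
the same HA properties inline as generic -D options (an untested sketch; the
nameservice ID, nn1/nn2 names, and hosts are placeholders):

```shell
hadoop --config /path/to/hadoop-cluster1 fs \
  -D dfs.nameservices=hadoop-cluster2-logicalname \
  -D dfs.ha.namenodes.hadoop-cluster2-logicalname=nn1,nn2 \
  -D dfs.namenode.rpc-address.hadoop-cluster2-logicalname.nn1=nn1-host:8020 \
  -D dfs.namenode.rpc-address.hadoop-cluster2-logicalname.nn2=nn2-host:8020 \
  -D dfs.client.failover.proxy.provider.hadoop-cluster2-logicalname=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider \
  -ls hdfs://hadoop-cluster2-logicalname/
```

This is still the same per-cluster bookkeeping, just moved to the command
line, so it does not really scale any better than editing hdfs-site.xml.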
Is there any other cleaner approach to solving this?

--
Have a Nice Day!
Lohit