Run multiple HDFS instances (MapReduce user mailing list)


Re: Run multiple HDFS instances
Are you trying to implement something like namespace federation? That's
part of Hadoop 2.0:
http://hadoop.apache.org/docs/r2.0.3-alpha/hadoop-project-dist/hadoop-hdfs/Federation.html
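
For reference, a minimal sketch of what a federated setup looks like in
hdfs-site.xml, assuming two nameservices on one host; the nameservice IDs,
ports, and paths below are illustrative, not taken from this thread:

    # Write an hdfs-site.xml declaring two federated nameservices
    # (IDs "ns1"/"ns2", ports, and the conf path are all hypothetical).
    mkdir -p /srv/hdfs-fed/conf
    cat > /srv/hdfs-fed/conf/hdfs-site.xml <<'EOF'
    <?xml version="1.0"?>
    <configuration>
      <property><name>dfs.nameservices</name><value>ns1,ns2</value></property>
      <property><name>dfs.namenode.rpc-address.ns1</name><value>localhost:8020</value></property>
      <property><name>dfs.namenode.http-address.ns1</name><value>localhost:50070</value></property>
      <property><name>dfs.namenode.rpc-address.ns2</name><value>localhost:8021</value></property>
      <property><name>dfs.namenode.http-address.ns2</name><value>localhost:50071</value></property>
    </configuration>
    EOF

In federation each namenode serves its own slice of the namespace, while
every datanode registers with all of the namenodes.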
On Thu, Apr 18, 2013 at 10:02 PM, Lixiang Ao <[EMAIL PROTECTED]> wrote:

> Actually, I'm trying to do something like combining multiple namenodes so
> that they present themselves to clients as a single namespace, implementing
> basic namenode functionality.
>
> On Thursday, April 18, 2013, Chris Embree wrote:
>
> Glad you got this working... can you explain your use case a little? I'm
>> trying to understand why you might want to do that.
>>
>>
>> On Thu, Apr 18, 2013 at 11:29 AM, Lixiang Ao <[EMAIL PROTECTED]> wrote:
>>
>>> I modified sbin/hadoop-daemon.sh, where HADOOP_PID_DIR is set. It works!
>>>  Everything looks fine now.
>>>
>>> Seems the direct command "hdfs namenode" gives a better sense of control :)
>>>
>>> Thanks a lot.
>>>
>>> On Thursday, April 18, 2013, Harsh J wrote:
>>>
>>> Yes you can, but if you want the scripts to work, you should have them
>>>> use a different PID directory (I think it's called HADOOP_PID_DIR)
>>>> every time you invoke them.
>>>>
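
A sketch of that script-based approach, assuming a separate config directory
has been prepared per instance and that hadoop-env.sh does not override these
variables (all paths hypothetical):

    # Point the stock scripts at per-instance conf, PID, and log directories
    # so a second set of daemons does not trip over the first one's PID files.
    export HADOOP_CONF_DIR=/srv/hdfs2/conf
    export HADOOP_PID_DIR=/srv/hdfs2/pids
    export HADOOP_LOG_DIR=/srv/hdfs2/logs
    sbin/start-dfs.sh
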
>>>> I instead prefer to start the daemons via their direct commands, such
>>>> as "hdfs namenode", and move them to the background with a redirect
>>>> for logging.
>>>>
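
A minimal sketch of that direct-start style (paths and log file names are
hypothetical):

    # Run each daemon via its foreground command, backgrounded, with
    # stdout/stderr redirected to a per-instance log file.
    nohup hdfs --config /srv/hdfs2/conf namenode > /srv/hdfs2/logs/nn.log 2>&1 &
    nohup hdfs --config /srv/hdfs2/conf datanode > /srv/hdfs2/logs/dn.log 2>&1 &
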
>>>> On Thu, Apr 18, 2013 at 2:34 PM, Lixiang Ao <[EMAIL PROTECTED]>
>>>> wrote:
>>>> > Hi all,
>>>> >
>>>> > Can I run multiple HDFS instances, that is, n separate namenodes and n
>>>> > datanodes, on a single machine?
>>>> >
>>>> > I've modified core-site.xml and hdfs-site.xml to avoid port and file
>>>> > conflicts between the HDFS instances (see the config sketch at the end
>>>> > of this thread), but when I started the second HDFS, I got these
>>>> > errors:
>>>> >
>>>> > Starting namenodes on [localhost]
>>>> > localhost: namenode running as process 20544. Stop it first.
>>>> > localhost: datanode running as process 20786. Stop it first.
>>>> > Starting secondary namenodes [0.0.0.0]
>>>> > 0.0.0.0: secondarynamenode running as process 21074. Stop it first.
>>>> >
>>>> > Is there a way to solve this?
>>>> > Thank you in advance,
>>>> >
>>>> > Lixiang Ao
>>>>
>>>>
>>>>
>>>> --
>>>> Harsh J
>>>>
>>>
>>
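
The config sketch referenced above: the kind of per-instance overrides the
original question describes. Every port and path here is illustrative, with
the port numbers simply shifted away from the Hadoop 2.x defaults:

    # Per-instance core-site.xml and hdfs-site.xml for a second HDFS
    # (all values hypothetical; each instance needs unique ports and dirs).
    mkdir -p /srv/hdfs2/conf
    cat > /srv/hdfs2/conf/core-site.xml <<'EOF'
    <?xml version="1.0"?>
    <configuration>
      <property><name>fs.defaultFS</name><value>hdfs://localhost:9001</value></property>
      <property><name>hadoop.tmp.dir</name><value>/srv/hdfs2/tmp</value></property>
    </configuration>
    EOF
    cat > /srv/hdfs2/conf/hdfs-site.xml <<'EOF'
    <?xml version="1.0"?>
    <configuration>
      <property><name>dfs.namenode.name.dir</name><value>/srv/hdfs2/name</value></property>
      <property><name>dfs.datanode.data.dir</name><value>/srv/hdfs2/data</value></property>
      <property><name>dfs.namenode.http-address</name><value>localhost:50170</value></property>
      <property><name>dfs.namenode.secondary.http-address</name><value>localhost:50190</value></property>
      <property><name>dfs.datanode.address</name><value>localhost:50110</value></property>
      <property><name>dfs.datanode.http.address</name><value>localhost:50175</value></property>
      <property><name>dfs.datanode.ipc.address</name><value>localhost:50120</value></property>
    </configuration>
    EOF

With those in place, the second instance can be formatted and started against
its own config directory ("hdfs --config /srv/hdfs2/conf namenode -format").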