Hadoop >> mail # user >> Configure Secondary Namenode


Re: Configure Secondary Namenode
2010/8/18 xiujin yang <[EMAIL PROTECTED]>:
>
> Hi Adarsh,
>
> Please check start-dfs.sh
>
> You will find
>
> "$bin"/hadoop-daemon.sh --config $HADOOP_CONF_DIR start namenode $nameStartOpt
> "$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR start datanode $dataStartOpt
> "$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR --hosts masters start secondarynamenode
>
> By default the secondarynamenode is run on the hosts listed in "masters".
>
> You can edit this script, for example by changing the last line to:
>
> "$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR --hosts secondarynamenode start secondarynamenode
>
>
> Then create a file "conf/secondarynamenode" and list the secondary namenode's hostname in it.
>
>
> Best,
>
>
> Xiujin Yang.
>
>> Date: Wed, 18 Aug 2010 13:08:03 +0530
>> From: [EMAIL PROTECTED]
>> To: [EMAIL PROTECTED]
>> Subject: Configure Secondary Namenode
>>
>> I am not able to find any command or parameter in core-default.xml to
>> configure a secondary namenode on a separate machine.
>> I have a 4-node cluster with the jobtracker, namenode (master), and
>> secondary namenode on one machine, and the remaining 3 machines are slaves.
>> Can anyone please tell me how to do this?
>>
>> Thanks in Advance
>
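The steps quoted above can be sketched as a short shell session. The hostname `snn-host.example.com` is a placeholder for illustration, not a name from this thread:

```shell
# Sketch of the quoted approach (hypothetical hostname and paths).
# 1. List the secondary namenode's host in its own hosts file:
mkdir -p conf
echo "snn-host.example.com" > conf/secondarynamenode

# 2. In bin/start-dfs.sh, point --hosts at that file instead of "masters":
#    "$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR \
#        --hosts secondarynamenode start secondarynamenode
cat conf/secondarynamenode
```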

dfs.http.address defaults to something like localhost:50070.
If you start a secondary namenode on a separate machine from your
namenode, you need to set dfs.http.address to the hostname and port of
your namenode; otherwise the secondary namenode will not know how to
connect to, fetch from, and report back to the namenode.
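A minimal hdfs-site.xml fragment for the secondary namenode's machine might look like the following; the hostname `nn-host.example.com` is a placeholder, and 50070 is the default HTTP port:

```
<!-- hdfs-site.xml on the secondary namenode (hostname is illustrative) -->
<property>
  <name>dfs.http.address</name>
  <value>nn-host.example.com:50070</value>
</property>
```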