If you have installed the Hadoop software in the same location on all
machines, and if you have a common user account on all of them, then there
should be no explicit need to specify anything more on the slaves.
Can you tell us whether the above two conditions are true? If yes, some
more details on what is failing when you run start-dfs.sh will help.
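For reference, here is a minimal sketch of what the start-up scripts assume in a Hadoop 1.x-style deployment. The hostnames, usernames, and paths below are examples, not your actual values:

```shell
# conf/slaves on the namenode lists one datanode hostname per line, e.g.:
#   datanode1
#   datanode2
#
# bin/start-dfs.sh ssh-es into each host in conf/slaves as the SAME user
# that runs it on the namenode, and invokes Hadoop from the SAME install
# path. So the usual checklist is:

# 1. Hadoop unpacked at an identical path on every machine, for example
#    /home/hadoop/hadoop-1.0.x on the namenode and both datanodes.

# 2. Passwordless SSH from the namenode user to each slave:
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
ssh-copy-id hadoop@datanode1    # repeat for datanode2

# 3. JAVA_HOME set in conf/hadoop-env.sh on every node, for example:
#    export JAVA_HOME=/usr/lib/jvm/java-6-sun

# Then, from the namenode:
bin/start-dfs.sh
```

In other words, there is no per-slave setting for the install directory or user account: the scripts derive both from the node that launches them, which is why keeping the layout identical across machines is the simplest fix.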
On Tue, Sep 11, 2012 at 11:27 PM, Richard Tang <[EMAIL PROTECTED]>wrote:
> Hi, All
> I need to setup a hadoop/hdfs cluster with one namenode on a machine and
> two datanodes on two other machines. But after setting the datanode machines
> in the conf/slaves file, running bin/start-dfs.sh cannot start HDFS.
> I am aware that I have not specified the root directory where Hadoop is
> installed on the slave nodes, nor the OS user account that should run
> Hadoop there.
> I am asking how to specify where hadoop/hdfs is locally installed on a
> slave node, and also how to specify the user account used to start HDFS there.