Re: how to specify the root directory of hadoop on slave node?
Hemanth Yamijala 2012-09-12, 04:06
Hi Richard,

If you have installed the hadoop software in the same location on all
machines, and if you have a common user on all the machines, then there
should be no need to specify anything more on the slaves.
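For concreteness, a minimal sketch of that setup (the hostnames slave1/slave2,
the user hduser, and the install path below are placeholders, not taken from
your setup):

  # Same install path on every node (master and both slaves), e.g.:
  #   /home/hduser/hadoop-1.0.3

  # conf/slaves on the namenode machine, one datanode hostname per line:
  #   slave1
  #   slave2

  # Common user plus passwordless ssh from the master, so that
  # bin/start-dfs.sh can launch the datanode daemons remotely:
  ssh-keygen -t rsa -P ""          # run once as the common user on the master
  ssh-copy-id hduser@slave1
  ssh-copy-id hduser@slave2

Roughly speaking, start-dfs.sh ssh-es to each host listed in conf/slaves as
the current user and starts the datanode from the same install path used on
the master, which is why the layout and user need to match.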

Can you tell us whether the above two conditions are true? If yes, some
more details on what is failing when you run start-dfs.sh will help.
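If both conditions hold and it still fails, the datanode log on a slave
usually says why. Something along these lines (the log path shown is the
usual default under the install directory and may differ on your machines):

  bin/start-dfs.sh                      # on the namenode
  ssh slave1 tail -n 50 \
      /home/hduser/hadoop-1.0.3/logs/hadoop-hduser-datanode-slave1.log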

Thanks
Hemanth

On Tue, Sep 11, 2012 at 11:27 PM, Richard Tang <[EMAIL PROTECTED]> wrote:

> Hi, All
> I need to set up a hadoop/hdfs cluster with one namenode on one machine and
> two datanodes on two other machines. But after listing the datanode machines
> in the conf/slaves file, running bin/start-dfs.sh cannot start hdfs
> normally.
> I am aware that I have not specified the root directory where hadoop is
> installed on the slave nodes, nor the OS user account used to run hadoop there.
> How do I specify where hadoop/hdfs is locally installed on a slave node?
> Also, how do I specify the user account used to start hdfs there?
>
> Regards,
> Richard
>