RE: hadoop dfs -ls
Leo Leung 2012-07-13, 16:24
Normally your conf should reside in /etc/hadoop/conf (if you don't have one, copy it from the namenode and keep it in sync).
The hadoop script by default depends on hadoop-setup.sh, which in turn depends on hadoop-env.sh in /etc/hadoop/conf.
Or specify the config dir at runtime:
[hdfs]$ hadoop [--config <path to your config dir>] <commands>
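For instance, a rough sketch of both suggestions (the namenode hostname below is a placeholder, not something from this thread):
[hdfs]$ # copy the client config from the namenode and keep it in sync (rsync assumed available)
[hdfs]$ rsync -av namenode-host:/etc/hadoop/conf/ /etc/hadoop/conf/
[hdfs]$ # or point the hadoop script at an explicit config dir for a single invocation
[hdfs]$ hadoop --config /etc/hadoop/conf dfs -ls /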
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Dave Beech
Sent: Friday, July 13, 2012 6:18 AM
To: [EMAIL PROTECTED]
Subject: Re: hadoop dfs -ls
It's likely that your hadoop command isn't finding the right configuration.
In particular, it doesn't know where your namenode is (the fs.default.name setting in core-site.xml), so it falls back to the local file system default, which is why -ls shows your Linux directories instead of HDFS.
Maybe you need to set the HADOOP_CONF_DIR environment variable to point to your conf directory.
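As a sketch of what that looks like (the hostname and port below are placeholders, not values from this thread):
[hdfs]$ export HADOOP_CONF_DIR=/etc/hadoop/conf
[hdfs]$ cat $HADOOP_CONF_DIR/core-site.xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://your-namenode:9000</value>
  </property>
</configuration>
[hdfs]$ hadoop dfs -ls /    # with fs.default.name set, this lists HDFS rather than the local disk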
On 13 July 2012 14:11, Nitin Pawar <[EMAIL PROTECTED]> wrote:
> I have done this setup numerous times, but this time I did it after a break.
> I managed to get the cluster up and running fine, but when I do hadoop
> dfs -ls /
> it actually shows me the contents of the local Linux file system.
> I am using hadoop-1.0.3 on RHEL 5.6.
> Can anyone suggest what I must have done wrong?
> Nitin Pawar