Where your DataNodes (DNs) and TaskTrackers (TTs) run depends on your configuration. If
you have configured all your slaves to run both processes, then both should
be running on every slave; if they are not, something is definitely wrong.
Could you please check your DataNode logs and see if you find anything
unusual there? And yes, you do have to copy the configuration files across all the machines.
One more thing you can do to cross-check: point your web browser at the
HDFS web UI (by default master_machine:50070 -- port 9000 is usually the
NameNode's RPC port, not the web UI) and see which DataNodes have reported in.
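The two fixes above can be sketched as shell commands. The slave hostnames and install path below are hypothetical placeholders, and the scp commands are echoed as a dry run; substitute your own values and drop the `echo` to actually copy:

```shell
# Hypothetical values -- substitute your own slave hostnames and install path.
SLAVES="slave1 slave2 slave3 slave4"
HADOOP_HOME="${HADOOP_HOME:-/usr/local/hadoop}"

# Push the master's conf directory to every slave so all nodes share the
# same core-site.xml, hdfs-site.xml and mapred-site.xml.
# Echoed here as a dry run; remove 'echo' to perform the copy.
for host in $SLAVES; do
    echo scp -r "$HADOOP_HOME/conf" "$host:$HADOOP_HOME/"
done

# On a slave whose DataNode is missing from jps, scan the DataNode log
# for errors (the log file name embeds the user and hostname):
#   grep -iE 'error|exception' "$HADOOP_HOME"/logs/hadoop-*-datanode-*.log
```

The dry-run loop lets you eyeball the exact commands before touching the slaves, which is handy when the conf directory differs between nodes.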
On Fri, Mar 22, 2013 at 6:44 PM, Munnavar Sk <[EMAIL PROTECTED]> wrote:
> Hi,
> I am new to Hadoop and have been fighting with this for the last 20 days;
> along the way I have picked up some very good material on Hadoop.
> But some questions are still nagging me... I hope I can get the answers
> from your end...!
> I set up a cluster in distributed mode with 5 nodes. I configured the
> NameNode and DataNodes, and the NameNode can log in to all DataNodes
> without a password.
> Hadoop and Java are installed in the same location on all the nodes. After
> starting the cluster, I checked every node with the "jps" command.
> The NameNode shows all of its daemons.
> I did the same on the DataNodes, but some nodes show only the
> TaskTracker running; only one node shows both the DataNode and the TaskTracker running.
> My question is: do the configuration files located in the
> $HADOOP_HOME/conf directory need to be copied to all the nodes?
> And why is the DataNode not running on the remaining nodes?
> Please clarify these doubts so that I can move ahead... :)
> Thank you,
> M Shaik