Re: Can you help me to install HDFS Federation and test?
It should be visible from every NameNode machine. Have you tried this command?

 bin/hdfs dfs -ls /yourdirectoryname/
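In a federated setup it can help to qualify the path with the NameNode you want to ask, rather than relying on the client's default filesystem. A sketch of that check, where nn1-host and nn2-host are illustrative placeholders for the actual NameNode machines (they are not named in this thread):

```shell
# Ask each NameNode explicitly; each one serves only its own namespace.
# nn1-host / nn2-host are placeholder hostnames.
bin/hdfs dfs -ls hdfs://nn1-host:8020/yourdirectoryname/
bin/hdfs dfs -ls hdfs://nn2-host:8020/yourdirectoryname/
```

If the directory only appears in one of the two listings, it was created in that NameNode's namespace.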
On Wed, Sep 18, 2013 at 9:23 AM, Sandeep L <[EMAIL PROTECTED]> wrote:

> Hi,
>
> I resolved the issue.
> There is some problem with /etc/hosts file.
>
> One more question I would like to ask is:
>
> I created a directory in HDFS on NameNode1 and copied a file into it. My
> question is: will it be visible when I run *hadoop fs -ls <PathToDirectory>* from
> the NameNode2 machine?
> For me it is not visible; can you explain in a bit more detail?
>
> Thanks,
> Sandeep.
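In HDFS Federation each NameNode manages an independent namespace, so a directory created through NameNode1 is not automatically visible through NameNode2. One common way to present a unified view to clients is a client-side viewfs mount table. A minimal sketch, assuming two nameservices; the cluster name, hostnames, and mount points below are illustrative placeholders, not taken from this thread:

```xml
<!-- core-site.xml (client side): mount both namespaces under one view.
     clusterX, nn1-host, nn2-host and the /NN1, /NN2 links are placeholders. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>viewfs://clusterX</value>
  </property>
  <property>
    <name>fs.viewfs.mounttable.clusterX.link./NN1</name>
    <value>hdfs://nn1-host:8020/NN1</value>
  </property>
  <property>
    <name>fs.viewfs.mounttable.clusterX.link./NN2</name>
    <value>hdfs://nn2-host:8020/NN2</value>
  </property>
</configuration>
```

With such a mount table, `hadoop fs -ls /NN1/` from either machine resolves to the same NameNode.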
>
>
> ------------------------------
> Date: Tue, 17 Sep 2013 17:56:00 +0530
>
> Subject: Re: Can you help me to install HDFS Federation and test?
> From: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
>
> 1.> Make sure to check the Hadoop logs at
> /home/hadoop/hadoop-version(your)/logs once you start your DataNode.
> 2.> Make sure all the DataNodes are listed in the slaves file, and that the
> slaves file is placed on all machines.
> 3.> For whichever DataNode is not available, check the log file on that
> machine. Are both machines able to do passwordless
> ssh with each other?
> 4.> Check your /etc/hosts file and make sure the IPs of all your node
> machines are listed there.
> 5.> Make sure you have the DataNode folder created as specified in the
> config file.
>
> Let me know if you have any problems.
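Items 2 and 4 in the checklist above can be sketched as plain config fragments; all hostnames and IPs here are placeholders for illustration only:

```
# $HADOOP_HOME/etc/hadoop/slaves -- one DataNode hostname per line,
# identical on every machine (hostnames are placeholders)
datanode1
datanode2

# /etc/hosts -- every node's IP and hostname, present on every machine
192.168.1.11  datanode1
192.168.1.12  datanode2
```

If a DataNode registers with only one NameNode, a missing or inconsistent /etc/hosts entry on one machine is a common cause.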
>
>
> On Tue, Sep 17, 2013 at 2:44 PM, Sandeep L <[EMAIL PROTECTED]> wrote:
>
> Hi,
>
> I tried to install HDFS federation with the help of document given by you.
>
> I have a small issue.
> I used 2 slaves in the setup; both act as NameNode and DataNode.
> Now the issue is that when I look at the home pages of both NameNodes, only
> one DataNode appears.
> As per my understanding, 2 DataNodes should appear on both NameNodes' home
> pages.
>
> Can you please let me know if I am missing anything?
>
> Thanks,
> Sandeep.
>
>
> ------------------------------
> Date: Wed, 11 Sep 2013 15:34:38 +0530
> Subject: Re: Can you help me to install HDFS Federation and test?
> From: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
>
>
> Maybe this can help you ....
>
>
> On Wed, Sep 11, 2013 at 3:07 PM, Oh Seok Keun <[EMAIL PROTECTED]> wrote:
>
> Hello, I am Rho, working in Korea.
>
> I am trying to install HDFS Federation (with the 2.1.0-beta version) and to
> test it.
> After installing 2.1.0 (for the federation test), I ran into a big problem
> when testing a file put.
>
>
> I ran this hadoop command:
> ./bin/hadoop fs -put test.txt /NN1/
>
> and got this error message:
> "put: Renames across FileSystems not supported"
>
> But ./bin/hadoop fs -put test.txt hdfs://namnode:8020/NN1/  is ok
>
> Why does this happen? This is very sad to me ^^
> Can you explain why this happens and give me a solution?
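A likely reading of the error above: `-put` with the unqualified path /NN1/ is resolved against the client's default filesystem, and if that default is a different filesystem than the target HDFS instance, the final rename of the staged file crosses filesystems and fails; qualifying the URI (as in the working command) avoids the mismatch. A hedged sketch of the relevant setting, with the hostname as an illustrative placeholder:

```xml
<!-- core-site.xml: make unqualified paths resolve to the intended
     NameNode. "namenode" is a placeholder hostname. -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://namenode:8020</value>
</property>
```

If fs.defaultFS already points at an HDFS instance, check that it is the same instance that owns the /NN1 directory.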
>
>
> Additionally
>
> NameNode1 has access to its own namespace (named NN1) and NameNode2 has
> access to its own namespace (named NN2).
> When making a directory on the NameNode1 server,
> ./bin/hadoop fs -mkdir /NN1/nn1_org is OK, but ./bin/hadoop fs -mkdir
> /NN2/nn1_org is an error.
>
> The error message is "mkdir: `/NN2/nn1_org': No such file or directory"
>
> I think this is correct.
>
> But on the NameNode2 server,
> ./bin/hadoop fs -mkdir /NN1/nn2_org is OK, but ./bin/hadoop fs -mkdir
> /NN2/nn2_org is an error.
> The error message is "mkdir: `/NN2/nn2_org': No such file or directory"
>
> I think making the directory in NN1 should be the error and making the
> directory in NN2 should be OK.
>
> Why does this happen, and can you give me a solution?
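Consistent with the behavior described above, an unqualified path like /NN2/... is resolved against whichever filesystem the local client config points to, not against "the NameNode of the machine you are logged into". One way to make the target explicit is to qualify each mkdir with the NameNode that owns the namespace; nn1-host and nn2-host below are illustrative placeholders:

```shell
# Send each mkdir to the NameNode that owns that namespace.
# nn1-host / nn2-host are placeholder hostnames, not from this thread.
./bin/hadoop fs -mkdir hdfs://nn1-host:8020/NN1/nn1_org
./bin/hadoop fs -mkdir hdfs://nn2-host:8020/NN2/nn2_org
```

If both commands succeed when fully qualified, the earlier errors point to the local default filesystem setting rather than to the federation setup itself.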
>
>
>
>