Hadoop >> mail # user >> Re: In Compatible clusterIDs


Re: In Compatible clusterIDs
With this /etc/hosts:

127.0.0.1       nagarjuna
255.255.255.255 broadcasthost
::1             localhost
fe80::1%lo0     localhost

and all configuration files pointing to nagarjuna and not localhost gave me
the above error. Whereas with this /etc/hosts:

127.0.0.1       localhost
127.0.0.1       nagarjuna
255.255.255.255 broadcasthost
::1             localhost
fe80::1%lo0     localhost
and all configuration files pointing to localhost and not nagarjuna, I am
able to successfully start the cluster.
Does it have something to do with passwordless SSH?
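The two /etc/hosts variants above differ in whether `localhost` has an IPv4 mapping at all: the first maps it only to the IPv6 addresses `::1` and `fe80::1%lo0`. A minimal sketch of first-match hosts-file resolution (illustrative only, not an exact model of the libc resolver) shows why the two files behave differently; the entries mirror the files quoted above:

```python
# Sketch of first-match /etc/hosts resolution. Illustrative only; the real
# resolver also consults DNS, nsswitch.conf ordering, IPv4/IPv6 preference, etc.

def resolve(hosts_text, name):
    """Return the first address whose entry lists `name`, else None."""
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        addr, *names = line.split()
        if name in names:
            return addr
    return None

# First variant from the thread: no IPv4 entry for localhost at all
broken = """\
127.0.0.1       nagarjuna
255.255.255.255 broadcasthost
::1             localhost
fe80::1%lo0     localhost
"""

# Second variant: localhost also mapped to 127.0.0.1 over IPv4
working = "127.0.0.1       localhost\n" + broken

print(resolve(broken, "localhost"))   # ::1 (IPv6 only)
print(resolve(working, "localhost"))  # 127.0.0.1
print(resolve(broken, "nagarjuna"))   # 127.0.0.1
```

So with the first file, anything resolving `localhost` gets only an IPv6 loopback address, which can leave a daemon bound or registered under an address its peers do not expect.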

On Thu, Feb 21, 2013 at 1:19 AM, Vijay Thakorlal <[EMAIL PROTECTED]>wrote:

> Hi Nagarjuna,
>
> What's in your /etc/hosts file? I think the line in the logs where it says
> "DatanodeRegistration(0.0.0.0 [...]" should show the hostname or IP of the
> datanode (124.123.215.187, since you said it's a pseudo-distributed setup)
> and not 0.0.0.0.
>
> By the way, are you using the dfs.hosts parameter to specify the
> datanodes that can connect to the namenode?
>
> Vijay
>
> *From:* nagarjuna kanamarlapudi [mailto:[EMAIL PROTECTED]]
> *Sent:* 20 February 2013 15:52
> *To:* [EMAIL PROTECTED]
> *Subject:* Re: In Compatible clusterIDs
>
>
> Hi Jean Marc,
>
> Yes, this is the cluster I am trying to create and will then scale up.
>
> As per your suggestion I deleted the folder
> /Users/nagarjunak/Documents/hadoop-install/hadoop-2.0.3-alpha/tmp_20 and
> formatted the cluster.
>
> Now I get the following error:
>
>
> 2013-02-20 21:17:25,668 FATAL
> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for
> block pool Block pool BP-811644675-124.123.215.187-1361375214801 (storage
> id DS-1515823288-124.123.215.187-50010-1361375245435) service to
> nagarjuna/124.123.215.187:9000
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException):
> Datanode denied communication with namenode: DatanodeRegistration(0.0.0.0,
> storageID=DS-1515823288-124.123.215.187-50010-1361375245435,
> infoPort=50075, ipcPort=50020,
> storageInfo=lv=-40;cid=CID-723b02a7-3441-41b5-8045-2a45a9cf96b0;nsid=1805451571;c=0)
>         at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:629)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:3459)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:881)
>         at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:90)
>         at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:18295)
>         at org.apache.hadoop.ipc.Protob
>
>
> On Wed, Feb 20, 2013 at 9:10 PM, Jean-Marc Spaggiari <
> [EMAIL PROTECTED]> wrote:
>
> Hi Nagarjuna,
>
> Is it a test cluster? Do you have another cluster running close-by?
> Also, is it your first try?
>
> It seems there is some previous data in the dfs directory which is not
> in sync with the last installation.
>
> Maybe you can remove the content of
> /Users/nagarjunak/Documents/hadoop-install/hadoop-2.0.3-alpha/tmp_20
> if it's not useful for you, reformat your node and restart it?
>
> JM
>
> 2013/2/20, nagarjuna kanamarlapudi <[EMAIL PROTECTED]>:
>
> > Hi,
> >
> > I am trying to set up a single-node cluster of hadoop 2.0.*
> >
> > When trying to start the datanode I got the following error. Could
> > anyone help me out?
> >
> > Block pool BP-1894309265-124.123.215.187-1361374377471 (storage id
> > DS-1175433225-124.123.215.187-50010-1361374235895) service to
> > nagarjuna/124.123.215.187:9000
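For anyone landing on this thread via the subject line: the "Incompatible clusterIDs" error usually means the datanode's storage directory still holds a `current/VERSION` file from a previous format of the namenode, which is why deleting the tmp directory and reformatting (as suggested above) works. A rough sketch of the comparison involved; the file contents below are illustrative (the namenode clusterID is taken from the log in this thread, the datanode one is made up):

```python
# Sketch of the clusterID check behind "Incompatible clusterIDs".
# Both the namenode and each datanode keep a <storage dir>/current/VERSION
# properties file; after a namenode reformat, the namenode gets a fresh
# clusterID while old datanode storage keeps the previous one.

def parse_version(text):
    """Parse a VERSION file's key=value lines into a dict."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, value = line.split("=", 1)
            props[key] = value
    return props

namenode_version = """\
#Wed Feb 20 21:06:54 IST 2013
namespaceID=1805451571
clusterID=CID-723b02a7-3441-41b5-8045-2a45a9cf96b0
storageType=NAME_NODE
"""

# Hypothetical stale datanode storage from before the reformat
datanode_version = """\
#Wed Feb 20 20:52:57 IST 2013
clusterID=CID-0f6e1d3a-1111-2222-3333-444455556666
storageType=DATA_NODE
"""

nn_cid = parse_version(namenode_version)["clusterID"]
dn_cid = parse_version(datanode_version)["clusterID"]
if nn_cid != dn_cid:
    print("Incompatible clusterIDs: namenode=%s datanode=%s" % (nn_cid, dn_cid))
```

Comparing the two VERSION files by hand is a quick way to confirm this is the problem before wiping any data; if the IDs differ, either clear the datanode's storage directory or reformat, as in the advice above.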