Sqoop >> mail # user >> hadoop namenode problem


Re: hadoop namenode problem
All looks fine to me. Change the line "127.0.1.1" in your hosts file
to "127.0.0.1" and see if it works for you.
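(A minimal sketch of what the corrected /etc/hosts might look like on a single-node setup; the name "your-hostname" below is a placeholder for whatever your machine is actually called. The point is that the machine's hostname resolves to 127.0.0.1 rather than the Debian/Ubuntu default of 127.0.1.1, which Hadoop daemons often fail to bind to correctly.)

```
# /etc/hosts -- sketch for a single-node (pseudo-distributed) Hadoop box.
# "your-hostname" is a placeholder; use the name returned by `hostname`.
127.0.0.1   localhost
127.0.0.1   your-hostname
```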

Regards,
    Mohammad Tariq
On Fri, Jun 15, 2012 at 4:14 PM, soham sardar <[EMAIL PROTECTED]> wrote:
> configuration in the sense i have given the following configs
>
> hdfs-site
>
> <property>
>  <name>dfs.replication</name>
>  <value>1</value>
>  <description>Default block replication.
>  The actual number of replications can be specified when the file is created.
>  The default is used if replication is not specified at create time.
>  </description>
> </property>
>
> core-site
>
> <property>
>  <name>hadoop.tmp.dir</name>
>  <value>/app/hadoop/tmp</value>
>  <description>A base for other temporary directories.</description>
> </property>
>
> <property>
>  <name>fs.default.name</name>
>  <value>hdfs://localhost:54310</value>
>  <description>The name of the default file system.  A URI whose
>  scheme and authority determine the FileSystem implementation.  The
>  uri's scheme determines the config property (fs.SCHEME.impl) naming
>  the FileSystem implementation class.  The uri's authority is used to
>  determine the host, port, etc. for a filesystem.</description>
> </property>
>
> and yarn-site
>
> <property>
>    <name>yarn.resourcemanager.resource-tracker.address</name>
>    <value>localhost:8031</value>
>    <description>host is the hostname of the resource manager and
>    port is the port on which the NodeManagers contact the Resource Manager.
>    </description>
>  </property>
>
>  <property>
>    <name>yarn.resourcemanager.scheduler.address</name>
>    <value>localhost:8030</value>
>    <description>host is the hostname of the resourcemanager and port
> is the port
>    on which the Applications in the cluster talk to the Resource Manager.
>    </description>
>  </property>
>
>  <property>
>    <name>yarn.resourcemanager.scheduler.class</name>
>    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
>    <description>In case you do not want to use the default
> scheduler</description>
>  </property>
>
>  <property>
>    <name>yarn.resourcemanager.address</name>
>    <value>localhost:8032</value>
>    <description>the host is the hostname of the ResourceManager and
> the port is the port on
>    which the clients can talk to the Resource Manager. </description>
>  </property>
>
>  <property>
>    <name>yarn.nodemanager.local-dirs</name>
>    <value></value>
>    <description>the local directories used by the nodemanager</description>
>  </property>
>
>  <property>
>    <name>yarn.nodemanager.address</name>
>    <value>127.0.0.1:8041</value>
>    <description>the nodemanagers bind to this port</description>
>  </property>
>
>  <property>
>    <name>yarn.nodemanager.resource.memory-mb</name>
>    <value>10240</value>
>    <description>the amount of memory on the NodeManager in MB</description>
>  </property>
>
>  <property>
>    <name>yarn.nodemanager.remote-app-log-dir</name>
>    <value>/app-logs</value>
>    <description>directory on hdfs where the application logs are
> moved to </description>
>  </property>
>
>   <property>
>    <name>yarn.nodemanager.log-dirs</name>
>    <value></value>
>    <description>the directories used by Nodemanagers as log
> directories</description>
>  </property>
>
>  <property>
>    <name>yarn.nodemanager.aux-services</name>
>    <value>mapreduce.shuffle</value>
>    <description>shuffle service that needs to be set for Map Reduce
> to run </description>
>  </property>
>
> do i need to make any other changes?
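(As a sanity check on configs like those quoted above, it can help to parse the property/value pairs the same way a client would. A minimal Python sketch, where the inline XML simply reproduces two of the values from the quoted core-site and yarn-site files:)

```python
import xml.etree.ElementTree as ET

# Inline copy of two properties from the configs quoted above.
conf = """
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>localhost:8032</value>
  </property>
</configuration>
"""

root = ET.fromstring(conf)
# Build a name -> value map, like Hadoop's Configuration class does.
props = {p.findtext("name"): p.findtext("value")
         for p in root.findall("property")}

print(props["fs.default.name"])              # hdfs://localhost:54310
print(props["yarn.resourcemanager.address"]) # localhost:8032
```

(Checking the parsed values against the host:port pairs you expect can catch a stray typo in a property name before you chase it through daemon logs.)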
>
>
> On Fri, Jun 15, 2012 at 4:10 PM, Mohammad Tariq <[EMAIL PROTECTED]> wrote:
>> Hi Soham,
>>
>>      Have you mentioned all the necessary properties in the
>> configuration files? Also make sure your hosts file is OK.
>>
>> Regards,
>>     Mohammad Tariq
>>
>>
>> On Fri, Jun 15, 2012 at 3:53 PM, soham sardar <[EMAIL PROTECTED]> wrote:
>>> hey friends !!
>>>
>>> I have downloaded the cdh4 tarballs and kept in a folder and try to