Re: Namenode and Jobtracker don't start
Mohammad Tariq 2012-07-20, 15:44
Hi Macek,

    hadoop.tmp.dir actually belongs in core-site.xml, so it would be better
to move it there.

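For illustration, a minimal sketch of what core-site.xml could look like after the move (hypothetical; it simply reuses the fs.default.name value and the hadoop.tmp.dir path from the configs quoted below):

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://its-cs100:9005</value>
    </property>
    <!-- moved here from hdfs-site.xml -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/work/bmacek/hadoop/hdfs/tmp</value>
    </property>
</configuration>
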
On Friday, July 20, 2012, Björn-Elmar Macek <[EMAIL PROTECTED]> wrote:
> Hi Mohammad,
>
> Thanks for your fast reply. Here they are:
>
> \_____________hadoop-env.sh___
> I added those 2 lines:
>
> # The java implementation to use.  Required.
> export JAVA_HOME=/opt/jdk1.6.0_01/
> export JAVA_OPTS="-Djava.net.preferIPv4Stack=true $JAVA_OPTS"
>
>
> \_____________core-site.xml_____
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
>     <property>
>         <name>fs.default.name</name>
>         <value>hdfs://its-cs100:9005</value>
>     </property>
> </configuration>
>
>
> \_____________hdfs-site.xml____
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- configure data paths for masters and slaves -->
>
> <configuration>
>     <property>
>         <name>dfs.name.dir</name>
>         <value>/home/work/bmacek/hadoop/master</value>
>     </property>
>     <!-- maybe one cannot configure masters and slaves with the same file -->
>     <property>
>         <name>dfs.data.dir</name>
>         <value>/home/work/bmacek/hadoop/hdfs/slave</value>
>     </property>
>     <property>
>         <name>hadoop.tmp.dir</name>
>         <value>/home/work/bmacek/hadoop/hdfs/tmp</value>
>     </property>
>
>     <property>
>         <name>dfs.replication</name>
>         <value>1</value>
>     </property>
> </configuration>
>
>
> \_______mapred-site.xml____
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
>     <!-- master -->
>     <property>
>         <name>mapred.job.tracker</name>
>         <value>its-cs100:9004</value>
>     </property>
>     <!-- datanode -->
>     <property>
>         <name>dfs.hosts</name>
>         <value>/home/fb16/bmacek/hadoop-1.0.2/conf/hosts</value>
>     </property>
>
>     <property>
>         <name>mapred.hosts</name>
>         <value>/home/fb16/bmacek/hadoop-1.0.2/conf/hosts</value>
>     </property>
> </configuration>
>
> \_______masters____
> its-cs101
>
> \_______slaves______
> its-cs102
> its-cs103
>
>
> That's about it, I think. I hope I didn't forget anything.
>
> Regards,
> Björn-Elmar
>
> On 20.07.2012 16:58, Mohammad Tariq wrote:
>
> Hello sir,
>
>        If possible, could you please paste your config files?
>
> Regards,
>      Mohammad Tariq
>
>
> On Fri, Jul 20, 2012 at 8:24 PM, Björn-Elmar Macek
> <[EMAIL PROTECTED]> wrote:
>
> Hi all,
>
> well, I just stumbled upon this post:
>
> http://ankitasblogger.blogspot.de/2012/01/error-that-occured-in-hadoop-and-its.html
>
> And it says:
> "Problem: Hadoop-datanode job failed or datanode not running:
> java.io.IOException: File ../mapred/system/jobtracker.info could only be
> replicated to 0 nodes, instead of 1.
> ...
> Cause: You may also get this message due to permissions. May be JobTracker
> can not create jobtracker.info on startup."
>
> Since the file does not exist, I think this might be a probable reason for
> my errors. But why should the JobTracker not be able to create that file? It
> created several other directories on this node with ease via the slaves.sh
> script that I started with the very same user that calls start-all.sh.
>
> Any help would be really appreciated.
>
>
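As a quick sanity check for the "replicated to 0 nodes" error quoted above, one could verify that a datanode is live at all and that the relevant directories are writable by the user who runs start-all.sh. A minimal sketch, assuming Hadoop 1.0.2's stock bin tools and the paths from the configs above:

# Does the namenode see any live datanodes? Zero live nodes would
# explain "could only be replicated to 0 nodes".
bin/hadoop dfsadmin -report

# Are the expected daemons (NameNode, DataNode, JobTracker) running?
jps

# Is the data directory owned and writable by the starting user?
ls -ld /home/work/bmacek/hadoop/hdfs/slave
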
> On 20.07.2012 16:15, Björn-Elmar Macek wrote:
>
> Hi Srinivas,
>
> thanks for your reply! I have been following your link and idea and have
> been playing around a lot, but I still have problems with the connection
> (though they are different now):
>
> \_______ JAVA VERSION_________
> "which java" tells me it is 1.6.0_01. If I got it right, version 1.7 has
> problems with ssh.
>
> \_______MY TESTS_____________
> According to your suggestion to look for processes running on that port, I
> changed ports a lot:
> When I was posting the first post of this thread, I was using ports 999 for
> [the rest of this message is truncated in the archive; it references
> http://lucene.472066.n3.nabble.com/Unable-to-start-hadoop-0-20-2-but-able-to-start-hadoop-0-20-203-cluster-td2991350.html
> and log output from org.apache.hadoop.metrics2.impl.MetricsConfig]
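
For completeness, a hypothetical way to check whether another process already holds a port before the Hadoop daemons try to bind it (flags assume GNU/Linux; 9005 is the fs.default.name port from core-site.xml above):

# Show listening TCP sockets with owning PIDs, filtered for the port
netstat -tlnp | grep 9005

# Or, equivalently, with lsof
lsof -i :9005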

Regards,
    Mohammad Tariq