HDFS user mailing list: Namenode and Jobtracker dont start


Thread:
Björn-Elmar Macek  2012-07-18, 14:29
Suresh Srinivas  2012-07-18, 17:47
Björn-Elmar Macek  2012-07-20, 14:15
Björn-Elmar Macek  2012-07-20, 14:54
Mohammad Tariq  2012-07-20, 14:58
Björn-Elmar Macek  2012-07-20, 15:38

Re: Namenode and Jobtracker dont start
Hi,

    <property>
        <name>dfs.hosts</name>
        <value>/home/fb16/bmacek/hadoop-1.0.2/conf/hosts</value>
    </property>

This one is probably the cause of all your trouble. It makes the
"hosts" file a whitelist of allowed nodes, so make sure that
"its-cs103.its.uni-kassel.de" really is listed in that file.

Also, dfs.hosts belongs in hdfs-site.xml and mapred.hosts in
mapred-site.xml, but you currently have both of them in the latter.
You should fix that as well.
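
Roughly like this (just a sketch, reusing the path from your current
mapred-site.xml):

    In hdfs-site.xml:
        <property>
            <name>dfs.hosts</name>
            <value>/home/fb16/bmacek/hadoop-1.0.2/conf/hosts</value>
        </property>

    In mapred-site.xml:
        <property>
            <name>mapred.hosts</name>
            <value>/home/fb16/bmacek/hadoop-1.0.2/conf/hosts</value>
        </property>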

Or, if you do not need the whitelisting feature at all, simply remove
both properties and restart.
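
Either way the daemons need a restart to pick up the change, e.g. with
the usual 1.0.x scripts (assuming you start the cluster with them, as
before):

    bin/stop-all.sh
    bin/start-all.sh

If you later only edit the contents of the hosts file itself,
"hadoop dfsadmin -refreshNodes" should be enough on the HDFS side.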

On Fri, Jul 20, 2012 at 9:08 PM, Björn-Elmar Macek
<[EMAIL PROTECTED]> wrote:
> Hi Mohammad,
>
> Thanks for your fast reply. Here they are:
>
> \_____________hadoop-env.sh___
> I added those 2 lines:
>
> # The java implementation to use.  Required.
> export JAVA_HOME=/opt/jdk1.6.0_01/
> export JAVA_OPTS="-Djava.net.preferIPv4Stack=true $JAVA_OPTS"
>
>
> \_____________core-site.xml_____
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
>     <property>
>         <name>fs.default.name</name>
>         <value>hdfs://its-cs100:9005</value>
>     </property>
> </configuration>
>
>
> \_____________hdfs-site.xml____
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- configure data paths for masters and slaves -->
>
> <configuration>
>     <property>
>         <name>dfs.name.dir</name>
>         <value>/home/work/bmacek/hadoop/master</value>
>     </property>
>     <!-- maybe one cannot configure masters and slaves with the same
>     file -->
>     <property>
>         <name>dfs.data.dir</name>
> <value>/home/work/bmacek/hadoop/hdfs/slave</value>
>     </property>
>     <property>
>         <name>hadoop.tmp.dir</name>
> <value>/home/work/bmacek/hadoop/hdfs/tmp</value>
>     </property>
>
>     <property>
>         <name>dfs.replication</name>
>         <value>1</value>
>     </property>
> </configuration>
>
>
> \_______mapred-site.xml____
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
>     <!-- master -->
>     <property>
>         <name>mapred.job.tracker</name>
>         <value>its-cs100:9004</value>
>     </property>
>     <!-- datanode -->
>     <property>
>         <name>dfs.hosts</name>
> <value>/home/fb16/bmacek/hadoop-1.0.2/conf/hosts</value>
>     </property>
>
>     <property>
>         <name>mapred.hosts</name>
> <value>/home/fb16/bmacek/hadoop-1.0.2/conf/hosts</value>
>     </property>
> </configuration>
>
> \_______masters____
> its-cs101
>
> \_______slaves______
> its-cs102
> its-cs103
>
>
> That's about it, I think. I hope I didn't forget anything.
>
> Regards,
> Björn-Elmar
>
> Am 20.07.2012 16:58, schrieb Mohammad Tariq:
>
>> Hello sir,
>>
>>        If possible, could you please paste your config files??
>>
>> Regards,
>>      Mohammad Tariq
>>
>>
>> On Fri, Jul 20, 2012 at 8:24 PM, Björn-Elmar Macek
>> <[EMAIL PROTECTED]> wrote:
>>>
>>> Hi together,
>>>
>>> well just stumbled upon this post:
>>>
>>> http://ankitasblogger.blogspot.de/2012/01/error-that-occured-in-hadoop-and-its.html
>>>
>>> And it says:
>>> "Problem: Hadoop-datanode job failed or datanode not running:
>>> java.io.IOException: File ../mapred/system/jobtracker.info could only be
>>> replicated to 0 nodes, instead of 1.
>>> ...
>>> Cause: You may also get this message due to permissions. May be
>>> JobTracker
>>> can not create jobtracker.info on startup."
>>>
>>> Since the file does not exist, I think this might be a probable
>>> reason for my errors. But why should the JobTracker not be able to
>>> create that file? It created several other directories on this node
>>> with ease via the slave.sh script that I started with the very same
>>> user that calls start-all.sh.
>>>
>>> Any help would be really appreciated.

Harsh J
Mohammad Tariq  2012-07-20, 15:44