HDFS >> mail # user >> Re: Why cannot I start namenode or localhost:50070 ?


Re: Why cannot I start namenode or localhost:50070 ?
Hello Charles,

   Have you added the dfs.name.dir and dfs.data.dir properties to your
hdfs-site.xml file? These properties default to directories under /tmp, so
both the data and the metadata are lost at each restart.
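As a sketch, an hdfs-site.xml with persistent storage locations might look like the following. The name directory reuses the path that appears later in this thread; the data directory is an assumed example, not a value from the thread:

```xml
<?xml version="1.0"?>
<!-- hdfs-site.xml: point the NameNode and DataNode storage at
     persistent directories instead of the defaults under /tmp. -->
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/home/hadoop/hadoopfs/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/home/hadoop/hadoopfs/data</value>
  </property>
</configuration>
```

These property names apply to the 0.20.x line in use here; later releases renamed them (dfs.namenode.name.dir, dfs.datanode.data.dir).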
On Monday, August 27, 2012, Charles AI <[EMAIL PROTECTED]> wrote:
> thank you guys.
> the logs say my dfs.name.dir is not consistent:
> Directory /home/hadoop/hadoopfs/name is in an inconsistent state: storage
> directory does not exist or is not accessible.
> And the namenode starts after "hadoop namenode -format".
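The recovery Charles describes can be sketched as below, assuming the 0.20-era layout from this thread. Note that formatting wipes all existing HDFS metadata, so it is only appropriate on a fresh or disposable cluster:

```shell
# Ensure the configured storage directory exists and is owned by the
# hadoop user (path taken from the error message quoted above).
mkdir -p /home/hadoop/hadoopfs/name
chown -R hadoop:hadoop /home/hadoop/hadoopfs

# Re-initialize the NameNode metadata (DESTROYS any existing HDFS data),
# then start the daemons. Note the leading dash in -format.
hadoop namenode -format
start-all.sh
```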
>
>
> On Mon, Aug 27, 2012 at 3:16 PM, Harsh J <[EMAIL PROTECTED]> wrote:
>>
>> Charles,
>>
>> Can you check your NN logs to see if it is properly up?
>>
>> On Mon, Aug 27, 2012 at 12:33 PM, Charles AI <[EMAIL PROTECTED]> wrote:
>> > Hi All,
>> > I was running a cluster of one master and 4 slaves. I copied the
>> > hadoop_install folder from the master to all 4 slaves, and configured
>> > them well.
>> > However, when I run start-all.sh from the master machine, it shows the
>> > following:
>> >
>> > starting namenode, logging to
>> > /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-namenode-west-desktop.out
>> > slave2: ssh: connect to host slave2 port 22: Connection refused
>> > master: starting datanode, logging to
>> > /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-west-desktop.out
>> > slave4: starting datanode, logging to
>> > /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-hadoop12.out
>> > slave3: starting datanode, logging to
>> > /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-hadoop-desktop.out
>> > slave1: starting datanode, logging to
>> > /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-kong-desktop.out
>> > master: starting secondarynamenode, logging to
>> > /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-secondarynamenode-west-desktop.out
>> > starting jobtracker, logging to
>> > /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-jobtracker-west-desktop.out
>> > slave2: ssh: connect to host slave2 port 22: Connection refused
>> > slave4: starting tasktracker, logging to
>> > /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-hadoop12.out
>> > master: starting tasktracker, logging to
>> > /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-west-desktop.out
>> > slave3: starting tasktracker, logging to
>> > /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-hadoop-desktop.out
>> > slave1: starting tasktracker, logging to
>> > /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-kong-desktop.out
>> >
>> > I know that slave2 is not up, but that should not be the problem. After
>> > this, I typed 'jps' in the master's shell, and it shows:
>> > 6907 Jps
>> > 6306 DataNode
>> > 6838 TaskTracker
>> > 6612 JobTracker
>> > 6533 SecondaryNameNode
>> >
>> > And when I opened the link "localhost:50030", the page said:
>> > master Hadoop Map/Reduce Administration
>> > Quick Links
>> > State: INITIALIZING
>> > Started: Mon Aug 27 14:54:46 CST 2012
>> > Version: 0.20.2, r911707
>> > Compiled: Fri Feb 19 08:07:34 UTC 2010 by chrisdo
>> > Identifier: 201208271454
>> >
>> > I don't quite get what "State: INITIALIZING" means. Additionally, I
>> > cannot open "localhost:50070".
>> >
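The jps listing quoted above has no NameNode entry, which would explain both symptoms: 50070 is the NameNode's web UI port, and the JobTracker reports INITIALIZING while HDFS is unavailable. A hedged way to confirm, with the log filename assumed from the start-all.sh output in this thread:

```shell
# Is a NameNode JVM actually running on the master?
jps | grep NameNode || echo "NameNode is not running"

# If it is missing, the reason is usually near the end of its log:
tail -n 50 /usr/local/hadoop/hadoop/logs/hadoop-hadoop-namenode-west-desktop.log
```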
>> > So, any suggestions?
>> >
>> > Thanks in advance.
>> > CH
>> > --
>> > in a hadoop learning cycle
>>
>>
>>
>> --
>> Harsh J
>
>
>
> --
> in a hadoop learning cycle
>

--
Regards,
    Mohammad Tariq