Hive user mailing list thread: RE: namenode starting error


Other messages in this thread (collapsed):
  Siddharth Tiwari    2012-07-08, 05:32
  yogesh dhari        2012-07-08, 11:14
  soham sardar        2012-06-20, 09:22
  Mohammad Tariq      2012-06-20, 09:26
  soham sardar        2012-06-20, 09:38
  Mohammad Tariq      2012-06-20, 09:50
  soham sardar        2012-06-20, 09:56
  Mohammad Tariq      2012-06-20, 10:07
  praveenesh kumar    2012-06-20, 12:23
  soham sardar        2012-06-21, 07:23
Re: namenode starting error
Have you formatted the namenode after adding those properties? If not,
then do it. Also change the permissions of all the directories to 777,
and use commands with fs and not dfs, like:
$ hadoop fs -ls /

Regards,
    Mohammad Tariq
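
For reference, a minimal sketch of the steps being suggested here, assuming a single-node setup; /app/hadoop/tmp below is a placeholder for whatever directory your hadoop.tmp.dir / dfs.name.dir actually point to, not a path taken from this thread:

  $ sudo chmod -R 777 /app/hadoop/tmp   # loosen permissions while debugging (placeholder path)
  $ hadoop namenode -format             # reformat after changing the storage dirs; erases existing HDFS data
  $ start-dfs.sh                        # or start-all.sh, depending on the install
  $ hadoop fs -ls /                     # fs, not dfs
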
On Thu, Jun 21, 2012 at 12:53 PM, soham sardar
<[EMAIL PROTECTED]> wrote:
> I added the conf given by Mohammad to the core-site and hdfs-site XMLs,
> then looked at the logs under /var/log/hadoop-0.20: there are only the
> .out and .log files for the secondary namenode, datanode, jobtracker and
> tasktracker (i.e. nothing for the namenode). Also, when I try to run
> hadoop dfs -ls
>
> the output is
> soham@XPS-L501X:/var/log/hadoop-0.20$ hadoop dfs -ls
> DEPRECATED: Use of this script to execute hdfs command is deprecated.
> Instead use the hdfs command for it.
>
> ls: `.': No such file or directory
>
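With no path argument, hadoop fs -ls lists the user's HDFS home directory, /user/<username>, and on a fresh cluster that directory does not exist yet, which is what produces the "No such file or directory" above. Creating it clears this particular error; the username below is taken from the shell prompt above and is otherwise an assumption:

  $ hadoop fs -mkdir -p /user/soham   # -p needs Hadoop 2.x; older releases create missing parents by default
  $ hadoop fs -ls                     # should now succeed (an empty listing)
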
> and when I try to run
> hadoop namenode
>
> 12/06/21 12:52:46 INFO namenode.FSNamesystem: fsOwner             = soham (auth:SIMPLE)
> 12/06/21 12:52:46 INFO namenode.FSNamesystem: supergroup          = supergroup
> 12/06/21 12:52:46 INFO namenode.FSNamesystem: isPermissionEnabled = false
> 12/06/21 12:52:46 INFO namenode.FSNamesystem: HA Enabled: false
> 12/06/21 12:52:46 INFO namenode.FSNamesystem: Append Enabled: true
> 12/06/21 12:52:47 INFO namenode.NameNode: Caching file names occuring
> more than 10 times
> 12/06/21 12:52:47 INFO common.Storage: Cannot lock storage /tmp/name.
> The directory is already locked.
> 12/06/21 12:52:47 INFO impl.MetricsSystemImpl: Stopping NameNode
> metrics system...
> 12/06/21 12:52:47 INFO impl.MetricsSystemImpl: NameNode metrics system stopped.
> 12/06/21 12:52:47 INFO impl.MetricsSystemImpl: NameNode metrics system
> shutdown complete.
> 12/06/21 12:52:47 ERROR namenode.NameNode: Exception in namenode join
> java.io.IOException: Cannot lock storage /tmp/name. The directory is
> already locked.
>        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:580)
>        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:429)
>        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:264)
>        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:180)
>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:498)
>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:390)
>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:354)
>        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:389)
>        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:423)
>        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:590)
>        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:571)
>        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1134)
>        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1193)
> 12/06/21 12:52:47 INFO namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at XPS-L501X/127.0.0.1
> ************************************************************/
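
"Cannot lock storage /tmp/name. The directory is already locked" means another process already holds the in_use.lock file in that storage directory, or a crashed NameNode left a stale lock behind. The earlier hadoop dfs -ls did get an answer from a NameNode, which suggests one is already running, perhaps started as a system service. A quick way to check, sketched on the assumption of a single-node setup:

  $ jps                          # is a NameNode process already running?
  $ stop-all.sh                  # if so, stop it cleanly (or stop-dfs.sh / the service scripts)
  $ ls /tmp/name/in_use.lock     # a leftover lock file from a crashed process?
  $ rm /tmp/name/in_use.lock     # only if you are certain no NameNode is running
  $ hadoop namenode              # then retry

Note also that /tmp/name lives under /tmp, which is typically wiped on reboot; that is one reason the advice quoted below sets hadoop.tmp.dir explicitly rather than leaving storage at its /tmp default.
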
>
> On Wed, Jun 20, 2012 at 5:53 PM, praveenesh kumar <[EMAIL PROTECTED]> wrote:
>> Is your hadoop-datastore directory created, and does it have the proper permissions?
>> Also, are you sure you executed the "hadoop namenode -format" command
>> before starting Hadoop?
>>
>> Regards,
>> Praveenesh
>>
>> On Wed, Jun 20, 2012 at 3:37 PM, Mohammad Tariq <[EMAIL PROTECTED]> wrote:
>>>
>>> Add the following properties to your config files, then do a fresh
>>> format and restart the processes:
>>>
>>> 1- In the core-site.xml file:
>>> <property>
>>>                <name>hadoop.tmp.dir</name>
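
The quoted message is truncated in the archive right after hadoop.tmp.dir. For context, a typical single-node configuration along these lines points HDFS storage at a persistent directory instead of /tmp; the /app/hadoop/tmp path below is illustrative, not taken from the thread:

  <!-- core-site.xml: base directory for Hadoop's working/storage files -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/tmp</value>
  </property>

  <!-- hdfs-site.xml: explicit NameNode / DataNode directories
       (on Hadoop 2.x the preferred names are dfs.namenode.name.dir
       and dfs.datanode.data.dir; the older names still work) -->
  <property>
    <name>dfs.name.dir</name>
    <value>/app/hadoop/tmp/dfs/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/app/hadoop/tmp/dfs/data</value>
  </property>

After changing these, run hadoop namenode -format (which erases any existing HDFS data) and restart the daemons, as advised earlier in the thread.
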
Further messages in this thread (collapsed):
  soham sardar        2012-06-21, 08:53
  soham sardar        2012-06-21, 08:55
  shashwat shriparv   2012-06-21, 11:42
  shashwat shriparv   2012-06-21, 11:45
  Soham Sardar        2012-06-21, 11:51
  Soham Sardar        2012-06-21, 11:54
  Shashwat Shriparv   2012-06-22, 07:30
  shashwat shriparv   2012-06-21, 19:02