MapReduce user mailing list: namenode instantiation error


rahul p 2012-08-09, 11:50
Re: namenode instantiation error
Hello Rahul,

   That's great. That's the best way to learn (I am doing the same :) ).
Since the installation part is over, I would suggest getting
familiar with HDFS and MapReduce first. Try to do basic
filesystem operations using the HDFS API and run the wordcount
program, if you haven't done it yet. Then move ahead.
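
A minimal warm-up along those lines might look like the following from the
shell. The file names and output paths here are placeholders, and the
examples-jar location is an assumption (CDH3 typically ships it under
/usr/lib/hadoop); adjust to your install.

```shell
# Basic HDFS filesystem operations, then the stock wordcount example.
hadoop fs -mkdir /user/hive/books                  # create an HDFS directory
hadoop fs -put alice.txt /user/hive/books/         # copy a local file into HDFS
hadoop fs -ls /user/hive/books                     # list the directory
hadoop fs -cat /user/hive/books/alice.txt          # read the file back

# Run the bundled wordcount job over that input (jar path is an assumption):
hadoop jar /usr/lib/hadoop/hadoop-examples.jar wordcount \
    /user/hive/books /user/hive/wordcount-out

hadoop fs -cat '/user/hive/wordcount-out/part*'    # view the word counts
```

The same operations can then be repeated programmatically through the HDFS
API (`org.apache.hadoop.fs.FileSystem`) once the shell versions work.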

Regards,
    Mohammad Tariq
On Thu, Aug 9, 2012 at 5:20 PM, rahul p <[EMAIL PROTECTED]> wrote:
> Hi Tariq,
>
> I am also new to Hadoop and trying to learn it myself. Can anyone help
> me with the same?
> I have installed CDH3.
>
>
>
> On Thu, Aug 9, 2012 at 6:21 PM, Mohammad Tariq <[EMAIL PROTECTED]> wrote:
>>
>> Hello Anand,
>>
>>     Is there any specific reason behind not using ssh??
>>
>> Regards,
>>     Mohammad Tariq
>>
>>
>> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <[EMAIL PROTECTED]>
>> wrote:
>> > Hi, I am just learning Hadoop and I am setting up the development
>> > environment with CDH3 in pseudo-distributed mode, without any ssh
>> > configuration, on CentOS 6.2. I can run the sample programs as usual,
>> > but when I try to run the namenode this is the error it logs...
>> >
>> > [hive@localhost ~]$ hadoop namenode
>> > 12/08/09 20:56:57 INFO namenode.NameNode: STARTUP_MSG:
>> > /************************************************************
>> > STARTUP_MSG: Starting NameNode
>> > STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
>> > STARTUP_MSG:   args = []
>> > STARTUP_MSG:   version = 0.20.2-cdh3u4
>> > STARTUP_MSG:   build = file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4
>> > -r 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon May 7 14:01:59 PDT 2012
>> > ************************************************************/
>> > 12/08/09 20:56:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with
>> > processName=NameNode, sessionId=null
>> > 12/08/09 20:56:57 INFO metrics.NameNodeMetrics: Initializing
>> > NameNodeMeterics using context
>> > object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
>> > 12/08/09 20:56:57 INFO util.GSet: VM type       = 64-bit
>> > 12/08/09 20:56:57 INFO util.GSet: 2% max memory = 17.77875 MB
>> > 12/08/09 20:56:57 INFO util.GSet: capacity      = 2^21 = 2097152 entries
>> > 12/08/09 20:56:57 INFO util.GSet: recommended=2097152, actual=2097152
>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: fsOwner=hive (auth:SIMPLE)
>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: supergroup=supergroup
>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isPermissionEnabled=false
>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem:
>> > dfs.block.invalidate.limit=1000
>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isAccessTokenEnabled=false
>> > accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
>> > 12/08/09 20:56:57 INFO metrics.FSNamesystemMetrics: Initializing
>> > FSNamesystemMetrics using context
>> > object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
>> > 12/08/09 20:56:57 ERROR namenode.FSNamesystem: FSNamesystem initialization failed.
>> > java.io.FileNotFoundException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
>> >     at java.io.RandomAccessFile.open(Native Method)
>> >     at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
>> >     at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
>> >     at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
>> >     at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
>> >     at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
>> >     at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
>> >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
>> >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
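
The `Permission denied` on `in_use.lock` in the trace above indicates that
the user starting the NameNode cannot write to the name directory. A hedged
fix sketch follows; the storage path comes from the log itself, but the
CDH3 service name and the `hive:hive` ownership are assumptions about this
particular throwaway pseudo-distributed setup.

```shell
# Option 1: start the daemon through the CDH service script, which runs
# it as the packaged Hadoop user that already owns the storage directory
# (service name is an assumption based on CDH3's hadoop-0.20 packaging):
sudo service hadoop-0.20-namenode start

# Option 2: if running `hadoop namenode` by hand as user 'hive' for
# learning purposes, give that user ownership of the cache directory
# first (path taken from the error message above):
sudo chown -R hive:hive /var/lib/hadoop-0.20/cache
hadoop namenode
```

Only one of the two options should be used; mixing manual and service-script
startups as different users tends to recreate the same lock-permission error.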