MapReduce >> mail # user >> namenode instantiation error


Re: namenode instantiation error
Thanks Tariq, I already have.

On Fri, Aug 10, 2012 at 7:51 PM, Mohammad Tariq <[EMAIL PROTECTED]> wrote:

> Hello Anand,
>
>    Sorry for being unresponsive. You have anyway received proper
> comments from the expert. I would just like to add one thing here.
> Since you want to reduce the complexity, I would suggest you configure
> SSH. It's a one-time pain, but it saves a lot of time and effort.
> Otherwise you have to go to each node even for the smallest thing. SSH
> configuration is quite straightforward, and if you need some help with
> it you can go here:
>
> http://cloudfront.blogspot.in/2012/07/how-to-setup-and-configure-ssh-on-ubuntu.html
>
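[Editor's note] The SSH setup referred to above boils down to generating a key pair and copying the public key to each node. A minimal sketch, safe to run as-is; the `hdfs@slave1` user/host is a placeholder, not something from this thread:

```shell
# One-time key setup with an empty passphrase. A scratch path is used here so
# the sketch is harmless to run; on a real node you would use ~/.ssh/id_rsa.
ssh-keygen -t rsa -N "" -f /tmp/demo_id_rsa -q

# On a real cluster you would then push the public key to every node
# (placeholder user/host) and verify that login no longer prompts:
#   ssh-copy-id hdfs@slave1
#   ssh hdfs@slave1 hostname

# The public half is what ends up in each node's authorized_keys.
cat /tmp/demo_id_rsa.pub
```

`ssh-copy-id` appends the key and fixes `~/.ssh` permissions in one step, which is where manual setups usually go wrong.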
> Regards,
>     Mohammad Tariq
>
>
> On Fri, Aug 10, 2012 at 5:34 PM, Harsh J <[EMAIL PROTECTED]> wrote:
> > You do not need SSH generally. See
> > http://wiki.apache.org/hadoop/FAQ#Does_Hadoop_require_SSH.3F
> >
> > 1. Your original issue is that you are starting the NameNode as the
> > completely wrong user. Start it as the "hdfs" user, in a packaged
> > environment. Run "sudo -u hdfs hadoop namenode" to start it in
> > foreground, or simply run "sudo service hadoop-0.20-namenode start" to
> > start it in the background. This will fix it up for you.
> >
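[Editor's note] Point 1 above can be checked before starting anything: the name directory should be owned by the same user that launches the NameNode. A hedged sketch, assuming the CDH3 default path that appears later in this thread; the check runs anywhere, the suggested command is only printed:

```shell
# CDH3 default name directory (the same path shown later in this thread).
NAME_DIR=${NAME_DIR:-/var/lib/hadoop-0.20/cache/hadoop/dfs/name}

# Who owns the metadata, and who am I?
owner=$(stat -c '%U' "$NAME_DIR" 2>/dev/null || echo unknown)
me=$(id -un)

if [ "$owner" != "$me" ]; then
  echo "NameNode metadata owned by '$owner' but you are '$me';"
  echo "start the daemon as that user, e.g.:"
  echo "  sudo -u $owner hadoop namenode"
fi
```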
> > 2. Your format was aborted because in 0.20.x/1.x the required input
> > was case-sensitive, while from 2.x onwards the input is
> > case-insensitive. So if you had typed "Y" instead of "y", it would
> > have succeeded.
> >
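[Editor's note] Point 2 amounts to a literal string comparison. A rough sketch of the check the old format prompt effectively performed (the message text is illustrative, not a verbatim quote of the 0.20.x source):

```shell
# 0.20.x/1.x compares the confirmation answer literally, so only an
# uppercase "Y" proceeds; anything else, including lowercase "y", aborts.
answer="y"
if [ "$answer" = "Y" ]; then
  echo "Re-formatting the name directory..."
else
  echo "Format aborted."
fi
```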
> > HTH!
> >
> > On Fri, Aug 10, 2012 at 4:35 PM, anand sharma <[EMAIL PROTECTED]> wrote:
> >> And here are the permissions for the file that is causing the problem:
> >>
> >> [root@localhost hive]# ls -l
> >> /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock
> >> -rwxrwxrwx. 1 hdfs hdfs 0 Aug 10 21:23
> >> /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock
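[Editor's note] Given that listing, one common way the lock file ends up wrongly owned is running `hadoop namenode` as another user (here `hive`). A hedged cleanup sketch; the demo below works on a scratch copy of the layout, with the real commands shown as comments since they need root and the packaged CDH3 paths:

```shell
# Demo on a scratch tree; on a real node the path would be
# /var/lib/hadoop-0.20/cache/hadoop/dfs and chown would need root:
#   sudo chown -R hdfs:hdfs /var/lib/hadoop-0.20/cache/hadoop/dfs
DFS=/tmp/demo-dfs
mkdir -p "$DFS/name"
touch "$DFS/name/in_use.lock"

# Remove a stale in_use.lock only once you are sure no NameNode process
# is still running and holding the directory.
rm -f "$DFS/name/in_use.lock"
```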
> >>
> >>
> >>
> >>
> >>
> >> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <[EMAIL PROTECTED]> wrote:
> >>>
> >>> Hi, I am just learning Hadoop and I am setting up a development
> >>> environment with CDH3 in pseudo-distributed mode, without any SSH
> >>> configuration, on CentOS 6.2. I can run the sample programs as
> >>> usual, but when I try to run the namenode this is the error it
> >>> logs...
> >>>
> >>> [hive@localhost ~]$ hadoop namenode
> >>> 12/08/09 20:56:57 INFO namenode.NameNode: STARTUP_MSG:
> >>> /************************************************************
> >>> STARTUP_MSG: Starting NameNode
> >>> STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
> >>> STARTUP_MSG:   args = []
> >>> STARTUP_MSG:   version = 0.20.2-cdh3u4
> >>> STARTUP_MSG:   build = file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4
> >>> -r 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on
> >>> Mon May  7 14:01:59 PDT 2012
> >>> ************************************************************/
> >>> 12/08/09 20:56:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with
> >>> processName=NameNode, sessionId=null
> >>> 12/08/09 20:56:57 INFO metrics.NameNodeMetrics: Initializing
> >>> NameNodeMeterics using context
> >>> object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> >>> 12/08/09 20:56:57 INFO util.GSet: VM type       = 64-bit
> >>> 12/08/09 20:56:57 INFO util.GSet: 2% max memory = 17.77875 MB
> >>> 12/08/09 20:56:57 INFO util.GSet: capacity      = 2^21 = 2097152 entries
> >>> 12/08/09 20:56:57 INFO util.GSet: recommended=2097152, actual=2097152
> >>> 12/08/09 20:56:57 INFO namenode.FSNamesystem: fsOwner=hive (auth:SIMPLE)
> >>> 12/08/09 20:56:57 INFO namenode.FSNamesystem: supergroup=supergroup
> >>> 12/08/09 20:56:57 INFO namenode.FSNamesystem: isPermissionEnabled=false
> >>> 12/08/09 20:56:57 INFO namenode.FSNamesystem:
> >>> dfs.block.invalidate.limit=1000
> >>> 12/08/09 20:56:57 INFO namenode.FSNamesystem: isAccessTokenEnabled=false
> >>> accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
> >>> 12/08/09 20:56:57 INFO metrics.FSNamesystemMetrics: Initializing
> >>> FSNamesystemMetrics using context
> >>> object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext