MapReduce >> mail # user >> Getting error unrecognized option -jvm on starting nodemanager
Re: Getting error unrecognized option -jvm on starting nodemanager
The issue here is that you tried one version of Hadoop and then changed to a
different version.

You cannot do that directly with Hadoop. You need to follow an upgrade
process when moving between Hadoop versions.

For now, since you are just starting out with Hadoop, I would recommend
just running a dfs format and starting HDFS again.
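For reference, the upgrade process is roughly along these lines (a sketch only; the exact steps depend on the version pair, so check the release notes for your target version):

```shell
# Rough HDFS upgrade flow between Hadoop versions (details vary by release).
# 1. Stop the old cluster cleanly.
stop-dfs.sh

# 2. Switch the installation to the new version, then start the
#    namenode with the -upgrade option so it converts the on-disk layout.
hadoop-daemon.sh start namenode -upgrade

# 3. Once the cluster is verified healthy, make the upgrade permanent
#    (this discards the pre-upgrade checkpoint, so rollback is no longer possible).
hdfs dfsadmin -finalizeUpgrade
```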
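On a 2.x install that usually amounts to something like the following (assuming the Hadoop `bin`/`sbin` directories are on your PATH):

```shell
# WARNING: formatting erases all existing HDFS metadata and data --
# only safe here because this is a fresh cluster with nothing to keep.
hdfs namenode -format

# Then start the HDFS daemons (namenode + datanodes) again.
start-dfs.sh
```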
On Tue, Dec 24, 2013 at 2:57 PM, Sitaraman Vilayannur <
[EMAIL PROTECTED]> wrote:

> When I run the namenode with the upgrade option I get the following error,
> and the namenode doesn't start...
> 2013-12-24 14:48:38,595 INFO org.apache.hadoop.hdfs.StateChange:
> STATE* Network topology has 0 racks and 0 datanodes
> 2013-12-24 14:48:38,595 INFO org.apache.hadoop.hdfs.StateChange:
> STATE* UnderReplicatedBlocks has 0 blocks
> 2013-12-24 14:48:38,631 INFO org.apache.hadoop.ipc.Server: IPC Server
> Responder: starting
> 2013-12-24 14:48:38,632 INFO org.apache.hadoop.ipc.Server: IPC Server
> listener on 9000: starting
> 2013-12-24 14:48:38,633 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: NameNode RPC up at:
> 192.168.1.2/192.168.1.2:9000
> 2013-12-24 14:48:38,633 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services
> required for active state
> 2013-12-24 14:50:50,060 ERROR
> org.apache.hadoop.hdfs.server.namenode.NameNode: RECEIVED SIGNAL 15:
> SIGTERM
> 2013-12-24 14:50:50,062 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
> ************************************************************/
>
>
> On 12/24/13, Sitaraman Vilayannur <[EMAIL PROTECTED]> wrote:
> > Found it,
> >  I get the following error on starting namenode in 2.2
> >
> 10/contrib/capacity-scheduler/*.jar:/usr/local/Software/hadoop-0.23.10/contrib/capacity-scheduler/*.jar:/usr/local/Software/hadoop-0.23.10/contrib/capacity-scheduler/*.jar
> > STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common
> > -r 1529768; compiled by 'hortonmu' on 2013-10-07T06:28Z
> > STARTUP_MSG:   java = 1.7.0_45
> > ************************************************************/
> > 2013-12-24 13:25:48,876 INFO
> > org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX
> > signal handlers for [TERM, HUP, INT]
> > 2013-12-24 13:25:49,042 INFO
> > org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
> > hadoop-metrics2.properties
> > 2013-12-24 13:25:49,102 INFO
> > org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
> > period at 10 second(s).
> > 2013-12-24 13:25:49,102 INFO
> > org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics
> > system started
> > 2013-12-24 13:25:49,232 WARN org.apache.hadoop.util.NativeCodeLoader:
> > Unable to load native-hadoop library for your platform... using
> > builtin-java classes where applicable
> > 2013-12-24 13:25:49,375 INFO org.mortbay.log: Logging to
> > org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> > org.mortbay.log.Slf4jLog
> > 2013-12-24 13:25:49,410 INFO org.apache.hadoop.http.HttpServer: Added
> > global filter 'safety'
> > (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
> > 2013-12-24 13:25:49,412 INFO org.apache.hadoop.http.HttpServer: Added
> > filter static_user_filter
> > (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter)
> > to context hdfs
> > 2013-12-24 13:25:49,412 INFO org.apache.hadoop.http.HttpServer: Added
> > filter static_user_filter
> > (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter)
> > to context static
> > 2013-12-24 13:25:49,412 INFO org.apache.hadoop.http.HttpServer: Added
> > filter static_user_filter
> > (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter)
> > to context logs
> > 2013-12-24 13:25:49,422 INFO org.apache.hadoop.http.HttpServer:
> > dfs.webhdfs.enabled = false
> > 2013-12-24 13:25:49,432 INFO org.apache.hadoop.http.HttpServer: Jetty
> > bound to port 50070

Nitin Pawar