Re: Hadoop 1.0.3 setup
From the error it looks like the port is already in use.

Can you please confirm that each of the services below is configured to
listen on a different port:
namenode
datanode
jobtracker
tasktracker
secondary namenode

None of these services should share a port. (The namenode log quoted below
is cut off before the actual exception; see the grep at the bottom of this
mail for a quick way to pull it out.)

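As a quick check (a rough sketch, assuming the stock Hadoop 1.x default
ports, the usual conf/ directory under your install, and that netstat and
lsof are installed; the real ports come from your core-site.xml,
hdfs-site.xml and mapred-site.xml):

# show anything already listening on the usual Hadoop 1.x ports
# (substitute whatever ports your config files actually use)
sudo netstat -tlnp | egrep ':(9000|9001|50010|50020|50030|50060|50070|50075|50090) '

# see which process owns a specific port, e.g. the namenode web UI
sudo lsof -i :50070

# double-check the RPC addresses you configured yourself
grep -A1 'fs.default.name\|mapred.job.tracker' \
    /usr/local/hadoop_dir/hadoop/conf/core-site.xml \
    /usr/local/hadoop_dir/hadoop/conf/mapred-site.xml
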
On Mon, Jul 9, 2012 at 6:51 PM, prabhu K <[EMAIL PROTECTED]> wrote:

> Do you have any idea about the issue described inline below?
>
> On Mon, Jul 9, 2012 at 5:29 PM, prabhu K <[EMAIL PROTECTED]> wrote:
>
> > Hi users,
> >
> > I have installed Hadoop version 1.0.3 and completed the single-node setup.
> > I then ran the start-all.sh script and am getting the following output:
> >
> >
> > hduser@md-trngpoc1:/usr/local/hadoop_dir/hadoop/bin$ ./start-all.sh
> > *Warning: $HADOOP_HOME is deprecated.*
> >
> > starting namenode, logging to
> > /usr/local/hadoop_dir/hadoop/libexec/../logs/hadoop-hduser-namenode-md-trngpoc1.out
> > localhost: starting datanode, logging to
> > /usr/local/hadoop_dir/hadoop/libexec/../logs/hadoop-hduser-datanode-md-trngpoc1.out
> > localhost: starting secondarynamenode, logging to
> > /usr/local/hadoop_dir/hadoop/libexec/../logs/hadoop-hduser-secondarynamenode-md-trngpoc1.out
> > starting jobtracker, logging to
> > /usr/local/hadoop_dir/hadoop/libexec/../logs/hadoop-hduser-jobtracker-md-trngpoc1.out
> > localhost: starting tasktracker, logging to
> > /usr/local/hadoop_dir/hadoop/libexec/../logs/hadoop-hduser-tasktracker-md-trngpoc1.out
> >
> >
> > When I run the jps command I get the following output; the namenode,
> > datanode and jobtracker are missing from the jps list.
> >
> >
> > hduser@md-trngpoc1:/usr/local/hadoop_dir/hadoop/bin$ jps
> > 20620 TaskTracker
> > 20670 Jps
> > 20347 SecondaryNameNode
> >
> >
> >
> > When I look at the namenode log file, I see the following output:
> >
> > hduser@md-trngpoc1:/usr/local/hadoop_dir/hadoop/logs$ more
> > hadoop-hduser-namenode-md-trngpoc1.log
> > 2012-07-09 17:05:42,989 INFO
> > org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
> > /************************************************************
> > STARTUP_MSG: Starting NameNode
> > STARTUP_MSG:   host = md-trngpoc1/10.5.114.110
> > STARTUP_MSG:   args = []
> > STARTUP_MSG:   version = 1.0.3
> > STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1335192; compiled by 'hortonfo' on Tue May  8 20:31:25 UTC 2012
> > ************************************************************/
> > 2012-07-09 17:05:43,082 INFO
> > org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
> > hadoop-metrics2.properties
> > 2012-07-09 17:05:43,089 INFO
> > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
> > MetricsSystem,sub=Stats registered.
> > 2012-07-09 17:05:43,090 INFO
> > org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
> > period at 10 second(s).
> > 2012-07-09 17:05:43,090 INFO
> > org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
> > 2012-07-09 17:05:43,169 INFO
> > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
> > 2012-07-09 17:05:43,174 INFO
> > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
> > 2012-07-09 17:05:43,175 INFO
> > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
> > NameNode registered.
> > 2012-07-09 17:05:43,193 INFO org.apache.hadoop.hdfs.util.GSet: VM
> > type       = 32-bit
> > 2012-07-09 17:05:43,193 INFO org.apache.hadoop.hdfs.util.GSet: 2% max
> > memory = 17.77875 MB
> > 2012-07-09 17:05:43,193 INFO org.apache.hadoop.hdfs.util.GSet:
> > capacity      = 2^22 = 4194304 entries
> > 2012-07-09 17:05:43,193 INFO org.apache.hadoop.hdfs.util.GSet:
> > recommended=4194304, actual=4194304
> > 2012-07-09 17:05:43,211 INFO
> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hduser
> > 2012-07-09 17:05:43,211 INFO
> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem:

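The namenode log quoted above stops before any error line. To get at the
actual exception (a rough sketch, using the log directory your start-all.sh
output points to):

cd /usr/local/hadoop_dir/hadoop/logs
# the last lines of the namenode log usually say why it died
tail -n 40 hadoop-hduser-namenode-md-trngpoc1.log
# or pull the error lines out of all the daemon logs at once
grep -E 'ERROR|FATAL|Exception' hadoop-hduser-*-md-trngpoc1.log | tail -n 40

If it really is a port clash you should see a java.net.BindException
("Address already in use") near the end of the namenode, datanode and
jobtracker logs; the port it names is the one to change or free up.
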
Nitin Pawar