Hadoop >> mail # user >> Re: Unable to start hadoop-0.20.2 but able to start hadoop-0.20.203 cluster


Thread:
srinivas 2012-01-25, 06:54
Xu, Richard 2011-05-27, 00:01
DAN 2011-05-27, 02:22
Xu, Richard 2011-05-27, 11:34
Simon 2011-05-27, 12:30
DAN 2011-05-27, 14:26
Re: Unable to start hadoop-0.20.2 but able to start hadoop-0.20.203 cluster

On May 27, 2011, at 7:26 AM, DAN wrote:
> You see you have "2 Solaris servers for now", and dfs.replication is setted as 3.
> These don't match.
That doesn't matter. HDFS will simply mark any files written as under-replicated and log a warning; the writes themselves still succeed.
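If the warning is unwanted on a two-node cluster, the default can be lowered to match the number of datanodes. A config sketch for hdfs-site.xml (3 is the shipped default; the value here is only an example for a two-node setup):

```xml
<!-- hdfs-site.xml: set replication no higher than the datanode count -->
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
```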

The problem is that the datanode processes aren't running and/or aren't communicating to the namenode. That's what the "java.io.IOException: File /tmp/hadoop-cfadm/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1" means.
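To confirm that, check whether a DataNode JVM is actually up (`jps` ships with the JDK) and scan the datanode log for the two classic 0.20-era startup failures. The log path and the exact error strings below are assumptions drawn from common reports of this symptom, not from this thread:

```shell
#!/bin/sh
# A healthy slave node should show a "DataNode" entry in jps output.
if command -v jps >/dev/null 2>&1; then jps; fi

# check_datanode_log: grep a datanode log for two common startup failures.
# "Incompatible namespaceIDs" typically appears after the namenode was
# reformatted while old data directories (e.g. under /tmp) were left behind.
check_datanode_log() {
  log="$1"
  if grep -q "Incompatible namespaceIDs" "$log"; then
    echo "namespaceID mismatch: clear dfs.data.dir on this node and restart"
  elif grep -q "connection exception" "$log"; then
    echo "cannot reach namenode: check fs.default.name, /etc/hosts, firewall"
  else
    echo "no known startup error found; read the log manually"
  fi
}
```

Typical usage would be `check_datanode_log $HADOOP_HOME/logs/hadoop-*-datanode-*.log` on each slave (log file name assumed from the usual 0.20 layout).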

It should also be pointed out that writing to /tmp (the default) is a bad idea.  This should get changed.
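A minimal sketch of moving the storage directories off /tmp with 0.20-era property names (the /var/hadoop paths are placeholders; pick durable local disks on your hosts):

```xml
<!-- core-site.xml: move Hadoop's scratch space off /tmp -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/var/hadoop/tmp</value>
</property>

<!-- hdfs-site.xml: put namenode metadata and datanode blocks on real disks -->
<property>
  <name>dfs.name.dir</name>
  <value>/var/hadoop/name</value>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/var/hadoop/data</value>
</property>
```

Leaving these under /tmp means a reboot or tmp-cleaner can silently wipe the namenode metadata and datanode blocks.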

Also, since you are running Solaris, check the FAQ for the settings you'll need in order to make Hadoop's broken username detection work properly, among other things.
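One concrete Solaris fix in this area: Hadoop's scripts shell out to `whoami`, which on Solaris lives in /usr/ucb and is often not on the default PATH. A hedged sketch for conf/hadoop-env.sh (the path is the standard Solaris location, but verify on your hosts):

```shell
# conf/hadoop-env.sh (sketch): put the BSD-compatibility directory first
# so `whoami` resolves; Solaris keeps it in /usr/ucb.
export PATH=/usr/ucb:"$PATH"
```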
Xu, Richard 2011-05-27, 20:18
Allen Wittenauer 2011-05-27, 22:52
Konstantin Boudnik 2011-05-27, 03:18
Harsh J 2011-05-27, 13:20
Xu, Richard 2011-05-27, 21:32