I do think running a build would cause that. Doing an 'ant clean'
would resolve it for you.
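For reference, a rough sketch of the workaround (assuming the standard
Hadoop 1.x Ant build, where the jars are stamped from the `version`
property): because Ant properties are immutable once set, passing
`-Dversion` on the command line takes precedence over the
`1.1.2-SNAPSHOT` default in build.xml, so rebuilt jars keep the
release version string.

```shell
# Remove artifacts already stamped with the SNAPSHOT version
ant clean

# Rebuild the examples, pinning the release version. The command-line
# -Dversion definition wins over the build.xml line
#   <property name="version" value="1.1.2-SNAPSHOT"/>
# because Ant properties cannot be reassigned once set.
ant -Dversion=1.1.1 examples
```

This assumes the build honors `${version}` consistently; check the
rebuilt jar names to confirm.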
But I agree that the default version should perhaps be the release
version itself (although that would then make it harder to tell that
the user ran a build?); please file a JIRA for further discussion.
On Fri, Dec 14, 2012 at 1:45 AM, Mark Grover
<[EMAIL PROTECTED]> wrote:
> Thanks folks.
> Zizon: The JIRA you suggested (HADOOP-8968) has a fix version of
> 1.2.0. It can't be used on the 1.1.1 release.
> Harsh: I did try rebuilding the examples jar by running "ant examples",
> which failed (for a different reason); it seems that changed the
> datanode's version, which caused this issue.
> The line that caused the problem in build.xml is
> <property name="version" value="1.1.2-SNAPSHOT"/>
> Is that the right thing to do? Or should we ship build.xml with the
> version value set to 1.1.1 in released tarballs?
> On Wed, Dec 12, 2012 at 10:54 PM, Harsh J <[EMAIL PROTECTED]> wrote:
>> Did you run a build on the tarball before using it, by any chance?
>> On Thu, Dec 13, 2012 at 12:03 PM, Mark Grover
>> <[EMAIL PROTECTED]> wrote:
>>> Hi all,
>>> I downloaded the Hadoop-1.1.1 tarball from one of the mirrors and
>>> configured it in pseudo-distributed mode.
>>> Namenode starts fine but datanode fails to start because of version mismatch.
>>> The value of hadoop.relaxed.worker.version.check property (related to
>>> https://issues.apache.org/jira/browse/HADOOP-8209) doesn't help in
>>> this case since the major+minor versions don't match. This sounds like
>>> a bug to me; we seem to have missed changing the datanode version
>>> before the release. Can someone please confirm?
>>> Here is the error:
>>> 2012-12-12 22:17:42,251 ERROR
>>> org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException:
>>> Incompatible versions: namenode version 1.1.1 revision 1411108
>>> datanode version 1.1.2-SNAPSHOT revision and
>>> hadoop.relaxed.worker.version.check is enabled
>>> at org.apache.hadoop.hdfs.server.datanode.DataNode.handshake(DataNode.java:632)
>>> at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:376)
>>> at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:309)
>>> at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1651)
>>> at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1590)
>>> at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1608)
>>> at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1734)
>>> at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1751)
>> Harsh J