MapReduce >> mail # user >> Re: Any reference for upgrade hadoop from 1.x to 2.2


Re: Any reference for upgrade hadoop from 1.x to 2.2
Not that I'm aware of.

-Sandy
On Thu, Dec 5, 2013 at 10:11 PM, Nirmal Kumar <[EMAIL PROTECTED]> wrote:

>  Thanks Sandy for the useful info.
>
>
>
> Is there any open JIRA issue for that?
>
>
>
> -Nirmal
>
>
>
> *From:* Sandy Ryza [mailto:[EMAIL PROTECTED]]
> *Sent:* Thursday, December 05, 2013 10:38 PM
>
> *To:* [EMAIL PROTECTED]
> *Subject:* Re: Any reference for upgrade hadoop from 1.x to 2.2
>
>
>
> Unfortunately there is no way to see MR1 jobs in the MR2 job history.
>
>
>
> -Sandy
>
>
>
> On Thu, Dec 5, 2013 at 3:47 AM, Nirmal Kumar <[EMAIL PROTECTED]>
> wrote:
>
>  Hi Adam,
>
>
>
> *Apache Hadoop-2.0.6-alpha* has the following issue.
>
>
>
> This issue was fixed in 2.1.0-beta
> <https://issues.apache.org/jira/browse/HDFS/fixforversion/12324031>
>
>
>
> 1. Hadoop HDFS <https://issues.apache.org/jira/browse/HDFS>
>
> 2. HDFS-4917 <https://issues.apache.org/jira/browse/HDFS-4917>:
> *Start-dfs.sh cannot pass the parameters correctly*
>
>
>
>
> https://issues.apache.org/jira/browse/HDFS-4917?jql=project%20%3D%20HDFS%20AND%20text%20~%20upgrade
>
>
>
> I set up *Apache Hadoop 2.1.0-beta*
> <https://issues.apache.org/jira/browse/HDFS/fixforversion/12324031> and
> was then able to run the commands:
>
> ./hadoop-daemon.sh start namenode -upgrade
>
> ./hdfs dfsadmin -finalizeUpgrade
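
The two commands above can be wrapped in a small script. A minimal sketch, assuming $HADOOP_HOME points at the new 2.1.0-beta install (the sbin/bin paths and the default install location are assumptions, not from this thread); it is dry-run by default and only prints the commands it would run unless RUN=1:

```shell
#!/usr/bin/env bash
# Sketch of the HDFS upgrade flow discussed in this thread (2.1.0-beta and later).
# Assumptions: $HADOOP_HOME points at the new install; run as the HDFS user.
# Dry-run by default: set RUN=1 to actually execute the commands.
set -euo pipefail

HADOOP_HOME="${HADOOP_HOME:-/opt/hadoop-2.1.0-beta}"   # assumed install path
RUN="${RUN:-0}"
PLAN=()

step() {
  PLAN+=("$*")          # record the command for inspection
  echo "+ $*"           # show what would be (or is being) run
  if [ "$RUN" = "1" ]; then "$@"; fi
}

# 1. Start the NameNode with -upgrade so it converts the old metadata layout.
step "$HADOOP_HOME/sbin/hadoop-daemon.sh" start namenode -upgrade

# 2. Once the cluster is verified healthy, make the upgrade permanent.
step "$HADOOP_HOME/bin/hdfs" dfsadmin -finalizeUpgrade
```

Until `-finalizeUpgrade` is run, HDFS keeps the pre-upgrade metadata so a rollback is still possible; finalizing makes the new layout permanent.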
>
>
>
> 2013-12-05 21:16:44,412 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner             = cloud (auth:SIMPLE)
>
> 2013-12-05 21:16:44,412 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup          = supergroup
>
> 2013-12-05 21:16:44,412 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
>
> 2013-12-05 21:16:44,412 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
>
> 2013-12-05 21:16:44,426 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
>
> 2013-12-05 21:16:44,908 INFO org.apache.hadoop.util.GSet: Computing
> capacity for map INodeMap
>
> 2013-12-05 21:16:44,908 INFO org.apache.hadoop.util.GSet: VM type       = 32-bit
>
> 2013-12-05 21:16:44,908 INFO org.apache.hadoop.util.GSet: 1.0% max memory
> = 889 MB
>
> 2013-12-05 21:16:44,908 INFO org.apache.hadoop.util.GSet: capacity      = 2^21 = 2097152 entries
>
> 2013-12-05 21:16:44,923 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
> occuring more than 10 times
>
> 2013-12-05 21:16:44,930 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> dfs.namenode.safemode.threshold-pct = 0.9990000128746033
>
> 2013-12-05 21:16:44,930 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> dfs.namenode.safemode.min.datanodes = 0
>
> 2013-12-05 21:16:44,930 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> dfs.namenode.safemode.extension     = 30000
>
> 2013-12-05 21:16:44,931 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on
> namenode is enabled
>
> 2013-12-05 21:16:44,932 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use
> 0.03 of total heap and retry cache entry expiry time is 600000 millis
>
> 2013-12-05 21:16:44,947 INFO org.apache.hadoop.util.GSet: Computing
> capacity for map Namenode Retry Cache
>
> 2013-12-05 21:16:44,947 INFO org.apache.hadoop.util.GSet: VM type       = 32-bit
>
> 2013-12-05 21:16:44,947 INFO org.apache.hadoop.util.GSet:
> 0.029999999329447746% max memory = 889 MB
>
> 2013-12-05 21:16:44,947 INFO org.apache.hadoop.util.GSet: capacity      = 2^16 = 65536 entries
>
> 2013-12-05 21:16:45,038 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Lock on /home/cloud/hadoop_migration/hadoop-data/name/in_use.lock acquired
> by nodename [EMAIL PROTECTED]
>
> 2013-12-05 21:16:45,128 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Using clusterid: CID-4ece2cb2-6159-4836-a428-4f0e324dab13
>
> 2013-12-05 21:16:45,145 INFO
> org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Recovering