Re: Java Versions and Hadoop
On 10/10/11 19:57, Scott Carey wrote:

> What JRE (6 update ?) is planned to be used when testing 0.23 at scale?
> Should JRE 7u2 also be tested?  Updates to both JRE 6 and JRE 7 are due
> out very soon.  0.23 will be code complete after that.  If I had enough
> resources and time, I'd test both the latest JRE 6 and JRE 7.

Makes sense. Ideally anyone planning to move to 0.23 should bring up
some kind of cluster running that code on their chosen JVM, with their
own algorithms, just to see what the outcome is. Too bad nobody has a
large idle cluster with data they don't care about. EMC have just
announced one though, and HortonWorks and Cloudera will also have
clusters. That doesn't mean you shouldn't test on your own
hardware/OS/network/application setup.
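
One low-tech thing that makes such a trial more useful is recording exactly
which JVM build every node actually ran, since "Java 6" covers a lot of
ground between updates. A minimal sketch along those lines (the class name
is mine, it is not anything shipped with Hadoop):

public class JvmReport {
  // Print the properties that identify a JVM build and host OS, so a trial
  // cluster run can be tied back to the exact JRE under test.
  public static void main(String[] args) {
    String[] keys = {
        "java.version",      // e.g. 1.6.0_29
        "java.vm.name",
        "java.vm.version",
        "java.vendor",
        "os.name", "os.version", "os.arch"
    };
    for (String key : keys) {
      System.out.println(key + " = " + System.getProperty(key));
    }
  }
}

Run it (or something like it) on each node and keep the output alongside the
test results.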
>
> A performance regression for Hadoop's pure Java CRC32 happened in a recent
> JRE 6 update; a bug was filed, they fixed it, and that algorithm is now
> included in their test suite.  JVM releases don't include whole stacks,
> but someone could engage the OpenJDK developers to find out what kind of
> contributions OpenJDK can accept for test code -- I'm not sure how
> compatible it is with Apache.
> http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2011-July/005971.html
> http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2011-September/006289.html

Thanks, Scott, that is really informative.
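
For anyone who wants a quick smoke test before rolling a new JRE update onto
a cluster, something like the rough sketch below can be run on the old and
new JREs and the throughput compared. To stay self-contained it exercises the
JDK's own java.util.zip.CRC32 rather than Hadoop's pure-Java implementation;
swapping in org.apache.hadoop.util.PureJavaCrc32 (which implements
java.util.zip.Checksum) would measure the code path the regression Scott
mentions actually hit. Treat it as a crude check, not a benchmark harness.

import java.util.zip.CRC32;

public class Crc32Smoke {
  // Checksum a fixed buffer repeatedly and report MB/s per round. Compare the
  // later (warmed-up) rounds across JRE builds; a large drop is worth a closer
  // look with a real harness.
  public static void main(String[] args) {
    byte[] buf = new byte[64 * 1024];
    for (int i = 0; i < buf.length; i++) {
      buf[i] = (byte) i;
    }
    long sink = 0;  // accumulate results so the JIT can't elide the work
    for (int round = 0; round < 5; round++) {
      long start = System.nanoTime();
      CRC32 crc = new CRC32();
      for (int i = 0; i < 10000; i++) {
        crc.reset();
        crc.update(buf, 0, buf.length);
        sink ^= crc.getValue();
      }
      double seconds = (System.nanoTime() - start) / 1e9;
      double megabytes = 10000.0 * buf.length / (1024 * 1024);
      System.out.printf("round %d: %.1f MB/s%n", round, megabytes / seconds);
    }
    System.out.println("(ignore) " + sink);
  }
}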

>
>>
>> In the meantime, even if Oracle say Java 6 is EOL, if people pay money to
>> keep it alive (and they will have to in any project you don't want to
>> requalify for Java 7), then it may keep going for longer, though the
>> updates won't be so widely available.
>
> You can always keep running on the old JVM with the old version of Hadoop
> you have had in your cluster, but if you upgrade Hadoop to a new version,
> you might as well upgrade your JVM at the same time and pay the testing
> cost once.

I have mixed feelings about that. You may be introducing too many
variables at once: if something regresses after the upgrade, it's harder
to tell whether the new Hadoop version or the new JVM is to blame.