I believe that in the future Spark's functional-style API will dominate the
big data world, and very few people will use the native MapReduce API. Even
now, users usually reach for third-party MapReduce libraries such as
Cascading, Scalding, and Scoobi, or higher-level languages like Hive and
Pig, rather than the native MapReduce API.
And this functional style of API is compatible with both Hadoop's MapReduce
and Spark's RDDs, so the underlying execution engine can be transparent to
users. My guess (or hope) is that in the future the API will be unified,
while the underlying execution engine is chosen intelligently according to
the resources you have and the metadata of the data you operate on.
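To make the idea concrete, here is a minimal word-count sketch of that functional style. It runs on plain Scala collections so it needs no cluster; the same flatMap/map/reduce shape is what Spark's RDD API and libraries like Scalding expose (on Spark, `lines` would come from something like `sc.textFile(...)` instead — the input data here is made up for illustration):

```scala
// Sample input standing in for lines of a text file.
val lines = Seq("spark makes big data simple", "big data with spark")

// Functional-style word count: split, group, count.
val wordCounts: Map[String, Int] =
  lines
    .flatMap(_.split("\\s+"))              // split each line into words
    .groupBy(identity)                     // group identical words together
    .map { case (w, ws) => (w, ws.size) }  // count each group
```

The point is that the pipeline itself says nothing about where it executes; swapping the collection for an RDD keeps the code essentially unchanged, which is what makes a unified API plausible.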
On Thu, Mar 6, 2014 at 9:02 AM, Edward Capriolo <[EMAIL PROTECTED]> wrote: