Do you mean
(1) running MapReduce jobs from R, or
(2) running R from a MapReduce job?
Without much extra ceremony, for the latter, you could use either MapReduce
streaming or Pig to call a custom program, as long as R is installed on
every node of the cluster.
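For example, a streaming job can use an R script as the mapper. This is only a sketch: the jar path, input/output directories, and script name (mapper.R) are assumptions, and it presumes Rscript is on the PATH of every task node.

```shell
# Ship mapper.R to the cluster with -file and run it as the map task.
# The streaming jar path varies by Hadoop version/distribution.
hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar \
  -input  /user/me/input \
  -output /user/me/output \
  -mapper "Rscript mapper.R" \
  -file   mapper.R
```

Inside mapper.R you would read lines from stdin and write tab-separated key/value pairs to stdout, the usual streaming contract. You can test the script locally first with `cat sample.txt | Rscript mapper.R | sort` before submitting it to the cluster.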
On Wed, Mar 26, 2014 at 6:39 AM, Saravanan Nagarajan <
[EMAIL PROTECTED]> wrote: