Unless I'm missing something, it sounds like the OP wants to chain jobs where the results from one job are the input to another...
Of course it's Sunday morning and I haven't had my first cup of coffee, so I could be misinterpreting the OP's question.
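If chaining is what's wanted, the usual pattern is just to run the jobs sequentially from one driver, pointing the second job's input at the first job's output path. A rough sketch using the `org.apache.hadoop.mapreduce` API (the intermediate path and job names here are made up; with no mapper/reducer classes set, the identity mapper/reducer are used):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ChainedJobs {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Scratch directory for handing results from job 1 to job 2 (assumed).
    Path intermediate = new Path("/tmp/job1-output");

    Job first = Job.getInstance(conf, "first-pass");
    first.setJarByClass(ChainedJobs.class);
    FileInputFormat.addInputPath(first, new Path(args[0]));
    FileOutputFormat.setOutputPath(first, intermediate);
    // waitForCompletion(true) blocks until the job finishes.
    if (!first.waitForCompletion(true)) {
      System.exit(1);
    }

    // The second job reads the first job's output directly from HDFS.
    Job second = Job.getInstance(conf, "second-pass");
    second.setJarByClass(ChainedJobs.class);
    FileInputFormat.addInputPath(second, intermediate);
    FileOutputFormat.setOutputPath(second, new Path(args[1]));
    System.exit(second.waitForCompletion(true) ? 0 : 1);
  }
}
```

You'd set your own mapper/reducer classes on each job; for more complex dependency graphs there's also JobControl.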
If the OP wanted to send the data to each node and use it as a lookup table, his initial output is already on HDFS, so he could just open the file and read it into memory in Mapper.setup().
Note: if the file is too big to fit in memory, then you probably wouldn't want to use the distributed cache anyway...
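Something like this is what I mean by reading it in setup() (the "lookup.path" config key and the tab-separated key/value format are assumptions; the real file would be whatever the first job wrote):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class LookupMapper extends Mapper<LongWritable, Text, Text, Text> {
  private final Map<String, String> lookup = new HashMap<>();

  @Override
  protected void setup(Context context) throws IOException {
    // "lookup.path" is a hypothetical config key the driver would set to
    // the previous job's output file on HDFS.
    Path path = new Path(context.getConfiguration().get("lookup.path"));
    FileSystem fs = path.getFileSystem(context.getConfiguration());
    try (BufferedReader reader = new BufferedReader(
        new InputStreamReader(fs.open(path), StandardCharsets.UTF_8))) {
      String line;
      while ((line = reader.readLine()) != null) {
        // Assumed key<TAB>value lines, e.g. the default TextOutputFormat.
        String[] parts = line.split("\t", 2);
        if (parts.length == 2) {
          lookup.put(parts[0], parts[1]);
        }
      }
    }
  }

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    // Join each input record against the in-memory table.
    String joined = lookup.getOrDefault(value.toString(), "");
    context.write(value, new Text(joined));
  }
}
```

Every mapper loads its own copy, so again, this only works if the table comfortably fits in heap.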
Sent from my iPhone
On Jan 28, 2012, at 7:11 PM, "Ravi Prakash" <[EMAIL PROTECTED]> wrote:
> Take a look at distributed cache for distributing data to all nodes. I'm
> not sure what you mean by messages. The MR programming paradigm is
> different from MPI.
> On Sat, Jan 28, 2012 at 5:52 AM, Oliaei <[EMAIL PROTECTED]> wrote:
>> I want to run an MR procedure under Hadoop, then send some messages
>> to all of the nodes, and after that run another MR.
>> What's the easiest way to send data to all or some of the nodes? Is there
>> any way to do that under Hadoop without using other frameworks?