As per MapReduce behavior, the mappers process all the input file(s) in
parallel, i.e., no order is guaranteed among the input files.
If you want to process each file separately and maintain the order, then
you need to process each file in an independent MapReduce job, so that
your client is responsible for submitting the job for each file in order.
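A minimal sketch of that client-side loop (the file names, jar, and class
names below are placeholders, not from this thread): submit one job per
file and rely on the submission call blocking until the job completes, so
file N is fully processed before file N+1 starts.

```shell
#!/bin/sh
# Sketch: the client enforces ordering by running one job per file,
# sequentially. File names here assume a sortable timestamp prefix.
submitted=""
for f in 2013-01-15-0001.json 2013-01-15-0002.json 2013-01-15-0003.json; do
    # In production this line would be something like:
    #   hadoop jar myjob.jar com.example.JsonJob "$f" "out/$f"
    # 'hadoop jar' blocks until the job finishes, which is what
    # guarantees the next file is not touched until this one is done.
    echo "submitting job for $f"
    submitted="$submitted $f"
done
```

Note that this serializes the jobs, so you lose cross-file parallelism;
that is the price of a strict ordering guarantee.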
On Tue, Jan 15, 2013 at 7:55 PM, Panshul Whisper <[EMAIL PROTECTED]> wrote:
> I was wondering if Hadoop performs the MapReduce operations on the data
> while maintaining the order or sequence in which it received the data.
> I have a Hadoop cluster that is receiving JSON files, which are processed
> and then stored in HBase.
> For correct calculation it is essential for the JSON files to be processed
> in the order they are received. How can I make sure this happens?
> Thanking you,
> Ouch Whisper