MapReduce, mail # user - How does mapper process partial records?


Re: How does mapper process partial records?
Harsh J 2013-01-25, 08:50
I don't quite get what you mean; we don't have such a flaw. The task
for the first split makes sure to read one extra record past its
boundary, even if the split's last byte is a newline. The subsequent
splits (that is, those with non-zero offsets) always ignore their
first record, even if it is complete within their given range.

You can read the implementation by following the sources I've linked
in similar questions asked in the past:
http://search-hadoop.com/m/veN7E1gWbij/linereader&subj=Re+DFS+and+the+RecordReader
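
To make the convention concrete, here is a minimal, self-contained sketch
in Java. It is not the actual Hadoop LineRecordReader source (the class and
method names below are invented for illustration); it only mimics the two
rules described above: a split with a non-zero offset discards everything up
to and including the first newline, and every split keeps reading until it
has consumed the line that starts at or before its end boundary, so the
record crossing a boundary is consumed by exactly one mapper.

import java.util.ArrayList;
import java.util.List;

public class SplitLineSketch {

    // Return the lines that a split covering bytes [start, start + length)
    // would hand to its mapper, following the TextInputFormat convention.
    static List<String> linesForSplit(byte[] data, long start, long length) {
        long end = start + length;
        long pos = start;

        // Non-first splits: skip up to and including the first newline, even
        // if the record beginning at 'start' is complete. The previous split
        // owns that record, because it reads one record past its own end
        // (see the loop below).
        if (start != 0) {
            while (pos < data.length && data[(int) pos] != '\n') {
                pos++;
            }
            pos++; // step past the newline itself
        }

        List<String> lines = new ArrayList<>();
        // Keep reading while the next line *starts* at or before 'end'.
        // Using "<=" means the line that begins exactly at the boundary, or
        // straddles it, is still read by this split -- the "one extra record".
        while (pos <= end && pos < data.length) {
            long lineStart = pos;
            while (pos < data.length && data[(int) pos] != '\n') {
                pos++;
            }
            lines.add(new String(data, (int) lineStart, (int) (pos - lineStart)));
            pos++; // move past the newline; this may run beyond 'end'
        }
        return lines;
    }

    public static void main(String[] args) {
        byte[] file = "rec1\nrec2\nrec3\nrec4\n".getBytes();

        // Split boundary in the middle of rec2: the first split finishes
        // rec2, the second split skips the partial bytes and starts at rec3.
        System.out.println(linesForSplit(file, 0, 7));   // [rec1, rec2]
        System.out.println(linesForSplit(file, 7, 13));  // [rec3, rec4]

        // Split boundary exactly on a newline: rec3 is complete within the
        // second split's range, yet the second split still skips it, because
        // the first split already read it as its "one extra record".
        System.out.println(linesForSplit(file, 0, 10));  // [rec1, rec2, rec3]
        System.out.println(linesForSplit(file, 10, 10)); // [rec4]
    }
}

As the second pair of calls shows, even a first record that is complete
within a split's range is skipped safely, because the previous split has
already read it, so every record reaches exactly one mapper.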

On Fri, Jan 25, 2013 at 6:07 AM, Praveen Sripati
<[EMAIL PROTECTED]> wrote:
> Harsh,
>
> Thanks for the response.
>
> From http://wiki.apache.org/hadoop/HadoopMapReduce
>
>>For example TextInputFormat will read the last line of the FileSplit past
>> the split boundary and when reading other than the first FileSplit,
>> TextInputFormat ignores the content up to the first newline.
>
> When the first record in a split other than the first split is complete
> and does not span the split boundary, then based on the above logic that
> particular record is not processed by any mapper.
>
>
> Thanks,
> Praveen
>
> Cloudera Certified Developer for Apache Hadoop CDH4 (95%)
> http://www.thecloudavenue.com/
> http://stackoverflow.com/users/614157/praveen-sripati
>
> If you aren’t taking advantage of big data, then you don’t have big data,
> you have just a pile of data.
>
>
> On Fri, Jan 25, 2013 at 12:52 AM, Harsh J <[EMAIL PROTECTED]> wrote:
>>
>> Hi Praveen,
>>
>> This is explained at http://wiki.apache.org/hadoop/HadoopMapReduce
>> [Map section].
>>
>> On Thu, Jan 24, 2013 at 10:20 PM, Praveen Sripati
>> <[EMAIL PROTECTED]> wrote:
>> > Hi,
>> >
>> > HDFS may split a file in the middle of a record. So, how does the mapper
>> > processing the second block (b2) determine that its first record is
>> > incomplete and that it should start processing from the second record in
>> > the block (b2)?
>> >
>> > Thanks,
>> > Praveen
>>
>>
>>
>> --
>> Harsh J
>
>

--
Harsh J