HDFS is agnostic about the contents of the data you store. Think about
it: a newline character is not a universal record separator, so HDFS
cannot assume any particular record format when it splits a file into
blocks.
This question has been asked several times before (search on
http://search-hadoop.com, for example). Read
http://wiki.apache.org/hadoop/HadoopMapReduce to understand how, even
though HDFS splits a file at fixed byte offsets (every 64 MB by
default), MapReduce (and other HDFS readers) still makes sure each
record is read whole: the record reader for a split skips the partial
record at its start and reads past the split's end to finish the last
record it began.
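The convention used by Hadoop's text input handling can be sketched in
plain Java. This is a simplified, self-contained illustration (not
Hadoop's actual LineRecordReader code): a reader for split
[start, end) skips the first line when start > 0 (the previous split's
reader owns it) and keeps reading while its position is <= end, so a
line that straddles the boundary is completed by exactly one reader.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the split-reading convention described above.
public class SplitLineReader {

    // Read the lines belonging to split [start, end) of `data`.
    public static List<String> readSplit(byte[] data, long start, long end) {
        List<String> lines = new ArrayList<>();
        int pos = (int) start;
        if (start > 0) {
            // Skip the (possibly partial) first line: it was already
            // read by the reader of the previous split.
            while (pos < data.length && data[pos] != '\n') pos++;
            pos++; // step past the newline
        }
        // Keep reading while the line *starts* at or before `end`;
        // a line that begins inside the split is finished even if it
        // crosses the split boundary.
        while (pos < data.length && pos <= end) {
            int lineStart = pos;
            while (pos < data.length && data[pos] != '\n') pos++;
            lines.add(new String(data, lineStart, pos - lineStart));
            pos++; // step past the newline
        }
        return lines;
    }

    public static void main(String[] args) {
        byte[] data = "alpha\nbravo\ncharlie\ndelta\n".getBytes();
        int splitSize = 10; // artificial "block" size that cuts lines mid-record
        List<String> all = new ArrayList<>();
        for (int off = 0; off < data.length; off += splitSize) {
            all.addAll(readSplit(data, off, Math.min(off + splitSize, data.length)));
        }
        System.out.println(all); // every line appears whole, exactly once
    }
}
```

Even though the 10-byte "blocks" cut "bravo" and "charlie" in the
middle, each line comes out intact because exactly one reader claims
it. HDFS itself never does this; it is the reading layer (the input
format) that stitches records back together.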
On Sat, Jan 4, 2014 at 3:59 PM, VJ Shalish <[EMAIL PROTECTED]> wrote:
> While creating the blocks for a file containing n lines, how does
> Hadoop take care of the problem of not cutting a line in between
> while creating blocks?
> Is it taken care of by Hadoop?