I don't see the similarity. If you take the case of a normal record
file, such as a text file, you read data from the next block. That is,
n-1 blocks are "opened" twice, but not read entirely in both attempts.
In the link you refer to, a specific block will always be read by all
readers, if I understand the format correctly (kinda similar to what a
schemaless Avro file reader would have to do to read the schema out of
the file).
I've personally written a few record readers and measured this in the
past - in both cases the extra connection requirement proved no
problem at all and took up hardly any visible time, given the tiny
amount of reading they do. There's also no (extra) seek required, btw,
for the extra connection. We read off the head of the block until
we've found the terminating point. Likewise for header reads from a
file.
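To make the over-read concrete, here's a minimal sketch of the usual line-oriented split rule (illustrative names, not Hadoop's actual RecordReader API): a record straddling the split's end belongs to this split, so the reader reads a little past its boundary; conversely, a split starting mid-record skips ahead to the first record terminator, since that partial record belongs to the previous split.

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Hedged sketch of split-boundary handling for newline-terminated
// records. `readSplit` and its byte-array "file" are illustrative.
public class SplitReader {
    // Return the records belonging to split [start, end) of `file`.
    static List<String> readSplit(byte[] file, int start, int end) {
        int pos = start;
        if (start > 0) {
            // Skip the partial first record: it belongs to the
            // previous split, which read past its own `end` to get it.
            while (pos < file.length && file[pos] != '\n') pos++;
            pos++; // step past the newline
        }
        List<String> records = new ArrayList<>();
        while (pos < end && pos < file.length) {
            int recStart = pos;
            // This scan may run past `end` into the next block -- the
            // small over-read (extra connection) under discussion.
            while (pos < file.length && file[pos] != '\n') pos++;
            records.add(new String(file, recStart, pos - recStart,
                                   StandardCharsets.UTF_8));
            pos++;
        }
        return records;
    }

    public static void main(String[] args) {
        byte[] file = "aaaa\nbbbb\ncccc\n".getBytes(StandardCharsets.UTF_8);
        // Split the 15-byte "file" at byte 7, mid-way through "bbbb".
        System.out.println(readSplit(file, 0, 7));  // finishes "bbbb"
        System.out.println(readSplit(file, 7, 15)); // skips partial "bbbb"
    }
}
```

Note the head-of-block read is sequential from where the connection starts, so no extra seek is involved, matching the point above.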
All that said, your worry would hold true were there a format designed
in such a way that reads were expensive just to get the small bit of
required metadata, i.e. if it kept that metadata at an offset-pointed
location elsewhere in the file. Is your format similar to such a thing?
On Thu, Jun 13, 2013 at 11:27 PM, John Lilley <[EMAIL PROTECTED]> wrote:
> When MR assigns data splits to map tasks, does it assign a set of
> non-contiguous blocks to one map? The reason I ask is, thinking through the
> problem, if I were the MR scheduler I would attempt to hand a map task a
> bunch of blocks that all exist on the same datanode, and then schedule the
> map task on that node. E.g. if I have an HDFS file with 10000 blocks and I
> want to create 1000 map tasks I’d like each map task to have 10 blocks, but
> those blocks are unlikely to be contiguous on a given datanode.
> This is related to a question I had asked earlier, which is whether any
> benefit could be had by aligning data splits along block boundaries to avoid
> slopping reads of a block to the next block and requiring another datanode
> connection. The answer I got was that the extra connection overhead wasn’t
> important. The reason I bring this up again is that comments in this
> discussion (https://issues.apache.org/jira/browse/HADOOP-3315) imply that
> doing an extra seek to the beginning of the file to read a magic number on
> open is a significant overhead, and this looks like a similar issue to me.