Re: Question about the task assignment strategy
Hi,

I tried an experiment similar to yours but couldn't replicate the issue.

I generated 64 MB files and added them to my DFS - one file from every
machine, with a replication factor of 1, like you did. My block size was
64MB. I verified that the blocks were located on the same machines they
were added from.
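
In case it helps to reproduce the setup, here is a minimal sketch against
the FileSystem API (the path and payload are placeholders) that writes a
file with replication 1 and a 64MB block size, then prints which hosts
hold its blocks:

    import java.util.Arrays;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class WriteAndLocate {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());

        // create(path, overwrite, bufferSize, replication, blockSize)
        Path path = new Path("/data/data01"); // placeholder path
        FSDataOutputStream out =
            fs.create(path, true, 4096, (short) 1, 64L * 1024 * 1024);
        out.write(new byte[64 * 1024 * 1024]); // placeholder payload
        out.close();

        // With replication 1 and the default placement policy, every
        // block should report the host the file was written from.
        FileStatus st = fs.getFileStatus(path);
        for (BlockLocation b : fs.getFileBlockLocations(st, 0, st.getLen())) {
          System.out.println(b.getOffset() + " -> "
              + Arrays.toString(b.getHosts()));
        }
      }
    }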

Then, I launched a wordcount (without the min split size config). As
expected, it created 8 maps, and I could verify that all the tasks ran
data-local - i.e. every task read off its own datanode. From the launch
times of the tasks, it appeared that this scheduling behaviour was
independent of the order in which the tasks were launched. This behaviour
was retained even with the min split size config.

Could you share the size of the input you generated (i.e. the sizes of
data01..data14)? Also, what job are you running - specifically, what is
the input format?

BTW, this wiki entry:
http://wiki.apache.org/hadoop/HowManyMapsAndReduces talks a little bit
about how the maps are created.
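
For reference, the split size computation in FileInputFormat boils down
to the following (this is the new-API form; the old mapred API
substitutes a "goal size" of total size / requested maps for maxSize):

    // minSize comes from mapred.min.split.size, maxSize from the max
    // split size setting, blockSize from the file's block size.
    protected long computeSplitSize(long blockSize, long minSize, long maxSize) {
      return Math.max(minSize, Math.min(maxSize, blockSize));
    }

So with a 128MB block size and a 512MB min split size, every split comes
out at 512MB and therefore spans four blocks.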

Thanks
Hemanth

On Wed, Sep 12, 2012 at 7:49 AM, Hiroyuki Yamada <[EMAIL PROTECTED]> wrote:

> I figured out the cause.
> The HDFS block size is 128MB, but
> I set mapred.min.split.size to 512MB,
> and data-local processing goes wrong for some reason.
> When I remove the mapred.min.split.size configuration,
> tasktrackers pick data-local tasks.
> Why does this happen?
>
> It seems like a bug.
> A split is a logical container of blocks,
> so nothing is wrong logically.
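>
> For what it's worth, here is a simplified sketch (not the exact Hadoop
> source; blockIndexAt() is a hypothetical helper) of how
> FileInputFormat.getSplits() builds splits and picks the hosts a split
> is considered local to - the first place I would look when locality
> goes wrong:
>
>     long splitSize = Math.max(minSize, Math.min(maxSize, blockSize));
>
>     long bytesRemaining = fileLength;
>     while (bytesRemaining > 0) {
>       long offset = fileLength - bytesRemaining;
>       long length = Math.min(splitSize, bytesRemaining);
>       // The split is credited to the hosts of the block at its start
>       // offset, even when (512MB splits over 128MB blocks) it spans
>       // four blocks - and replication 1 leaves a single candidate host.
>       int blkIndex = blockIndexAt(blkLocations, offset);
>       splits.add(new FileSplit(path, offset, length,
>           blkLocations[blkIndex].getHosts()));
>       bytesRemaining -= length;
>     }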
>
> On Wed, Sep 12, 2012 at 1:20 AM, Hiroyuki Yamada <[EMAIL PROTECTED]>
> wrote:
> > Hi, thank you for the comment.
> >
> >> Task assignment takes data locality into account first and not block
> >> sequence.
> >
> > Does it work like that when the replication factor is set to 1?
> >
> > I just ran an experiment to check the behavior.
> > There are 14 nodes (node01 to node14), with 14 datanodes and
> > 14 tasktrackers running.
> > I first created data to be processed on each node (say data01 to
> > data14), and I put each file into HDFS from its own node (under the
> > /data directory: /data/data01, ..., /data/data14).
> > The replication factor is set to 1, so according to the default block
> > placement policy, each file is stored on its local node (data01 is
> > stored on node01, data02 is stored on node02, and so on).
> > In that setting, I launched a job that processes /data, and
> > what happened is that the tasktrackers read data01 to data14
> > sequentially, which means the tasktrackers first take all the data
> > from node01, then node02, then node03, and so on.
> >
> > If the tasktrackers take data locality into account as you say,
> > each tasktracker should take its local tasks (data): the tasktracker
> > at node02 should take data02 blocks if there are any.
> > But it didn't work like that.
> > Why is this happening?
> >
> > Are there any documents about this?
> > What part of the source code is doing this?
> >
> > Regards,
> > Hiroyuki
> >
> > On Tue, Sep 11, 2012 at 11:27 PM, Hemanth Yamijala
> > <[EMAIL PROTECTED]> wrote:
> >> Hi,
> >>
> >> Task assignment takes data locality into account first, not block
> >> sequence. In Hadoop, tasktrackers ask the jobtracker to be assigned
> >> tasks. When such a request comes to the jobtracker, it will try to
> >> look for an unassigned task that needs data close to the tasktracker,
> >> and will assign it.
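> >>
> >> In pseudocode, the heartbeat handling looks roughly like this (a
> >> simplified, hypothetical model - PendingMap and sameRack() are
> >> stand-ins, not the actual JobInProgress source):
> >>
> >>     import java.util.List;
> >>
> >>     class LocalityScheduler {
> >>       interface PendingMap { List<String> splitHosts(); }
> >>
> >>       List<PendingMap> pendingMaps;
> >>
> >>       // Called when a tasktracker on 'host' asks for work.
> >>       PendingMap obtainNewMapTask(String host) {
> >>         for (PendingMap t : pendingMaps)            // 1. node-local
> >>           if (t.splitHosts().contains(host)) return t;
> >>         for (PendingMap t : pendingMaps)            // 2. rack-local
> >>           if (sameRack(t.splitHosts(), host)) return t;
> >>         return pendingMaps.isEmpty() ? null
> >>             : pendingMaps.get(0);                   // 3. any task
> >>       }
> >>
> >>       // Placeholder: the real code resolves racks via the
> >>       // cluster's topology mapping.
> >>       boolean sameRack(List<String> hosts, String host) { return false; }
> >>     }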
> >>
> >> Thanks
> >> Hemanth
> >>
> >>
> >> On Tue, Sep 11, 2012 at 6:31 PM, Hiroyuki Yamada <[EMAIL PROTECTED]> wrote:
> >>>
> >>> Hi,
> >>>
> >>> I want to check whether my understanding of task assignment in Hadoop
> >>> is correct.
> >>>
> >>> When scanning a file with multiple tasktrackers,
> >>> I am wondering how a task is assigned to each tasktracker.
> >>> Is it based on the block sequence or on data locality?
> >>>
> >>> Let me explain my question with an example.
> >>> There is a file composed of 10 blocks (block1 to block10), where
> >>> block1 is the beginning of the file and block10 is the tail of the file.