Re: Hadoop's default FIFO scheduler
Hi Chen,

I think it's due to disk/network performance, i.e. the speed of reading
the data from disk or over the network into local memory.

If job3's input data is not yet complete enough to start its mappers, but
job4's is, the scheduler will pick job4's tasks from the list to run first.

I think the so-called FIFO principle applies to the setup stage: the job
that arrives first gets set up first.

Nan

On Fri, Oct 15, 2010 at 1:29 AM, He Chen <[EMAIL PROTECTED]> wrote:

> They all arrived within 1 minute. I understand there is a setup phase which
> will use any free slot, whether map or reduce.
>
> My queue time is the period between the time a job is submitted and the start
> of its map stage. Because the setup phase has a higher priority than map and
> reduce tasks, any job submitted to the queue will be set up no matter how many
> previous map and reduce tasks still need to be assigned.
>
> Now, I am sure job3's setup stage finished earlier than job4's. However,
> job3's map stage started later than job4's. BTW, they request the same
> number of blocks.
>
>
> On Thu, Oct 14, 2010 at 12:10 PM, abhishek sharma <[EMAIL PROTECTED]>
> wrote:
>
> > What is the inter-arrival time between these jobs?
> >
> > There is a "set up" phase for jobs before they are launched. It is
> > possible that the order of jobs can change due to slightly different
> > set up times. Apart from the number of blocks, it may also matter "where"
> > these blocks lie.
> >
> > Abhishek
> >
> > On Thu, Oct 14, 2010 at 10:06 AM, He Chen <[EMAIL PROTECTED]> wrote:
> > > Hi all
> > >
> > > I am testing the performance of my Hadoop clusters with Hadoop's default
> > > FIFO scheduler. But I have found an interesting phenomenon.
> > >
> > > When I submit a series of jobs, some jobs are executed earlier even though
> > > they were submitted later. All jobs request the same number of blocks. For
> > > example:
> > > job 1 submitted at time 0
> > > job 2 submitted at time 1
> > > job 3 submitted at time 2
> > > job 4 submitted at time 3
> > >
> > >
> > > Job 4's queue time is smaller than job 3's queue time. This disobeys the
> > > FIFO principle. Can anyone give a hint?
> > >
> > > Thanks
> > >
> > > Chen
> > >
> >
>