Re: A new map reduce framework for iterative/pipelined jobs.
Kevin Burton 2011-12-27, 11:12
> Thanks for sharing. I'd love to play with it, do you have a
> README/user-guide for systat?
Not a ton but I could write some up...
Basically I modeled it after vmstat/iostat on Linux.
The theory is that most platforms have similar facilities, so a driver can be
written per platform; at runtime the right driver is selected, or an
'unsupported' null object is returned which doesn't do anything.
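A minimal sketch of that driver-selection idea — all class and method names here are illustrative, not the actual systat API:

```java
// Hypothetical sketch of the per-platform driver pattern described above:
// detect the platform at runtime and fall back to a do-nothing null object
// on unsupported platforms. Names are assumptions, not the real systat code.
interface SystatDriver {
    String sample(); // grab a snapshot of system counters (illustrative)
}

class LinuxSystat implements SystatDriver {
    // a real driver would parse /proc or shell out to vmstat/iostat
    public String sample() { return "cpu=5% io=100MB/s"; }
}

class NullSystat implements SystatDriver {
    public String sample() { return ""; } // unsupported platform: no-op
}

public class SystatFactory {
    static SystatDriver detect() {
        String os = System.getProperty("os.name").toLowerCase();
        return os.contains("linux") ? new LinuxSystat() : new NullSystat();
    }

    public static void main(String[] args) {
        // callers never need to care whether stats are actually available
        System.out.println(SystatFactory.detect().getClass().getSimpleName());
    }
}
```

The null object keeps the calling code free of platform checks: every call site just invokes the driver unconditionally.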
The output is IO, CPU and network throughput per second for the entire
run... so it would basically be a per-second average over the whole run,
e.g. over 5 minutes if the job took 5 minutes to execute (see below for an
example)
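The averaging itself is just totals over elapsed time — a tiny sketch (numbers are made up):

```java
// Sketch of the per-second averaging described above: counters totalled over
// the whole run, divided by elapsed seconds. Values are illustrative.
public class AvgThroughput {
    static long perSecond(long totalBytes, long elapsedSeconds) {
        return totalBytes / elapsedSeconds;
    }

    public static void main(String[] args) {
        // a 5 minute (300 s) run that moved 15,000,000,000 bytes
        System.out.println(perSecond(15_000_000_000L, 300)); // 50000000
    }
}
```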
Couple of questions:
> # How does peregrine deal with the case that you might not have available
> resources to start reduces while the maps are running?
If maps are not completed we can't start a reduce phase until they complete.
Right now I don't have speculative execution turned on in the builds but
the support is there.
One *could* do the initial segment sort of the reduce, but doing the full
merge isn't really helpful, as even if ONE key changes you have to re-merge
all the IO.
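The reason one changed key forces a full re-merge falls out of how a merge works: the merged order depends on every segment at once. A toy sketch (not Peregrine's actual merge, which would stream sorted segments rather than heap everything):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

// Toy sketch of merging sorted map-output segments in a reduce. Because the
// global order interleaves keys from every segment, a change in any one
// segment invalidates the whole merged output.
public class SegmentMerge {
    static List<Integer> merge(List<List<Integer>> segments) {
        PriorityQueue<Integer> heap = new PriorityQueue<>();
        for (List<Integer> s : segments) heap.addAll(s);
        List<Integer> out = new ArrayList<>();
        while (!heap.isEmpty()) out.add(heap.poll());
        return out;
    }

    public static void main(String[] args) {
        // keys from two map segments interleave in the merged result
        System.out.println(merge(List.of(List.of(1, 4), List.of(2, 3))));
        // [1, 2, 3, 4]
    }
}
```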
The issue of running a ReduceMap where the output of one reduce is the
input of another map does require some coordination.
Right now my plan is to split the workload in half... so that the buffers
are just 50% of their original values, since both have to be in play.
I'm not using the Java heap's memory but instead mmap(), so I can get away
with some more fun tricks like shrinking various buffers and so forth.
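In JVM terms, mmap() shows up as a MappedByteBuffer: the pages live in the OS page cache rather than the Java heap, so they don't count against -Xmx and can be resized by remapping. A self-contained sketch (file name and sizes are illustrative):

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// Sketch of off-heap buffering via mmap(): a MappedByteBuffer is backed by
// OS pages, not the Java heap, so buffer sizing is independent of -Xmx.
public class MmapBuffer {
    static long roundTrip() throws Exception {
        File f = File.createTempFile("peregrine-buffer", ".tmp");
        f.deleteOnExit();
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw");
             FileChannel ch = raf.getChannel()) {
            MappedByteBuffer buf =
                ch.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            buf.putLong(0, 42L); // the write lands in mapped pages, off-heap
            return buf.getLong(0);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip()); // 42
    }
}
```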
> Is the map-output buffered to disk before the reduces start?
No... Right now I don't have combiners implemented... We do direct
shuffling, where IO is written directly to the reducer nodes instead of
being written to disk first.
I believe strongly *but need more evidence* that on practical loads
direct shuffling will be far superior to the indirect shuffling mechanism
that Hadoop uses.
There ARE some situations, I think, where indirect shuffling could solve
some pathological cases, but in practice these won't arise (and
certainly not in the pagerank impl and with our data).
We're going to buffer the IO so that about 250MB or so is put through a
combiner before being sent through snappy for compression, and then the
result is sent on to the reducer nodes.
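The combine-then-compress path sketched above, in miniature. Deflater stands in for snappy so the example is self-contained; the threshold and all names are illustrative, not Peregrine's actual code:

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;

// Sketch of the buffered combine-then-compress path: accumulate map output
// up to a threshold, combine it, then compress before shipping to reducers.
// Deflater stands in for snappy here; the threshold is illustrative.
public class CombineCompress {
    static final int THRESHOLD = 250 * 1024 * 1024; // ~250MB in the text

    static byte[] compress(byte[] combined) {
        Deflater d = new Deflater();
        d.setInput(combined);
        d.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        while (!d.finished()) out.write(chunk, 0, d.deflate(chunk));
        d.end();
        return out.toByteArray();
    }

    public static void main(String[] args) {
        // combined map output is typically highly repetitive, so it packs well
        byte[] combined = "key1=10 key2=20 ".repeat(1000).getBytes();
        byte[] packed = compress(combined);
        System.out.println(packed.length < combined.length); // true
    }
}
```

Compressing after the combiner (rather than before) means the compressor sees already-deduplicated keys, which is where most of the win comes from.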
> # How does peregrine deal with failure of in-flight reduces (potentially
> after they have recieved X% of maps' outputs).
The reduces are our major checkpoint mode right now....
There are two solutions I'm thinking about... (and perhaps both will be
implemented in production and you can choose which strategy to use).
1. One replica of a partition starts a reduce; none of the blocks are
replicated, so if it fails the whole reduce has to start again.
2. All blocks are replicated, but if a reduce fails it can just resume on
another replica.
... I think #1 though in practice will be the best strategy. A physical
machine hosts about 10 partitions, so even if a crash DOES happen and you
have to resume a reduce you're only re-doing 1/10th of the data...
And since the other 9 partitions are split across 9 different hosts, their
reduces can be done in parallel while that recovery is happening.
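The recovery arithmetic is simple enough to write down — with P partitions per host, a crash only costs 1/P of that host's data, and the rest fans out in parallel (numbers illustrative):

```java
// Sketch of the recovery-cost argument above: with P partitions per host,
// losing one host only forces re-reducing 1/P of its data per partition,
// and the surviving partitions run on other hosts in parallel.
public class RecoveryCost {
    static double fractionToRedo(int partitionsPerHost) {
        return 1.0 / partitionsPerHost;
    }

    public static void main(String[] args) {
        System.out.println(fractionToRedo(10)); // 0.1
    }
}
```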
> # How much does peregrine depend on PFS? One idea worth exploring might be
> to run peregrine within YARN (MR2) as an application. Would you be
> interested in trying that?
It depends heavily upon PFS... the block allocation is all done via the
PFS layer and these need to be deterministic or the partitioning
functionality will not work.
Also, all the IO is done through async IO ... because at 10k machines you
can't do threaded IO as it would require too much memory.
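The memory argument is easy to make concrete: thread-per-connection IO pays one stack per peer, and ~1MB is a common JVM default stack size (both numbers are assumptions for illustration):

```java
// Back-of-the-envelope for the async-IO point above: threaded IO needs one
// stack per connection. At an assumed ~1MB stack, 10k peers means ~10GB of
// stacks alone, while async IO needs only a small buffer per connection.
public class IoMemory {
    static long threadedStackMb(int peers, int stackMbPerThread) {
        return (long) peers * stackMbPerThread;
    }

    public static void main(String[] args) {
        System.out.println(threadedStackMb(10_000, 1)); // 10000 (MB, ~10GB)
    }
}
```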
I was thinking the other day (and talking with my staff) that right now if
you view the distributed systems space there is a LOT of activity in Hadoop
because it's one of the most widely deployed platforms out there...
But if you look at the *history* of computer science, we have NEVER settled
on a single OS, single programming language, single editor, etc. There is
always a set of choices out there because some tools are better suited to
the task than others.
MySQL vs Postgres, Cassandra vs HBase, etc... and in a lot of cases it's
not even 'versus', as some tools are just better suited to a given job than
others.
I think this might be the Peregrine/Hadoop situation.
Peregrine would be VERY bad for some tasks right now, for example... If you
have log files to process and just want to grok them, query them, etc...
then a Hadoop / Pig / Hive setup would be WAY easier to run and far more
productive.
My thinking is that Peregrine should just focus on the area where I think
Hadoop could use some improvement. Specifically iterative jobs and more
efficient pipelined IO...
I also think that there are a lot of ergonomic areas where collaboration
should/could happen across a number of runtimes... for example the sysstat
support.
For our part we're going to use the Hadoop CRC32 encoder for storing blocks.
Disk       reads    writes   bytes read   bytes written   Avg req size   %util
----       -----    ------   ----------   -------------   ------------   -----
sda        82,013   40,933   15,377,601   18,155,568      272.75         100.00

Interface  bits rx  bits tx
---------  -------  -------
lo         125      125