OutputFormat and Reduce Task
I'm trying to optimize the performance of my OutputFormat's
implementation. I'm doing things similar to HBase's
TableOutputFormat--sending the reducer's output to a distributed k-v store.
So, the context.write() call basically winds up doing a Put() on the store.
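
For concreteness, my RecordWriter looks roughly like the sketch below. KvStoreClient and its methods are just placeholders for the actual store client, not a real API:

import java.io.IOException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.RecordWriter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;

// Placeholder for the real store client; only the shape matters here.
interface KvStoreClient {
  void put(byte[] key, byte[] value) throws IOException;
  void flush() throws IOException;
  void close() throws IOException;
}

// Synchronous version: every context.write() ends up here and blocks on the
// round trip to the store.
public class KvStoreRecordWriter extends RecordWriter<Text, Text> {
  private final KvStoreClient client;

  public KvStoreRecordWriter(KvStoreClient client) {
    this.client = client;
  }

  @Override
  public void write(Text key, Text value) throws IOException {
    client.put(key.copyBytes(), value.copyBytes());  // blocks until the store acknowledges the put
  }

  @Override
  public void close(TaskAttemptContext context) throws IOException {
    client.flush();
    client.close();
  }
}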

Although I haven't profiled, a sequence of thread dumps on the reduce tasks
reveals that the threads are RUNNABLE and spending their time in put() and the
method calls beneath it. So, I proceeded to decouple the two by implementing a
producer/consumer pattern with an ExecutorService, where context.write() acts
as the producer and RecordWriter.write() as the consumer.
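
The decoupled version looks roughly like the sketch below. It reuses the KvStoreClient placeholder from above; the pool size, queue bound, and shutdown timeout are arbitrary, and it assumes the store client can safely be called from multiple threads:

import java.io.IOException;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.RecordWriter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;

// Decoupled version: write() only enqueues work (producer), while a small pool
// drains the queue and performs the blocking puts (consumer). The bounded queue
// plus CallerRunsPolicy applies back-pressure so the reducer cannot race
// arbitrarily far ahead of the store.
public class AsyncKvStoreRecordWriter extends RecordWriter<Text, Text> {
  private final KvStoreClient client;
  private final ExecutorService executor =
      new ThreadPoolExecutor(4, 4, 0L, TimeUnit.MILLISECONDS,
          new ArrayBlockingQueue<Runnable>(1000),
          new ThreadPoolExecutor.CallerRunsPolicy());

  public AsyncKvStoreRecordWriter(KvStoreClient client) {
    this.client = client;
  }

  @Override
  public void write(Text key, Text value) {
    final byte[] k = key.copyBytes();   // copy out of the reused Text objects
    final byte[] v = value.copyBytes();
    executor.submit(new Runnable() {    // returns quickly; the put runs on a pool thread
      public void run() {
        try {
          client.put(k, v);
        } catch (IOException e) {
          throw new RuntimeException(e);  // real code would surface this in close()
        }
      }
    });
  }

  @Override
  public void close(TaskAttemptContext context) throws IOException, InterruptedException {
    executor.shutdown();                               // stop accepting new puts
    executor.awaitTermination(10, TimeUnit.MINUTES);   // wait for the outstanding ones
    client.flush();
    client.close();
  }
}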

My understanding is that Context.write() calls RecordWriter.write() and that
these two calls are synchronous: the first blocks until the second completes.
Each reduce() invocation therefore blocks until its context.write() finishes,
so the reduce on the next key also blocks, making things run slowly in my case.
Is this correct? Also, does this mean that the OutputFormat is instantiated
once by the TaskTracker for the job's reduce logic, so that all keys processed
by the reducers share the same OutputFormat instance? Or is a new OutputFormat
instantiated for each key the reducer operates on?

Thanks,
Dhruv
Harsh J 2012-11-02, 03:14
Dhruv 2012-11-02, 17:35
Harsh J 2012-11-02, 17:47