Re: What is the preferred way to pass a small number of configuration parameters to a mapper or reducer
Answer B sounds pathologically bad to me.

A and C are the only viable options.

Neither B nor D works.  B fails because it would be extremely hard to get
the right records to the right components and because it pollutes the data
input with configuration data.  D fails because statics don't work in
parallel programs.
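
For concreteness, here is a minimal sketch of option A using the
org.apache.hadoop.mapreduce API; the parameter name "my.threshold" and the
line-length filter are purely illustrative assumptions. A corresponding
sketch of option C appears after the quoted question below.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;

public class ConfParamExample {

    public static class LineLengthMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {

        private int threshold;

        @Override
        protected void setup(Context context) {
            // Read the parameter back inside the task attempt.
            threshold = context.getConfiguration().getInt("my.threshold", 0);
        }

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Illustrative use of the parameter: only emit long lines.
            if (value.getLength() > threshold) {
                context.write(value, new IntWritable(1));
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Set the parameter in the driver before the job is submitted.
        conf.setInt("my.threshold", 42);
        Job job = new Job(conf, "conf-param-example");  // Job.getInstance(conf) in newer releases
        job.setJarByClass(ConfParamExample.class);
        job.setMapperClass(LineLengthMapper.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        // Input/output paths and job.waitForCompletion(true) omitted for brevity.
    }
}
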
On Fri, Dec 28, 2012 at 12:17 AM, Kshiva Kps <[EMAIL PROTECTED]> wrote:

>
> Which one is correct?
>
> What is the preferred way to pass a small number of configuration
> parameters to a mapper or reducer?
>
> A.  As key-value pairs in the JobConf object.
>
> B.  As a custom input key-value pair passed to each mapper or reducer.
>
> C.  Using a plain text file via the DistributedCache, which each mapper
> or reducer reads.
>
> D.  Through a static variable in the MapReduce driver class (i.e., the
> class that submits the MapReduce job).
>
> Answer: B
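
And here is a minimal sketch of option C, shipping a small plain-text file of
key=value pairs through the DistributedCache and reading it in each task; the
HDFS path /config/params.txt and the key names are assumptions for
illustration only.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;

public class CacheParamExample {

    public static class ParamAwareMapper
            extends Mapper<LongWritable, Text, Text, NullWritable> {

        private final Map<String, String> params = new HashMap<String, String>();

        @Override
        protected void setup(Context context) throws IOException {
            // The cached file is materialized on each task node's local disk.
            Path[] cached =
                    DistributedCache.getLocalCacheFiles(context.getConfiguration());
            if (cached != null && cached.length > 0) {
                BufferedReader in = new BufferedReader(new FileReader(cached[0].toString()));
                try {
                    String line;
                    while ((line = in.readLine()) != null) {
                        String[] kv = line.split("=", 2);
                        if (kv.length == 2) {
                            params.put(kv[0].trim(), kv[1].trim());
                        }
                    }
                } finally {
                    in.close();
                }
            }
        }

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Parameters read in setup() are now available, e.g. params.get("threshold").
            context.write(value, NullWritable.get());
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Register a small file that already lives on HDFS; it is shipped to every task node.
        DistributedCache.addCacheFile(new URI("/config/params.txt"), conf);
        Job job = new Job(conf, "cache-param-example");
        job.setJarByClass(CacheParamExample.class);
        job.setMapperClass(ParamAwareMapper.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(NullWritable.class);
        // Input/output paths and job.waitForCompletion(true) omitted for brevity.
    }
}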