Re: Writing intermediate key,value pairs to file and read it again
How many intermediate keys? If small enough, you can keep them in memory. If large, you can wait for the job to finish and feed them into a follow-up job as input with the MultipleInputs API.
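
A minimal sketch of that second approach, assuming the first job writes its intermediate (Text, Text) pairs out as a SequenceFile; the paths, class names, and key/value types here are placeholders rather than anything from this thread:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.MultipleInputs;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SecondPassDriver {

  // Mapper for the original text input (hypothetical logic).
  public static class OriginalMapper extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void map(LongWritable key, Text value, Context ctx)
        throws IOException, InterruptedException {
      ctx.write(new Text("orig"), value);
    }
  }

  // Mapper for the first job's materialized (Text, Text) pairs.
  public static class IntermediateMapper extends Mapper<Text, Text, Text, Text> {
    @Override
    protected void map(Text key, Text value, Context ctx)
        throws IOException, InterruptedException {
      ctx.write(key, value);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "second pass");
    job.setJarByClass(SecondPassDriver.class);

    // Each input path gets its own InputFormat and Mapper.
    MultipleInputs.addInputPath(job, new Path("/data/original"),
        TextInputFormat.class, OriginalMapper.class);
    MultipleInputs.addInputPath(job, new Path("/data/job1-output"),
        SequenceFileInputFormat.class, IntermediateMapper.class);

    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(Text.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(Text.class);
    FileOutputFormat.setOutputPath(job, new Path("/data/job2-output"));

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

With no reducer class set, the default identity reducer runs over the merged inputs; in practice you would plug in your own reducer for the second pass.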

On Apr 20, 2013, at 10:43 AM, Vikas Jadhav <[EMAIL PROTECTED]> wrote:

> Hello,
> Can anyone help me with the following issue:
> writing intermediate key/value pairs to a file and reading them back again.
>
> Let us say I have to write each intermediate pair received at the reducer to a file, then read it back as a key/value pair and use it for further processing.
>
> I found IFile.java, which has a Reader and a Writer, but I am not able to understand how to use them. For example, I don't understand the Counter value passed as the last parameter, "spilledRecordsCounter".
>
>
> Thanks.
>
>
> --
>
>
>   Regards,
>    Vikas
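
On the IFile question above: IFile appears to be an internal class that the framework uses for map-side spill files (hence the spilled-records Counter in its constructor), so it is awkward to reuse directly. For simply writing key/value pairs to a file and reading them back, SequenceFile is the usual public API. A minimal sketch, with the path and the Text/IntWritable types chosen purely for illustration:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class PairFileSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path path = new Path("/tmp/intermediate-pairs.seq");  // placeholder path

    // Write the pairs out.
    SequenceFile.Writer writer =
        SequenceFile.createWriter(fs, conf, path, Text.class, IntWritable.class);
    try {
      writer.append(new Text("someKey"), new IntWritable(42));
    } finally {
      writer.close();
    }

    // Read them back as key/value pairs.
    SequenceFile.Reader reader = new SequenceFile.Reader(fs, path, conf);
    try {
      Text key = new Text();
      IntWritable value = new IntWritable();
      while (reader.next(key, value)) {
        System.out.println(key + "\t" + value);
      }
    } finally {
      reader.close();
    }
  }
}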