Re: How to give consecutive numbers to output records?
There is no in-MapReduce mechanism for cross-task synchronization. You'll
need to use something like ZooKeeper or an external database for this.
Note that this will greatly complicate your life.

If I were you, I'd try to either redesign the pipeline to eliminate this
need, or maybe get really clever. For example, do your numbers need to be
sequential, or just unique?

If the latter, then take the byte offset into the reducer's current output
file and combine that with the reducer id (e.g.,
<current-byte-offset><zero-padded-reducer-id>) to guarantee that they're all
building unique sequences. If the former... rethink your pipeline? :)

- Aaron
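
Below is a minimal sketch of the "unique, not sequential" approach Aaron
describes, assuming the org.apache.hadoop.mapreduce API; the class name and
the per-reducer counter (standing in for the output-file byte offset) are
illustrative assumptions, not code from the thread. It pairs that counter
with the zero-padded reducer partition id to form ids that are unique across
all reduce tasks.

import java.io.IOException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class UniqueIdReducer extends Reducer<Text, Text, Text, Text> {

  private long sequence = 0;  // per-reducer counter, stands in for the byte offset
  private int reducerId;      // this reduce task's partition number

  @Override
  protected void setup(Context context) {
    // Each reduce task knows its own partition id (0 .. numReduceTasks - 1).
    reducerId = context.getTaskAttemptID().getTaskID().getId();
  }

  @Override
  protected void reduce(Text key, Iterable<Text> values, Context context)
      throws IOException, InterruptedException {
    for (Text value : values) {
      // <per-reducer-counter><zero-padded-reducer-id> is unique across reducers
      // as long as the padding width covers the number of reduce tasks.
      String recordId = String.format("%d%04d", sequence++, reducerId);
      context.write(new Text(recordId), value);
    }
  }
}

Zero-padding to four digits assumes at most 10,000 reduce tasks; widen the
padding if the job runs more.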

On Tue, Oct 27, 2009 at 8:55 PM, Mark Kerzner <[EMAIL PROTECTED]> wrote:

> Hi,
>
> I need to number all output records consecutively, like 1, 2, 3, ...
>
> This is no problem with one reducer: make recordId an instance variable in
> the Reducer class and set conf.setNumReduceTasks(1).
>
> However, the architecture, which is forced by processing needs, makes a
> single reducer a bottleneck. Can I have a global variable for all reducers
> that would give each one the next consecutive recordId? In the database
> world, this would be the auto-increment key. How do I do it in MapReduce?
>
> Thank you
>
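
For reference, here is a sketch of the single-reducer numbering Mark
describes above: recordId is an instance variable that is incremented for
every output record, and the driver forces a single reduce task. The class
name is an illustrative assumption and the code is not from the original
thread.

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class ConsecutiveIdReducer extends Reducer<Text, Text, LongWritable, Text> {

  // Shared across all keys; correct only because there is exactly one reducer.
  private long recordId = 0;

  @Override
  protected void reduce(Text key, Iterable<Text> values, Context context)
      throws IOException, InterruptedException {
    for (Text value : values) {
      context.write(new LongWritable(++recordId), value);  // 1, 2, 3, ...
    }
  }
}

// In the driver, force everything through one reduce task:
//   job.setNumReduceTasks(1);  // new-API equivalent of conf.setNumReduceTasks(1)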