HBase >> mail # user >> PutSortReducer memory threshold

PutSortReducer memory threshold
Looking at the code of PutSortReducer, I see that if my key has Puts whose total size exceeds the memory threshold, the iteration stops and all Puts up to the threshold point are written to the context.
If the iterator still has more Puts, context.write(null, null) is executed.
Does this tell the bulk load tool to re-execute the reduce from that point in some way (if so, how?), or is the rest of the data simply omitted?
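To make the control flow I am asking about concrete, here is a minimal, self-contained sketch of the pattern as I read it in PutSortReducer: an outer loop keeps consuming the iterator, an inner loop accumulates until the size threshold, and each batch is flushed before continuing. The class and method names below (`ThresholdBatching`, `batchByThreshold`) are hypothetical stand-ins, not HBase API; the integers stand in for Put sizes.

```java
import java.util.*;

public class ThresholdBatching {
    // Hypothetical stand-in for PutSortReducer's loop structure: consume an
    // iterator in batches bounded by a size threshold, flushing each batch
    // at the threshold point instead of dropping the remainder. In the real
    // reducer, context.write(null, null) would be emitted between batches
    // when the iterator still has entries.
    static List<List<Integer>> batchByThreshold(Iterator<Integer> iter, long threshold) {
        List<List<Integer>> batches = new ArrayList<>();
        while (iter.hasNext()) {                      // outer loop: nothing is omitted
            List<Integer> batch = new ArrayList<>();
            long curSize = 0;
            while (iter.hasNext() && curSize < threshold) {
                int size = iter.next();               // each value plays the role of a Put's size
                batch.add(size);
                curSize += size;
            }
            batches.add(batch);                       // "flush" everything up to the threshold point
            // real code: if (iter.hasNext()) context.write(null, null);
        }
        return batches;
    }

    public static void main(String[] args) {
        // Five "Puts" of size 4 against a threshold of 8 should yield
        // batches of sizes 2, 2, and 1 -- no data lost at the boundary.
        List<Integer> putSizes = Arrays.asList(4, 4, 4, 4, 4);
        List<List<Integer>> out = batchByThreshold(putSizes.iterator(), 8);
        System.out.println(out.size());
        System.out.println(out.get(0));
    }
}
```

If the outer loop really works this way, the null/null write would be a boundary marker rather than a signal to discard or re-run anything, but I would like confirmation of what the bulk load tool does with it.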