Hadoop >> mail # user >> custom FileInputFormat class
RE: custom FileInputFormat class
Hi John,

You can extend FileInputFormat (or implement InputFormat directly), and then you need to implement the two methods below.

1. InputSplit[] getSplits(JobConf job, int numSplits) : Logically splits the input files for the job. If the default FileInputFormat.getSplits(JobConf job, int numSplits) suits your requirement, you can reuse it as-is; otherwise, override it based on your needs.

2. RecordReader<K,V> getRecordReader(InputSplit split, JobConf job, Reporter reporter) : Returns a reader that turns an input split into key/value records.
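The two steps above could be sketched as follows against the old org.apache.hadoop.mapred API used in this thread. This is only a sketch: the class names (IntLongInputFormat, IntLongRecordReader) and the 12-byte fixed record layout (one int key followed by one long value) are assumptions for illustration, not anything defined by Hadoop or by the original poster.

```java
import java.io.IOException;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileSplit;
import org.apache.hadoop.mapred.InputSplit;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RecordReader;
import org.apache.hadoop.mapred.Reporter;

public class IntLongInputFormat extends FileInputFormat<IntWritable, LongWritable> {

    // Step 1: getSplits(...) is inherited from FileInputFormat unchanged;
    // only the record reader (step 2) is custom here.
    @Override
    public RecordReader<IntWritable, LongWritable> getRecordReader(
            InputSplit split, JobConf job, Reporter reporter) throws IOException {
        return new IntLongRecordReader((FileSplit) split, job);
    }

    // Reads fixed-width binary records: a 4-byte int key, then an 8-byte long value.
    static class IntLongRecordReader implements RecordReader<IntWritable, LongWritable> {
        private static final int RECORD_SIZE = 4 + 8;
        private final FSDataInputStream in;
        private final long start;  // offset of the first record owned by this split
        private final long end;    // records starting before this offset are ours
        private long pos;

        IntLongRecordReader(FileSplit split, JobConf job) throws IOException {
            FileSystem fs = split.getPath().getFileSystem(job);
            in = fs.open(split.getPath());
            // Convention: a record belongs to the split containing its first
            // byte, so round the split's start up to the next record boundary.
            long s = split.getStart();
            long rem = s % RECORD_SIZE;
            start = (rem == 0) ? s : s + (RECORD_SIZE - rem);
            end = split.getStart() + split.getLength();
            in.seek(start);
            pos = start;
        }

        public boolean next(IntWritable key, LongWritable value) throws IOException {
            if (pos >= end) return false;
            key.set(in.readInt());
            value.set(in.readLong());
            pos += RECORD_SIZE;
            return true;
        }

        public IntWritable createKey() { return new IntWritable(); }
        public LongWritable createValue() { return new LongWritable(); }
        public long getPos() { return pos; }
        public float getProgress() {
            return end == start ? 1.0f
                    : Math.min(1.0f, (pos - start) / (float) (end - start));
        }
        public void close() throws IOException { in.close(); }
    }
}
```

Because the records are fixed width, the inherited getSplits is safe to keep: any split boundary can be realigned to a record boundary by the reader, as done in the constructor above.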

From: John Hancock [[EMAIL PROTECTED]]
Sent: Thursday, May 17, 2012 3:40 PM
Subject: custom FileInputFormat class


Can anyone on the list point me in the right direction as to how to write
my own FileInputFormat class?

Perhaps this is not even the way I should go, but my goal is to write a
MapReduce job that gets its input from a binary file of integers and longs.
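The parsing that such an input format would wrap can be shown self-contained with plain java.io. Purely as an assumed layout for the sketch (the thread does not specify one): fixed-width 12-byte records, one int followed by one long, in big-endian order as written by DataOutput.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class BinaryRecordDemo {
    // Assumed record layout: 4-byte int key + 8-byte long value, big-endian.
    static final int RECORD_SIZE = 4 + 8;

    // Parse fixed-width (int, long) records; a trailing partial record is ignored.
    static List<long[]> readRecords(byte[] data) throws IOException {
        List<long[]> records = new ArrayList<>();
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        for (int off = 0; off + RECORD_SIZE <= data.length; off += RECORD_SIZE) {
            int key = in.readInt();
            long value = in.readLong();
            records.add(new long[] { key, value });
        }
        return records;
    }

    public static void main(String[] args) throws IOException {
        // Build a tiny two-record binary "file" in memory.
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        out.writeInt(1); out.writeLong(100L);
        out.writeInt(2); out.writeLong(200L);

        List<long[]> records = readRecords(bytes.toByteArray());
        System.out.println(records.size());     // 2
        System.out.println(records.get(1)[1]);  // 200
    }
}
```

Inside a RecordReader the same readInt/readLong pair would run once per next() call against the split's FSDataInputStream instead of a byte array.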