I think you will need to implement your own RecordReader/InputFormat
for this and use it with a LoadFunc. I'm not sure whether Hadoop ships a
Reader that you could reuse here.
How do you handle the case where a file exceeds the block size?
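Something along these lines might work (an untested sketch using Hadoop's
new-style mapreduce API; the class names are mine). Marking the format
non-splittable means a file larger than one block is still read as a single
record, at the cost of possibly non-local reads:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

// Sketch: emits one record per input file, with the entire file
// contents as a BytesWritable value.
public class WholeFileInputFormat
    extends FileInputFormat<NullWritable, BytesWritable> {

  @Override
  protected boolean isSplitable(JobContext context, Path file) {
    // Never split: one file -> one record, even past the block size.
    return false;
  }

  @Override
  public RecordReader<NullWritable, BytesWritable> createRecordReader(
      InputSplit split, TaskAttemptContext context) {
    return new WholeFileRecordReader();
  }

  static class WholeFileRecordReader
      extends RecordReader<NullWritable, BytesWritable> {

    private FileSplit split;
    private Configuration conf;
    private final BytesWritable value = new BytesWritable();
    private boolean processed = false;

    @Override
    public void initialize(InputSplit split, TaskAttemptContext context) {
      this.split = (FileSplit) split;
      this.conf = context.getConfiguration();
    }

    @Override
    public boolean nextKeyValue() throws IOException {
      if (processed) {
        return false;
      }
      // Note: buffering the whole file caps you at ~2GB per file
      // (int-sized byte array) and at whatever heap the task has.
      byte[] contents = new byte[(int) split.getLength()];
      Path file = split.getPath();
      FileSystem fs = file.getFileSystem(conf);
      FSDataInputStream in = null;
      try {
        in = fs.open(file);
        IOUtils.readFully(in, contents, 0, contents.length);
        value.set(contents, 0, contents.length);
      } finally {
        IOUtils.closeStream(in);
      }
      processed = true;
      return true;
    }

    @Override
    public NullWritable getCurrentKey() { return NullWritable.get(); }

    @Override
    public BytesWritable getCurrentValue() { return value; }

    @Override
    public float getProgress() { return processed ? 1.0f : 0.0f; }

    @Override
    public void close() { }
  }
}
```

Then a LoadFunc would return this from getInputFormat() and, in getNext(),
wrap the BytesWritable's bytes in a DataByteArray inside a single-field
tuple, giving you one row per file.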
On Thu, Jun 21, 2012 at 2:34 PM, Jonathan Coveney <[EMAIL PROTECTED]> wrote:
> It can even be a bytearray. Basically I have a bunch of files, and I want
> one file -> one row. Is there an easy way to do this? Or will I need to
> provide a special fileinputformat etc?