The MapReduce job I run is a map-only job: it has no reducers and writes its
data directly to HDFS in the mapper.
Could this be the reason why there's no value for FILE_BYTES_READ?
If so, is there any easy way to get the total input data size?
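One idea, in case there is no built-in counter for this: have the mapper sum the
serialized size of each Result into a custom counter, so the total input size
shows up next to the built-in filesystem counters. A rough sketch against the
HBase 0.94-era API (the ScanStats/INPUT_BYTES names are just placeholders I made
up, not anything built in):

    import java.io.IOException;

    import org.apache.hadoop.hbase.KeyValue;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.TableMapper;
    import org.apache.hadoop.io.NullWritable;

    public class ScanMapper extends TableMapper<NullWritable, NullWritable> {
        @Override
        protected void map(ImmutableBytesWritable row, Result value, Context context)
                throws IOException, InterruptedException {
            long bytes = 0;
            // Sum the serialized length of every KeyValue in this row.
            for (KeyValue kv : value.raw()) {
                bytes += kv.getLength();
            }
            // Custom counter; it appears alongside the built-in counters
            // in the job status report.
            context.getCounter("ScanStats", "INPUT_BYTES").increment(bytes);

            // ... existing processing / direct HDFS writes go here ...
        }
    }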
On Thu, Sep 5, 2013 at 2:46 PM, Haijia Zhou <[EMAIL PROTECTED]> wrote:
> Basically I have a MapReduce job that scans an HBase table and does some
> processing. After the job finishes, I only get three filesystem counters:
> HDFS_BYTES_READ, HDFS_BYTES_WRITTEN and FILE_BYTES_WRITTEN.
> The value of HDFS_BYTES_READ is not very useful here because it reflects the
> size of the .META. table, not the size of the input records.
> I am looking for the FILE_BYTES_READ counter, but somehow it's missing from
> the job status report.
> Does anyone know what I might be missing here?
> P.S. The filesystem counters from the job status report (Map | Reduce | Total):
>     340,124             0    340,124
>     190,431,329         0    190,431,329
>     272,538,467,123     0    272,538,467,123
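For completeness: once the job finishes, a custom counter like the one in the
sketch above can be read back on the client side. Again, just a rough sketch
(ScanStats/INPUT_BYTES are the placeholder names from above):

    import org.apache.hadoop.mapreduce.Counter;
    import org.apache.hadoop.mapreduce.Job;

    public class CounterCheck {
        // Assumes job.waitForCompletion(true) has already returned.
        public static long totalInputBytes(Job job) throws Exception {
            Counter c = job.getCounters().findCounter("ScanStats", "INPUT_BYTES");
            return c == null ? 0L : c.getValue();
        }
    }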