HBase >> mail # user >> FILE_BYTES_READ counter missing for HBase mapreduce job


Re: FILE_BYTES_READ counter missing for HBase mapreduce job
Additional info:
The MapReduce job I run is a map-only job. It has no reducers, and it
writes data directly to HDFS in the mapper.
 Could this be the reason why there's no value for FILE_BYTES_READ?
 If so, is there an easy way to get the total input data size?
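As a follow-up to the question above: since FILE_BYTES_READ only tracks local (intermediate) file I/O, one workaround is a user-defined counter that the mapper increments with the approximate size of each input row. The sketch below assumes the HBase 0.94-era `TableMapper` API; the counter group and name (`"custom"`, `"INPUT_BYTES"`) and the class name are illustrative, not from the original thread.

```java
import java.io.IOException;

import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;

// Hypothetical map-only mapper that tracks total input size via a
// custom counter, since FILE_BYTES_READ stays absent when there is
// no reduce phase spilling intermediate data to local disk.
public class SizeCountingMapper
    extends TableMapper<NullWritable, Text> {

  @Override
  protected void map(ImmutableBytesWritable row, Result value,
                     Context context)
      throws IOException, InterruptedException {
    // Sum the serialized length of every KeyValue in this row.
    long bytes = 0;
    for (KeyValue kv : value.raw()) {
      bytes += kv.getLength();
    }
    // "custom" / "INPUT_BYTES" are illustrative names; the total
    // shows up alongside the built-in counters in the job report.
    context.getCounter("custom", "INPUT_BYTES").increment(bytes);

    // ... existing processing and direct HDFS writes go here ...
  }
}
```

After the job completes, the aggregated value can be read back with `job.getCounters().findCounter("custom", "INPUT_BYTES").getValue()`. This only approximates the input size (it measures serialized KeyValue bytes, not on-disk HFile bytes), but it is independent of the filesystem counters.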

 Thanks
Haijia
On Thu, Sep 5, 2013 at 2:46 PM, Haijia Zhou <[EMAIL PROTECTED]> wrote:

> Hi,
>  Basically I have a MapReduce job that scans an HBase table and does some
> processing. After the job finishes, I only get three filesystem counters:
> HDFS_BYTES_READ, HDFS_BYTES_WRITTEN and FILE_BYTES_WRITTEN.
>  The value of HDFS_BYTES_READ is not very useful here because it shows the
> size of the .META file, not the size of the input records.
>  I am looking for the FILE_BYTES_READ counter, but somehow it's missing
> from the job status report.
>
>  Does anyone know what I might miss here?
>
>  Thanks
> Haijia
>
> P.S. The job status report (Map / Reduce / Total):
>
>  FileSystemCounters
>  HDFS_BYTES_READ         340,124            0    340,124
>  FILE_BYTES_WRITTEN      190,431,329        0    190,431,329
>  HDFS_BYTES_WRITTEN      272,538,467,123    0    272,538,467,123
>