Hadoop I/O buffer size
Hi guys,

   I need help figuring out the Hadoop I/O buffer size. This link:
http://developer.yahoo.com/blogs/hadoop/posts/2009/08/the_anatomy_of_hadoop_io_pipel/
implies that *io.file.buffer.size* is 4K or 64K, which makes me wonder: is that the
buffer that gets filled with records from disk when I have the following loop in my code?

while (reader.next(key, value)) { /* process one record */ }

If yes, will setting io.file.buffer.size to 64M fetch 64 MB of
records into memory when I call reader.next()?

If no, then how many records are fetched when I make a single
reader.next(key, value) call?

Thank you,
Mark