
Re: dfs.block.size
Can someone please clarify whether parameters like dfs.block.size and
mapred.tasktracker.map.tasks.maximum are cluster-wide settings only, or
whether they can be set per job in the client's job configuration?
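As far as I know, the two behave differently: dfs.block.size is applied by the HDFS client at file-creation time, so it can be overridden per job, while mapred.tasktracker.map.tasks.maximum is read by each TaskTracker daemon at startup and is effectively cluster-wide. A rough sketch of a per-job override at submit time (the jar name, class name, and paths below are placeholders, and the job must use Tool/GenericOptionsParser for -D to be picked up):

```shell
# Hypothetical per-job override: ask the HDFS client to create output
# files with a 128 MB block size for this submission only.
hadoop jar myjob.jar MyJob -D dfs.block.size=134217728 input/ output/
```

The same effect should be achievable programmatically with conf.set("dfs.block.size", "134217728") on the job's Configuration before output files are created.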

On Sat, Feb 25, 2012 at 5:43 PM, Mohit Anchlia <[EMAIL PROTECTED]> wrote:

> If I want to change the block size, can I set it via Configuration in the
> MapReduce job when writing the sequence file, or does it need to be a
> cluster-wide setting in the .xml files?
> Also, is there a way to check the blocks of a given file?
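For inspecting the blocks of a file already in HDFS, something along these lines should work (the path below is a placeholder):

```shell
# List the file's blocks, their sizes, and the datanodes holding them.
hadoop fsck /user/mohit/data.seq -files -blocks -locations

# Print just the block size the file was written with.
hadoop fs -stat %o /user/mohit/data.seq
```

fsck reports each block's length and replica locations, which also confirms whether a per-job block-size override actually took effect.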