Hadoop user mailing list: dfs.block.size

Re: dfs.block.size
How do I verify the block size of a given file? Is there a command?
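
(The thread never answers this directly; two standard HDFS shell checks should do it, where /path/to/file is a placeholder:)

  # Print the file's block size in bytes (%o is the block-size format token)
  hadoop fs -stat %o /path/to/file

  # List every block of the file, with sizes and locations
  hadoop fsck /path/to/file -files -blocks -locations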

On Mon, Feb 27, 2012 at 7:59 AM, Joey Echeverria <[EMAIL PROTECTED]> wrote:

> dfs.block.size can be set per job.
>
> mapred.tasktracker.map.tasks.maximum is per tasktracker.
>
> -Joey
>
> On Mon, Feb 27, 2012 at 10:19 AM, Mohit Anchlia <[EMAIL PROTECTED]>
> wrote:
> > Can someone please clarify whether parameters like dfs.block.size and
> > mapred.tasktracker.map.tasks.maximum are cluster-wide settings only, or
> > can these be set per client job configuration?
> >
> > On Sat, Feb 25, 2012 at 5:43 PM, Mohit Anchlia <[EMAIL PROTECTED]>
> > wrote:
> >
> >> If I want to change the block size, can I use Configuration in the
> >> mapreduce job and set it when writing to the sequence file, or does it
> >> need to be a cluster-wide setting in the .xml files?
> >>
> >> Also, is there a way to check the block size of a given file?
> >>
>
>
>
> --
> Joseph Echeverria
> Cloudera, Inc.
> 443.305.9434
>
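
To make the per-job point concrete, here is a minimal sketch (not from the thread) of setting dfs.block.size through the job Configuration before writing a sequence file; the class name, output path, and key/value types are illustrative:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class BlockSizeExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Request a 128 MB block size for files written with this conf;
    // this overrides the cluster default for these writes only.
    conf.setLong("dfs.block.size", 128L * 1024 * 1024);

    FileSystem fs = FileSystem.get(conf);
    SequenceFile.Writer writer = SequenceFile.createWriter(
        fs, conf, new Path("/tmp/example.seq"),  // illustrative output path
        LongWritable.class, Text.class);
    try {
      writer.append(new LongWritable(1L), new Text("value"));
    } finally {
      writer.close();
    }
  }
}

mapred.tasktracker.map.tasks.maximum, by contrast, is read by each TaskTracker from its own mapred-site.xml at startup, which is why it cannot be overridden per job.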