Re: dfs.block.size
madhu phatak 2012-02-28, 08:42
You can use FileSystem.getFileStatus(Path p); the returned FileStatus gives you
the block size specific to that file.
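
A minimal sketch of that approach (the class name and argument handling are just
placeholders, not from this thread):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BlockSizeCheck {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical usage: pass the HDFS path as the first argument.
        Path path = new Path(args[0]);
        FileSystem fs = path.getFileSystem(conf);
        FileStatus status = fs.getFileStatus(path);
        // getBlockSize() reports the block size this particular file was written
        // with, which can differ from the cluster-wide dfs.block.size default.
        System.out.println(path + ": " + status.getBlockSize() + " bytes per block");
      }
    }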

On Tue, Feb 28, 2012 at 2:50 AM, Kai Voigt <[EMAIL PROTECTED]> wrote:

> "hadoop fsck <filename> -blocks" is something that I think of quickly.
>
> http://hadoop.apache.org/common/docs/current/commands_manual.html#fsck has more details
>
> Kai
>
> On 28.02.2012 at 02:30, Mohit Anchlia wrote:
>
> > How do I verify the block size of a given file? Is there a command?
> >
> > On Mon, Feb 27, 2012 at 7:59 AM, Joey Echeverria <[EMAIL PROTECTED]> wrote:
> >
> >> dfs.block.size can be set per job.
> >>
> >> mapred.tasktracker.map.tasks.maximum is per tasktracker.
> >>
> >> -Joey
> >>
> >> On Mon, Feb 27, 2012 at 10:19 AM, Mohit Anchlia <[EMAIL PROTECTED]> wrote:
> >>> Can someone please suggest whether parameters like dfs.block.size and
> >>> mapred.tasktracker.map.tasks.maximum are cluster-wide settings only, or
> >>> can these be set per client job configuration?
> >>>
> >>> On Sat, Feb 25, 2012 at 5:43 PM, Mohit Anchlia <[EMAIL PROTECTED]> wrote:
> >>>
> >>>> If I want to change the block size, can I use Configuration in the
> >>>> MapReduce job and set it when writing to the sequence file, or does it
> >>>> need to be a cluster-wide setting in the .xml files?
> >>>>
> >>>> Also, is there a way to check the block size of a given file?
> >>>>
> >>
> >>
> >>
> >> --
> >> Joseph Echeverria
> >> Cloudera, Inc.
> >> 443.305.9434
> >>
>
> --
> Kai Voigt
> [EMAIL PROTECTED]
>
>
>
>
>
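
To make Joey's point concrete: a minimal sketch of setting dfs.block.size in the
client-side Configuration before writing a sequence file (the class name, output
path, and the 128 MB value are just placeholders, not from this thread):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    public class WriteWithCustomBlockSize {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Client-side override: files created through this Configuration use
        // this block size; existing files and other clients keep the cluster default.
        conf.setLong("dfs.block.size", 128L * 1024 * 1024); // 128 MB, illustrative

        FileSystem fs = FileSystem.get(conf);
        Path out = new Path(args[0]); // hypothetical output path
        SequenceFile.Writer writer = SequenceFile.createWriter(
            fs, conf, out, Text.class, IntWritable.class);
        try {
          writer.append(new Text("key"), new IntWritable(1));
        } finally {
          writer.close();
        }
      }
    }

The same works from a MapReduce driver: set the property on the job's Configuration
before submitting it. mapred.tasktracker.map.tasks.maximum, by contrast, is read by
each tasktracker when it starts, so it cannot be overridden per job.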
--
Join me at http://hadoopworkshop.eventbrite.com/