issues with decrease the default.block.size


YouPeng Yang 2013-05-09, 15:42
Harsh J 2013-05-10, 05:27
yypvsxf19870706 2013-05-10, 14:59
Harsh J 2013-05-10, 15:24
Re: issues with decrease the default.block.size
The block size governs allocation, not how much storage a file actually consumes on disk.
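In other words, a 1 MB file under a 128 MB block size still occupies only about 1 MB (times replication) on DataNode disks; the block size is just the unit the NameNode allocates and schedules around. A minimal sketch that makes the distinction visible, assuming the Hadoop Java client and a reachable HDFS; the /data/small path is illustrative, not from this thread:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockVsDiskUsage {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // getLen() is what actually occupies DataNode disk (times replication);
    // getBlockSize() is only the per-file allocation/scheduling unit.
    for (FileStatus st : fs.listStatus(new Path("/data/small"))) {
      System.out.printf("%s  len=%d  blockSize=%d%n",
          st.getPath(), st.getLen(), st.getBlockSize());
    }
  }
}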

Thanks & Regards,

Shashwat Shriparv

On Fri, May 10, 2013 at 8:54 PM, Harsh J <[EMAIL PROTECTED]> wrote:

> Thanks. I failed to add: It should be okay to do if those cases are
> true and the cluster seems under-utilized right now.
>
> On Fri, May 10, 2013 at 8:29 PM, yypvsxf19870706 <[EMAIL PROTECTED]> wrote:
> > Hi Harsh,
> >
> > Yep.
> >
> > Regards
> >
> > Sent from my iPhone
> >
> > On 2013-5-10, at 13:27, Harsh J <[EMAIL PROTECTED]> wrote:
> >
> >> Are you looking to decrease it to get more parallel map tasks out of
> >> the small files? Are you currently CPU-bound on processing these
> >> small files?
> >>
> >> On Thu, May 9, 2013 at 9:12 PM, YouPeng Yang <[EMAIL PROTECTED]> wrote:
> >>> Hi all,
> >>>
> >>>     I am going to set up a new Hadoop environment. Because there are
> >>> lots of small files, I would like to change the default.block.size to
> >>> 16MB rather than merging the files into larger ones (e.g. using
> >>> SequenceFiles).
> >>>     I want to ask: are there any bad influences or issues?
> >>>
> >>> Regards
> >>
> >>
> >>
> >> --
> >> Harsh J
>
>
>
> --
> Harsh J
>
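For reference, the property the original question calls default.block.size is dfs.blocksize in current HDFS releases (dfs.block.size in older ones); cluster-wide it would normally be set in hdfs-site.xml, and it can also be set per file at create time. A minimal sketch of both, assuming the Hadoop Java client; the 16 MB value comes from the thread, while the output path and replication factor are illustrative:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SixteenMbBlocks {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Client-side default for files this process writes; the cluster-wide
    // equivalent is the dfs.blocksize property in hdfs-site.xml.
    conf.setLong("dfs.blocksize", 16L * 1024 * 1024); // 16 MB
    FileSystem fs = FileSystem.get(conf);

    // The same value can also be passed explicitly per file at create() time:
    try (FSDataOutputStream out = fs.create(
        new Path("/data/sample.txt"),              // illustrative path
        true,                                       // overwrite
        conf.getInt("io.file.buffer.size", 4096),   // buffer size
        (short) 3,                                  // replication factor
        16L * 1024 * 1024)) {                       // 16 MB block size
      out.writeBytes("hello\n");
    }
  }
}

Note that, per the allocation point above, files already smaller than 16 MB keep the same disk footprint either way; the main effect of a smaller block size is on how larger files split into map tasks, and the per-file NameNode metadata overhead that motivates SequenceFile-style merging remains regardless.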