On 2013-5-10, at 13:27, Harsh J <[EMAIL PROTECTED]> wrote:
> Are you looking to decrease it to get more parallel map tasks out of
> the small files? Are you currently CPU bound on processing these small
> files?
> On Thu, May 9, 2013 at 9:12 PM, YouPeng Yang <[EMAIL PROTECTED]> wrote:
>> Hi all,
>> I am going to set up a new Hadoop environment. Because there are lots
>> of small files, I would like to change the default block size rather
>> than merging the files into larger ones (e.g. using SequenceFiles).
>> Are there any bad influences or issues I should be aware of?
> Harsh J
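
[For context, not part of the original thread: the block size being discussed is the HDFS `dfs.block.size` property (renamed `dfs.blocksize` in Hadoop 2.x). A minimal sketch of a cluster-wide override in hdfs-site.xml, assuming a Hadoop 1.x-era cluster as in this 2013 thread:]

```xml
<!-- hdfs-site.xml: default block size for newly written files -->
<property>
  <name>dfs.block.size</name>
  <!-- 16 MB instead of the 64 MB default; value is in bytes -->
  <value>16777216</value>
</property>
```

[The value can also be overridden per write via the generic `-D` option, e.g. `hadoop fs -D dfs.block.size=16777216 -put local.txt /dest`, without changing the cluster-wide default.]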