Re: issues with decreasing the default.block.size
Thanks. One thing I forgot to add: it should be okay to do if those
cases are true and the cluster seems under-utilized right now.
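
For anyone finding this thread later: the change itself is one property
in hdfs-site.xml. A minimal sketch, assuming a Hadoop 2.x cluster where
the property is named dfs.blocksize (1.x releases used dfs.block.size):

    <!-- hdfs-site.xml: default block size for newly written files.
         16 MB = 16777216 bytes. Files already in HDFS keep the block
         size they were created with. -->
    <property>
      <name>dfs.blocksize</name>
      <value>16777216</value>
    </property>

Each block normally becomes one input split, so the smaller size yields
more map tasks for files larger than 16 MB; a file smaller than the
block size occupies a single block either way.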

On Fri, May 10, 2013 at 8:29 PM, yypvsxf19870706
<[EMAIL PROTECTED]> wrote:
> Hi Harsh
>
> Yep.
>
> Regards
>
> Sent from my iPhone
>
> On 2013-05-10, at 13:27, Harsh J <[EMAIL PROTECTED]> wrote:
>
>> Are you looking to decrease it to get more parallel map tasks out of
>> the small files? Are you currently CPU bound on processing these small
>> files?
>>
>> On Thu, May 9, 2013 at 9:12 PM, YouPeng Yang <[EMAIL PROTECTED]> wrote:
>>> Hi all,
>>>
>>>    I am going to set up a new Hadoop environment. Because there are
>>> lots of small files, I would like to change the default.block.size to
>>> 16MB rather than first merging the files into larger ones (e.g. using
>>> SequenceFiles; see the sketch at the bottom of this page).
>>>    Are there any bad influences or issues I should be aware of?
>>>
>>> Regards
>>
>> --
>> Harsh J

--
Harsh J
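
The SequenceFile alternative mentioned in the thread packs many small
files into a single container file, keyed by the original file name, so
HDFS holds a few large files instead of many small ones. A minimal
sketch using the SequenceFile.Writer API; the class name, paths, and
key/value layout are illustrative, not from the thread:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.BytesWritable;
    import org.apache.hadoop.io.IOUtils;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    public class SmallFilePacker {
      public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path inDir = new Path(args[0]);   // directory full of small files
        Path outFile = new Path(args[1]); // single packed SequenceFile

        SequenceFile.Writer writer = SequenceFile.createWriter(
            fs, conf, outFile, Text.class, BytesWritable.class);
        try {
          for (FileStatus stat : fs.listStatus(inDir)) {
            if (stat.isDir()) {
              continue; // skip subdirectories
            }
            byte[] contents = new byte[(int) stat.getLen()];
            FSDataInputStream in = fs.open(stat.getPath());
            try {
              in.readFully(0, contents);
            } finally {
              in.close();
            }
            // Key: original file name; value: raw bytes of the file.
            writer.append(new Text(stat.getPath().getName()),
                          new BytesWritable(contents));
          }
        } finally {
          IOUtils.closeStream(writer);
        }
      }
    }

A job can then read the packed file with SequenceFileInputFormat and
recover each small file as one key/value pair, avoiding the per-file
map-task and NameNode overhead that motivated the question.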