Re: mapred.min.split.size
I am using the ChukwaStorage loader from Chukwa 0.3. Is it the loader's responsibility to deal with input splits?

On Aug 5, 2010, at 4:14 PM, Richard Ding wrote:

> I misunderstood your earlier question. If you have one large file, setting the mapred.min.split.size property will help increase the file split size. Pig will pass system properties to Hadoop. What loader are you using?
>
> Thanks,
> -Richard
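
A minimal sketch of passing the property through as described above, assuming the bin/pig wrapper forwards -D arguments to the JVM and that your Pig version's 'set' command accepts arbitrary job properties (the script name and the 256 MB value are only examples):

    # on the command line, as a JVM system property
    pig -Dmapred.min.split.size=268435456 myscript.pig

    # or from inside the script, if your Pig version supports setting
    # arbitrary job properties with the 'set' command
    set mapred.min.split.size 268435456;

Whether a larger split size actually takes effect still depends on how the loader produces splits, as the rest of the thread discusses.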
>
> -----Original Message-----
> From: Corbin Hoenes [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, August 05, 2010 1:22 PM
> To: [EMAIL PROTECTED]
> Subject: Re: mapred.min.split.size
>
> So what does Pig do when I have a 5 GB file?  Does it simply hardcode the split size to the block size?  Is there no way to tell it to operate on a larger split size?
>
>
> On Jul 27, 2010, at 3:41 PM, Richard Ding wrote:
>
>> For Pig loaders, each split can contain at most one file, no matter what the split size is.
>>
>> You can concatenate the input files before loading them.
>>
>> Thanks,
>> -Richard
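
One way to do the concatenation suggested above, sketched with standard HDFS shell commands (the paths are only examples, and routing the data through the local filesystem like this assumes it fits on local disk):

    # merge the many small part files into a single local file...
    hadoop fs -getmerge /data/input /tmp/merged
    # ...then push the merged file back into HDFS for Pig to load
    hadoop fs -mkdir /data/input-merged
    hadoop fs -put /tmp/merged /data/input-merged/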
>> -----Original Message-----
>> From: Corbin Hoenes [mailto:[EMAIL PROTECTED]]
>> Sent: Tuesday, July 27, 2010 2:09 PM
>> To: [EMAIL PROTECTED]
>> Subject: mapred.min.split.size
>>
>> Is there a way to set the mapred.min.split.size property in Pig? I set it, but it doesn't seem to have changed the mappers' HDFS_BYTES_READ counter.  My mappers are finishing in ~10 seconds, and I have ~20,000 of them.
>>
>>
>>
>