Pig, mail # user - How can I split the data with more reducers?


Re: How can I split the data with more reducers?
Dmitriy Ryaboy 2012-09-17, 05:01
Ok, then it's not POSplit that's holding the memory -- it does not
participate in any of the reduce stages, according to the plan you
attached.

To set parallelism, you can hardcode it on every operation that causes
an MR boundary, with the exception of "group all" and "limit", since
those by definition require a single reducer. So you can alter your
script to explicitly request higher parallelism than what is
estimated: "join .. parallel $P", "group by .. parallel $P", "order
... parallel $P", etc.
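
For instance, a quick sketch (the relation and field names here are
just placeholders, and $P would be passed in with something like
-p P=20):

set default_parallel $P;  -- fallback for reduce stages you don't hint explicitly
grouped = group clicks by country parallel $P;
joined  = join clicks by user_id, users by user_id parallel $P;
ordered = order clicks by ts parallel $P;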

I would recommend two things:
1) Make sure you are running the latest trunk, and have enabled
in-memory aggregation ( set pig.exec.mapPartAgg true; set
pig.exec.mapPartAgg.minReduction 3 ). I just made some significant
improvements to Distinct's Initial phase (not requiring it to register
with SpillableMemoryManager at all), and also improved in-mem
aggregation performance.

2) It seems like you are doing a lot of "group, distinct the group,
count" type operations. If you do have a distinct group that is very
large, loading it all into RAM is bound to cause problems. When the
size of the distinct sets is expected to be fairly high, we usually
recommend a different pattern for count(distinct x):

Instead of:

results = foreach (group data by country) {
  distinct_ids = distinct data.id;
  generate group as country, COUNT(distinct_ids) as num_dist,
    COUNT(data) as total;
}

Do the following:

results_per_id = foreach (group data by (country, id))
  generate flatten(group) as (country, id), COUNT(data) as num_repeats;
results = foreach (group results_per_id by country)
  generate group as country, COUNT(results_per_id) as num_dist,
    SUM(results_per_id.num_repeats) as total;

This will introduce an extra MR step, but it's much more scalable when
you get into millions of distincts in a single dimension.
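
If you go this route, you should be able to put the parallel hint from
above on both groupings as well -- same pattern, just with the hint
added ($P again being a placeholder):

results_per_id = foreach (group data by (country, id) parallel $P)
  generate flatten(group) as (country, id), COUNT(data) as num_repeats;
results = foreach (group results_per_id by country parallel $P)
  generate group as country, COUNT(results_per_id) as num_dist,
    SUM(results_per_id.num_repeats) as total;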

D

On Sun, Sep 16, 2012 at 2:18 AM, Haitao Yao <[EMAIL PROTECTED]> wrote:
> The map output of the first MR job is over 500MB, and only 1 reducer processes it, so an OutOfMemoryError is thrown.
>
> After setting the child memory to 1GB, the first job succeeded. But most of our jobs do not need that much memory; 512MB is enough if I can set the reducer count to more than 1.
>
>
>
>
> Haitao Yao
> [EMAIL PROTECTED]
> weibo: @haitao_yao
> Skype:  haitao.yao.final
>
> On 2012-9-16, at 5:05 PM, Haitao Yao wrote:
>
>> Here's the explain result, compressed. (The Apache mail server does not allow big attachments.)
>> <explain.tar.gz>
>>
>>
>> Haitao Yao
>> [EMAIL PROTECTED]
>> weibo: @haitao_yao
>> Skype:  haitao.yao.final
>>
>> On 2012-9-16, at 4:41 PM, Dmitriy Ryaboy wrote:
>>
>>> Still would like to see the script or the explain plan..
>>>
>>> D
>>>
>>> On Sat, Sep 15, 2012 at 7:50 PM, Haitao Yao <[EMAIL PROTECTED]> wrote:
>>>> No, I also thought it was a mapper, but it surely is a reducer. All the mappers succeeded and the reducer failed.
>>>>
>>>>
>>>>
>>>> Haitao Yao
>>>> [EMAIL PROTECTED]
>>>> weibo: @haitao_yao
>>>> Skype:  haitao.yao.final
>>>>
>>>> On 2012-9-16, at 10:08 AM, Haitao Yao wrote:
>>>>
>>>>> Hi,
>>>>>      I've encountered a problem: the job failed because POSplit retained too much memory in the reducer. How can I specify more reducers for the spill?
>>>>>
>>>>>      Here's the screen snapshot of the Heap dump.
>>>>>      <aa.jpg>
>>>>>
>>>>>
>>>>> And here's the snippet of my split script:
>>>>>
>>>>>      split RawData into AURawData if type == 2, NURawData if type == 1, InRawData if type == 9, GCData if type == 61, HCData if type == 71, TutorialRawData if type == 3 or type == 15;
>>>>>
>>>>> There are 3 similar split clauses in my script. The reducer count is always 1. How can I increase it?
>>>>>
>>>>> Thanks.
>>>>>
>>>>>
>>>>>
>>>>> Haitao Yao
>>>>> [EMAIL PROTECTED]
>>>>> weibo: @haitao_yao
>>>>> Skype:  haitao.yao.final
>>>>>
>>>>
>>
>