Yang 2012-07-12, 03:46
Harsh J 2012-07-12, 05:05
Re: can't disable speculative execution?
Yang 2012-07-12, 05:07
yes, let me try that.
changing the max mapper slots actually requires changing the hadoop config,
since I just found that
it's a "final" param
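[Editor's note: for reference, the slot count Yang mentions is a TaskTracker daemon-side setting, so it lives in mapred-site.xml on the TT node (and needs a TT restart to take effect); when the cluster config marks it final, per-job overrides are silently ignored. A minimal sketch using the Hadoop 1.x property name:]

```xml
<!-- mapred-site.xml on the TaskTracker node; restart the TT after editing -->
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>1</value>
  <!-- if the cluster config declares <final>true</final> here,
       job submissions cannot override this value -->
</property>
```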
On Wed, Jul 11, 2012 at 10:05 PM, Harsh J <[EMAIL PROTECTED]> wrote:
> Your problem is more from the fact that you are running > 1 map slot
> per TT, and multiple mappers are getting run at the same time, all
> trying to bind to the same port. Limit your TT's max map tasks to 1
> when you're relying on such techniques to debug, or use the
> LocalJobRunner/Apache MRUnit instead.
> On Thu, Jul 12, 2012 at 9:16 AM, Yang <[EMAIL PROTECTED]> wrote:
> > I set the following params to be false in my pig script (0.10.0)
> > SET mapred.map.tasks.speculative.execution false;
> > SET mapred.reduce.tasks.speculative.execution false;
> > I also verified in the jobtracker UI in the job.xml that they are indeed
> > set correctly.
> > when the job finished, jobtracker UI shows that there is only one attempt
> > for each task (in fact I have only 1 task too).
> > but when I went to the tasktracker node and looked under the
> > /var/log/hadoop/userlogs/job_id_here/
> > dir, there are 3 attempt dirs:
> > job_201207111710_0024 # ls
> > attempt_201207111710_0024_m_000000_0
> > attempt_201207111710_0024_m_000002_0 job-acls.xml
> > so 3 attempts were indeed fired??
> > I have to get this controlled correctly because I'm trying to debug the
> > mappers through eclipse,
> > but if more than 1 mapper process is fired, they all try to connect to the
> > same debugger port, and the end result is that nobody is able to
> > hook to the debugger.
> > Thanks
> > Yang
> Harsh J
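[Editor's note: the debugger collision Yang describes comes from every child JVM on the TT receiving the same JVM options. A common sketch, not from this thread, is to pass JDWP options via mapred.child.java.opts in the Pig script; this only works cleanly once the TT is limited to one map slot, as Harsh suggests, since otherwise each concurrent child JVM tries to bind the same listen address. Port 8000 is an arbitrary choice:]

```pig
-- in the Pig script; 8000 is an assumed free port on the tasktracker host
SET mapred.child.java.opts '-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=8000';
```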