Re: Killing hadoop jobs automatically
Hi,

Every Map/Reduce task has a Reporter. You can set the configuration
parameter mapred.task.timeout to your desired value; a task that neither
reads input, writes output, nor reports progress through its Reporter for
that many milliseconds is killed by the framework.
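
For example, a minimal sketch using the old JobConf API (the class name
and job name here are made up for illustration):

import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class TimeoutExample {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(TimeoutExample.class);
        conf.setJobName("timeout-example");
        // Kill a task if it neither reads input, writes output, nor
        // reports progress for 10 minutes (value is in milliseconds).
        conf.setLong("mapred.task.timeout", 10 * 60 * 1000L);
        // ... set mapper/reducer and input/output paths here ...
        JobClient.runJob(conf);
    }
}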

Good Luck.

On 01/30/2012 04:14 PM, praveenesh kumar wrote:
> Yeah, I am aware of that, but it requires you to explicitly monitor the
> job, look up its job ID, and then run the hadoop job -kill command.
> What I want to know is: is there any way to do all this automatically,
> by providing a timer or something, so that if my job takes more than
> some predefined time it gets killed automatically?
>
> Thanks,
> Praveenesh
>
> On Mon, Jan 30, 2012 at 12:38 PM, Prashant Kommireddi
> <[EMAIL PROTECTED]> wrote:
>
>> You might want to take a look at the kill command: "hadoop job -kill
>> <jobid>".
>>
>> Prashant
>>
>> On Sun, Jan 29, 2012 at 11:06 PM, praveenesh kumar
>> <[EMAIL PROTECTED]> wrote:
>>> Is there any way we can kill hadoop jobs that are taking too long
>>> to execute?
>>>
>>> What I want to achieve is: if some job has been running for more than
>>> "_some_predefined_timeout_limit", it should be killed automatically.
>>>
>>> Is it possible to achieve this through shell scripts or any other way?
>>>
>>> Thanks,
>>> Praveenesh
>>>
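
P.S. For the fully automatic kill asked about below, one option is a small
watchdog run periodically (e.g. from cron) that lists running jobs and
kills any that have exceeded a deadline. A rough, untested sketch against
the old JobClient API (the class name and timeout value are made up):

import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobStatus;
import org.apache.hadoop.mapred.RunningJob;

public class JobWatchdog {
    private static final long TIMEOUT_MS = 60 * 60 * 1000L; // 1 hour

    public static void main(String[] args) throws Exception {
        JobClient client = new JobClient(new JobConf());
        long now = System.currentTimeMillis();
        for (JobStatus status : client.getAllJobs()) {
            if (status.getRunState() != JobStatus.RUNNING) {
                continue; // only consider jobs still running
            }
            if (now - status.getStartTime() > TIMEOUT_MS) {
                RunningJob job = client.getJob(status.getJobID());
                if (job != null) {
                    System.out.println("Killing " + status.getJobID());
                    job.killJob(); // same effect as "hadoop job -kill <jobid>"
                }
            }
        }
    }
}

This does from Java what a shell loop over "hadoop job -list" and
"hadoop job -kill <jobid>" would do.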