Re: Hadoop efficient resource isolation
The CapacityScheduler has features that allow a user to specify the amount of virtual memory per map/reduce task, and the TaskTracker monitors all tasks and their process trees so that fork-bombs don't take down the node.
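
For example, the per-job memory limits look something like this in mapred-site.xml (a sketch; the property names are from the Hadoop 1.x capacity-scheduler docs and the values are illustrative, so verify against the docs for your version):

  <configuration>
    <!-- Size of a map slot on the cluster, in MB of virtual memory. -->
    <property>
      <name>mapred.cluster.map.memory.mb</name>
      <value>2048</value>
    </property>
    <!-- Hard ceiling on what any single map task may request. -->
    <property>
      <name>mapred.cluster.max.map.memory.mb</name>
      <value>4096</value>
    </property>
    <!-- What this job's map tasks request; the TaskTracker kills a task
         whose process tree grows beyond its limit instead of letting it
         take the node down. -->
    <property>
      <name>mapred.job.map.memory.mb</name>
      <value>2048</value>
    </property>
  </configuration>

The reduce-side equivalents (mapred.cluster.reduce.memory.mb and friends) work the same way.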

On Feb 25, 2013, at 8:27 PM, Marcin Mejran wrote:

> That won't stop a bad job (say, a fork bomb or a massive memory leak in a streaming script) from taking out a node, which is what I believe Dhanasekaran was asking about. He wants to physically isolate certain jobs to certain "non-critical" nodes. I don't believe this is possible, and data would still be spread to those nodes, assuming they're data nodes, which could still cause cluster-wide issues (and if the data is isolated too, why not run two separate clusters?).
>
> I've read references in the docs to some kind of memory-based constraints in Hadoop, but I don't know the details. Does anyone know how they work?
>
> Also, I believe there are tools in Linux that can kill processes when memory runs low and otherwise restrict what a given user can do. These seem like a more flexible solution, although they won't cover every potential issue.
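>
> For example, per-user limits in /etc/security/limits.conf can blunt both failure modes (a sketch; the user name and the values are illustrative, and the right numbers depend on your machines):
>
>   # Cap the number of processes the 'tech' user may spawn (blunts fork bombs).
>   tech  hard  nproc  512
>   # Cap per-process address space, in KB (here ~8 GB), against runaway leaks.
>   tech  hard  as     8388608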
>
> -Marcin
>
> On Feb 25, 2013, at 7:20 PM, "Arun C Murthy" <[EMAIL PROTECTED]> wrote:
>
>> CapacityScheduler is what you want...
>>
>> On Feb 21, 2013, at 5:16 AM, Dhanasekaran Anbalagan wrote:
>>
>>> Hi Guys,
>>>
>>> Is it possible to isolate job submission in a Hadoop cluster? We currently run a 48-machine cluster, and we have observed that Hadoop does not provide efficient resource isolation. In our case we run tech and research pools; when a tech job had a memory leak, it occupied the whole cluster. We eventually traced the problem to the tech job: it screwed up the whole Hadoop cluster, and in the end 10 DataNodes were dead.
>>>
>>> Is there a way to prevent this at job submission, with efficient resource allocation, so that when something goes wrong in a particular job it affects only that pool and not other jobs? Is there any way to achieve this?
>>>
>>> Please guide me guys.
>>>
>>> My idea is that when a tech user submits a job, the job runs only on, in our case, 24 machines; the other machines are reserved for research users.
>>>
>>> This would prevent the memory-leak problem.
>>>
>>> -Dhanasekaran.
>>> Did I learn something today? If not, I wasted it.
>>
>> --
>> Arun C. Murthy
>> Hortonworks Inc.
>> http://hortonworks.com/
>>
>>

--
Arun C. Murthy
Hortonworks Inc.
http://hortonworks.com/