Re: nodes with different memory sizes
Hi,

You mentioned you'd like to configure different memory settings for
the processes depending on which nodes the tasks run on. Which
processes are you referring to here: the Hadoop daemons, or your
map/reduce program?

An alternative approach could be to see if you can get only those
nodes in Torque that satisfy a specific memory criterion. I remember
Torque has a way of filtering the allocated nodes by the memory
capacity requested, and you can pass this request to Torque using the
resource_manager.options variable in HOD. Refer to the documentation
at http://hadoop.apache.org/common/docs/r0.20.2/hod_config_guide.html#3.2+hod+options.
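
For example, something along these lines in the hodrc file might do it.
This is only a sketch: the option:sub-option=value syntax and the Torque
resource name (pmem) are from memory, so verify them against the config
guide linked above:

    [resource_manager]
    # Ask Torque for nodes that can grant 16 GB per process; the "pmem"
    # resource name and the options syntax here are assumptions to check.
    options = l:pmem=16gb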

Thanks
Hemanth

On Sat, Oct 9, 2010 at 12:11 AM, Boyu Zhang <[EMAIL PROTECTED]> wrote:
> Hi Pablo,
>
> Thank you for the reply. Actually, I forgot to mention that I am using HOD
> to provision Hadoop and HDFS on the cluster. There is only one
> configuration file when I allocate the cluster, and each time the Hadoop
> cluster comes up, the nodes it uses are different and chosen by Torque.
> Any idea how HOD can be configured like that? Thank you very much!
>
> Boyu
>
> On Fri, Oct 8, 2010 at 12:27 PM, Pablo Cingolani <[EMAIL PROTECTED]> wrote:
>
>> I think you can change that in your "conf/mapred-site.xml", since it's
>> a site-specific config file (see:
>> http://hadoop.apache.org/common/docs/current/cluster_setup.html)
>>
>> e.g.:
>>    <property>
>>      <name>mapred.child.java.opts</name>
>>      <value>-Xmx8G</value>
>>    </property>
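>>
>> For reference, a minimal mapred-site.xml carrying just that property
>> could look like the sketch below (the 8 GB heap value is only an
>> example; pick whatever your nodes can actually spare):
>>
>>    <?xml version="1.0"?>
>>    <configuration>
>>      <property>
>>        <name>mapred.child.java.opts</name>
>>        <value>-Xmx8g</value>
>>      </property>
>>    </configuration>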
>>
>> I hope this helps
>> Yours
>>     Pablo Cingolani
>>
>>
>>
>> On Fri, Oct 8, 2010 at 12:17 PM, Boyu Zhang <[EMAIL PROTECTED]> wrote:
>> > Dear All,
>> >
>> > I am trying to run a memory-hungry program on a cluster with 6 nodes;
>> > 2 of them have 32 GB of memory and the rest have 16 GB. I am wondering
>> > whether there is a way of configuring the cluster so that the
>> > processes running on the big nodes get more memory while the processes
>> > running on the smaller nodes use less.
>> >
>> > I have been trying to find parameters I can use in the Hadoop
>> > configuration, but it seems that the configuration has to be the same
>> > on all the nodes. If that is the case, the best I can do is configure
>> > the Java processes for the smaller memory. Any help is appreciated,
>> > thanks!
>> >
>> > Boyu
>> >
>>
>