Re: Distributing the code to multiple nodes
I just tried it again, and I see the following messages popping up in the
log file:

2014-01-15 19:37:38,221 WARN
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
Node : l1dev-211:1004 does not have sufficient resource for request :
{Priority: 0, Capability: <memory:2048, vCores:1>, # Containers: 1,
Location: *, Relax Locality: true} node total capability : <memory:1024,
vCores:8>
2014-01-15 19:37:38,621 WARN
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
Node : l1-dev06:1004 does not have sufficient resource for request :
{Priority: 0, Capability: <memory:2048, vCores:1>, # Containers: 1,
Location: *, Relax Locality: true} node total capability : <memory:1024,
vCores:8>

Do I need to increase the RAM allocated to the slave nodes?
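(Editor's note: the warnings above show each NodeManager advertising a total capability of 1024 MB while every container request asks for 2048 MB, so no node can ever satisfy the request. Either raising what the NodeManagers advertise or shrinking the per-container request should clear the warning. A minimal sketch of the relevant properties follows; the 4096 and 1024 values are illustrative assumptions, not recommendations, and must fit the nodes' physical RAM.)

```xml
<!-- yarn-site.xml: raise the memory each NodeManager offers to the
     scheduler (illustrative value). -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>4096</value>
</property>

<!-- mapred-site.xml: alternatively, shrink the per-container request so
     it fits within the node's advertised 1024 MB capability. -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>1024</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>1024</value>
</property>
```

Restart the NodeManagers after changing yarn-site.xml so the new capability is re-registered with the ResourceManager.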

On Wed, Jan 15, 2014 at 7:07 PM, Ashish Jain <[EMAIL PROTECTED]> wrote:

> I tried that, but somehow my MapReduce jobs do not execute at all once I
> set it to yarn.
>
>
> On Wed, Jan 15, 2014 at 7:00 PM, Nirmal Kumar <[EMAIL PROTECTED]> wrote:
>
>> You surely don't have to set mapreduce.jobtracker.address in
>> mapred-site.xml.
>>
>>
>>
>> In mapred-site.xml you just have to mention:
>>
>> <property>
>>   <name>mapreduce.framework.name</name>
>>   <value>yarn</value>
>> </property>
>>
>>
>>
>> -Nirmal
>>
>> From: Ashish Jain [mailto:[EMAIL PROTECTED]]
>> Sent: Wednesday, January 15, 2014 6:44 PM
>> To: [EMAIL PROTECTED]
>> Subject: Re: Distributing the code to multiple nodes
>>
>>
>>
>> I think this is the problem. I have not set
>> "mapreduce.jobtracker.address" in my mapred-site.xml, and by default it is
>> set to local. Now the question is how to point it at a remote host. The
>> documentation says I need to specify the host:port of the job tracker. As
>> we know, Hadoop 2.2.0 is completely overhauled and there is no longer a
>> task tracker or job tracker; instead there is now a resource manager and a
>> node manager. So in this case, what do I set "mapreduce.jobtracker.address"
>> to? Do I set it to resourceManagerHost:resourceManagerPort?
>>
>> --Ashish
>>
>>
>>
>> On Wed, Jan 15, 2014 at 4:20 PM, Ashish Jain <[EMAIL PROTECTED]> wrote:
>>
>>  Hi Sudhakar,
>>
>> Indeed there was a typo; the complete command is as follows (omitting the
>> main class, since my manifest has an entry for it):
>> /hadoop jar wordCount.jar /opt/ApacheHadoop/temp/worker.log
>> /opt/ApacheHadoop/out/
>>
>> Next I killed the datanode on 10.12.11.210, and I see the following
>> messages in the log files. It looks like the namenode is still trying to
>> assign the complete task to one single node, and since it does not find
>> the complete data set on one node, it complains.
>>
>>
>> 2014-01-15 16:38:26,894 WARN
>> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
>> Node : l1-DEV05:1004 does not have sufficient resource for request :
>> {Priority: 0, Capability: <memory:2048, vCores:1>, # Containers: 1,
>> Location: *, Relax Locality: true} node total capability : <memory:1024,
>> vCores:8>
>> 2014-01-15 16:38:27,348 WARN
>> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
>> Node : l1dev-211:1004 does not have sufficient resource for request :
>> {Priority: 0, Capability: <memory:2048, vCores:1>, # Containers: 1,
>> Location: *, Relax Locality: true} node total capability : <memory:1024,
>> vCores:8>
>> 2014-01-15 16:38:27,871 WARN
>> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
>> Node : l1-dev06:1004 does not have sufficient resource for request :
>> {Priority: 0, Capability: <memory:2048, vCores:1>, # Containers: 1,
>> Location: *, Relax Locality: true} node total capability : <memory:1024,
>> vCores:8>
>> 2014-01-15 16:38:27,897 WARN
>> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
>> Node : l1-DEV05:1004 does not have sufficient resource for request :
>> {Priority: 0, Capability: <memory:2048, vCores:1>, # Containers: 1,