Hadoop, mail # user - Re: Distributing the code to multiple nodes


Ashish Jain 2014-01-15, 13:13
I think this is the problem. I have not set "mapreduce.jobtracker.address"
in my mapred-site.xml, and by default it is set to local. Now the question
is how to point it at a remote cluster. The documentation says I need to
specify the host:port of the job tracker for this. As we know, Hadoop 2.2.0
is completely overhauled and there is no concept of a task tracker or job
tracker; instead there is now a resource manager and node manager. So in
this case, what do I set as "mapreduce.jobtracker.address"? Do I set it to
resourceManagerHost:resourceManagerPort?
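[Editor's note: in Hadoop 2.x (YARN), the usual approach is not to set
"mapreduce.jobtracker.address" at all; jobs are routed to YARN by setting
"mapreduce.framework.name", and the resource manager's location is given in
yarn-site.xml. A minimal sketch, with a placeholder hostname:]

```xml
<!-- mapred-site.xml: submit MapReduce jobs to YARN instead of the
     default "local" runner -->
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

<!-- yarn-site.xml: where node managers and clients find the resource
     manager ("rm-host" is a placeholder for your RM's hostname) -->
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>rm-host</value>
</property>
```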

--Ashish
On Wed, Jan 15, 2014 at 4:20 PM, Ashish Jain <[EMAIL PROTECTED]> wrote:

> Hi Sudhakar,
>
> Indeed there was a typo. The complete command is as follows, minus the
> main class, since my manifest has an entry for the main class:
> ./hadoop jar wordCount.jar /opt/ApacheHadoop/temp/worker.log
> /opt/ApacheHadoop/out/
>
> Next I killed the datanode on 10.12.11.210 and I see the following
> messages in the log files. It looks like the namenode is still trying to
> assign the complete task to one single node, and since it does not find
> the complete data set on one node it is complaining.
>
> 2014-01-15 16:38:26,894 WARN
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
> Node : l1-DEV05:1004 does not have sufficient resource for request :
> {Priority: 0, Capability: <memory:2048, vCores:1>, # Containers: 1,
> Location: *, Relax Locality: true} node total capability : <memory:1024,
> vCores:8>
> 2014-01-15 16:38:27,348 WARN
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
> Node : l1dev-211:1004 does not have sufficient resource for request :
> {Priority: 0, Capability: <memory:2048, vCores:1>, # Containers: 1,
> Location: *, Relax Locality: true} node total capability : <memory:1024,
> vCores:8>
> 2014-01-15 16:38:27,871 WARN
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
> Node : l1-dev06:1004 does not have sufficient resource for request :
> {Priority: 0, Capability: <memory:2048, vCores:1>, # Containers: 1,
> Location: *, Relax Locality: true} node total capability : <memory:1024,
> vCores:8>
> 2014-01-15 16:38:27,897 WARN
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
> Node : l1-DEV05:1004 does not have sufficient resource for request :
> {Priority: 0, Capability: <memory:2048, vCores:1>, # Containers: 1,
> Location: *, Relax Locality: true} node total capability : <memory:1024,
> vCores:8>
> 2014-01-15 16:38:28,349 WARN
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
> Node : l1dev-211:1004 does not have sufficient resource for request :
> {Priority: 0, Capability: <memory:2048, vCores:1>, # Containers: 1,
> Location: *, Relax Locality: true} node total capability : <memory:1024,
> vCores:8>
> 2014-01-15 16:38:28,874 WARN
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
> Node : l1-dev06:1004 does not have sufficient resource for request :
> {Priority: 0, Capability: <memory:2048, vCores:1>, # Containers: 1,
> Location: *, Relax Locality: true} node total capability : <memory:1024,
> vCores:8>
> 2014-01-15 16:38:28,900 WARN
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
> Node : l1-DEV05:1004 does not have sufficient resource for request :
> {Priority: 0, Capability: <memory:2048, vCores:1>, # Containers: 1,
> Location: *, Relax Locality: true} node total capability : <memory:1024,
> vCores:8>
>
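[Editor's note: the warnings above are actually a memory mismatch, not a
data-locality problem. Each container request asks for <memory:2048> while
every node advertises a total capability of only <memory:1024>, so no node
can ever satisfy the request. A hedged sketch of the two knobs involved;
the values below are illustrative, not recommendations:]

```xml
<!-- yarn-site.xml: total memory a node manager offers to YARN.
     Raising this above the container request size is one fix. -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>2048</value>
</property>

<!-- mapred-site.xml: memory each map/reduce container requests.
     Lowering these below the node capability is the other fix. -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>1024</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>1024</value>
</property>
```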
>
> --Ashish
>
>
> On Wed, Jan 15, 2014 at 3:59 PM, sudhakara st <[EMAIL PROTECTED]> wrote:
>
>> Hello Ashish
>>
>>
>> 2) Run the example again using the command
>> ./hadoop dfs wordCount.jar /opt/ApacheHadoop/temp/worker.log
>> /opt/ApacheHadoop/out/
>>
>>
>> Unless it was a typo, the command should be:
>> ./hadoop jar wordCount.jar WordCount /opt/ApacheHadoop/temp/worker.log
>> /opt/ApacheHadoop/out/
>>
>> One more thing to try: just stop the datanode process on 10.12.11.210
>> and run the job.