Hadoop >> mail # user >> Re: Distributing the code to multiple nodes


Re: Distributing the code to multiple nodes
Hi Sudhakar,

Indeed there was a typo; the complete command is as follows. I omit the main
class name since my manifest has a Main-Class entry:
./hadoop jar wordCount.jar /opt/ApacheHadoop/temp/worker.log
/opt/ApacheHadoop/out/
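
For reference, the manifest entry that lets you omit the class name on the
command line looks like the sketch below (WordCount here is a placeholder;
substitute whatever your actual main class is called):

```
Main-Class: WordCount
```

With that line in META-INF/MANIFEST.MF inside the jar, `hadoop jar
wordCount.jar <input> <output>` resolves the entry point on its own.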

Next I killed the datanode on 10.12.11.210 and I see the following messages
in the log files. It looks like the namenode is still trying to assign the
complete task to one single node, and since it does not find the complete
data set on one node it is complaining.

2014-01-15 16:38:26,894 WARN
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
Node : l1-DEV05:1004 does not have sufficient resource for request :
{Priority: 0, Capability: <memory:2048, vCores:1>, # Containers: 1,
Location: *, Relax Locality: true} node total capability : <memory:1024,
vCores:8>
2014-01-15 16:38:27,348 WARN
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
Node : l1dev-211:1004 does not have sufficient resource for request :
{Priority: 0, Capability: <memory:2048, vCores:1>, # Containers: 1,
Location: *, Relax Locality: true} node total capability : <memory:1024,
vCores:8>
2014-01-15 16:38:27,871 WARN
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
Node : l1-dev06:1004 does not have sufficient resource for request :
{Priority: 0, Capability: <memory:2048, vCores:1>, # Containers: 1,
Location: *, Relax Locality: true} node total capability : <memory:1024,
vCores:8>
2014-01-15 16:38:27,897 WARN
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
Node : l1-DEV05:1004 does not have sufficient resource for request :
{Priority: 0, Capability: <memory:2048, vCores:1>, # Containers: 1,
Location: *, Relax Locality: true} node total capability : <memory:1024,
vCores:8>
2014-01-15 16:38:28,349 WARN
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
Node : l1dev-211:1004 does not have sufficient resource for request :
{Priority: 0, Capability: <memory:2048, vCores:1>, # Containers: 1,
Location: *, Relax Locality: true} node total capability : <memory:1024,
vCores:8>
2014-01-15 16:38:28,874 WARN
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
Node : l1-dev06:1004 does not have sufficient resource for request :
{Priority: 0, Capability: <memory:2048, vCores:1>, # Containers: 1,
Location: *, Relax Locality: true} node total capability : <memory:1024,
vCores:8>
2014-01-15 16:38:28,900 WARN
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
Node : l1-DEV05:1004 does not have sufficient resource for request :
{Priority: 0, Capability: <memory:2048, vCores:1>, # Containers: 1,
Location: *, Relax Locality: true} node total capability : <memory:1024,
vCores:8>
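
A note on these warnings (my reading, based only on the numbers in the log):
each request asks for a container of <memory:2048, vCores:1>, but every node
reports a total capability of only <memory:1024, vCores:8>, so no node can
ever satisfy the request and the scheduler retries indefinitely. A sketch of
the two usual fixes, with placeholder values to adjust for your hosts:

```
<!-- yarn-site.xml: raise what each NodeManager offers (if the hosts have the RAM) -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>4096</value>
</property>

<!-- mapred-site.xml: or shrink what the job requests, below node capacity -->
<property>
  <name>yarn.app.mapreduce.am.resource.mb</name>
  <value>1024</value>
</property>
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>512</value>
</property>
```

Note that requests are rounded up to yarn.scheduler.minimum-allocation-mb
(default 1024 MB), which is likely why a smaller default request appears in
the log as 2048.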
--Ashish
On Wed, Jan 15, 2014 at 3:59 PM, sudhakara st <[EMAIL PROTECTED]> wrote:

> Hello Ashish
>
>
> 2) Run the example again using the command
> ./hadoop dfs wordCount.jar /opt/ApacheHadoop/temp/worker.log
> /opt/ApacheHadoop/out/
>
>
> Unless it was a typo, the command should be
> ./hadoop jar wordCount.jar WordCount /opt/ApacheHadoop/temp/worker.log
> /opt/ApacheHadoop/out/
>
> One more thing to try: just stop the datanode process on 10.12.11.210 and run
> the job
>
>
>
>
> On Wed, Jan 15, 2014 at 2:07 PM, Ashish Jain <[EMAIL PROTECTED]> wrote:
>
>> Hello Sudhakara,
>>
>> Thanks for your suggestion. However, once I change the mapreduce framework
>> to yarn my map reduce jobs do not get executed at all. It seems the job is
>> waiting on some thread indefinitely. Here is what I have done:
>>
>> 1) Set the mapreduce framework to yarn in mapred-site.xml
>> <property>
>>  <name>mapreduce.framework.name</name>
>>  <value>yarn</value>
>> </property>
>> 2) Run the example again using the command
>> ./hadoop dfs wordCount.jar /opt/ApacheHadoop/temp/worker.log
>> /opt/ApacheHadoop/out/
>>
>> The jobs are just stuck and do not move further.
>>
>>
>> I also tried the following and it complains of filenotfound exception and