Re: Distributing the code to multiple nodes
Thanks for all these suggestions. Somehow I do not have access to the
servers today; I will try the suggestions on Monday and let you
know how it goes.

--Ashish
On Thu, Jan 9, 2014 at 7:53 PM, German Florez-Larrahondo <
[EMAIL PROTECTED]> wrote:

> Ashish
>
> Could this be related to the scheduler you are using and its settings?
>
>
>
> In lab environments, when running a single type of job, I often use
> FairScheduler (the YARN default in 2.2.0 is CapacityScheduler) and it does
> a good job of distributing the load.
>
>
>
> You could give that a try (
> https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/FairScheduler.html
> )
>
>
>
> I think just changing yarn-site.xml as follows could demonstrate this
> theory (note that how the jobs are scheduled depends on resources such as
> memory on the nodes, and you would need to set up yarn-site.xml accordingly).
>
>
>
> <property>
>   <name>yarn.resourcemanager.scheduler.class</name>
>   <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
> </property>
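>
> After changing yarn-site.xml you would need to restart the ResourceManager
> for the new scheduler to take effect. A minimal sketch, assuming the stock
> sbin scripts that ship with the Hadoop distribution:
>
>   sbin/yarn-daemon.sh stop resourcemanager
>   sbin/yarn-daemon.sh start resourcemanager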
>
>
>
> Regards
>
> ./g
>
>
>
>
>
> *From:* Ashish Jain [mailto:[EMAIL PROTECTED]]
> *Sent:* Thursday, January 09, 2014 6:46 AM
> *To:* [EMAIL PROTECTED]
> *Subject:* Re: Distributing the code to multiple nodes
>
>
>
> Another point to add here: 10.12.11.210 is the host which has everything
> running, including a slave datanode. The data was also distributed to this
> host, as was the jar file. The following processes are running on 10.12.11.210:
>
> 7966 DataNode
> 8480 NodeManager
> 8353 ResourceManager
> 8141 SecondaryNameNode
> 7834 NameNode
>
>
>
> On Thu, Jan 9, 2014 at 6:12 PM, Ashish Jain <[EMAIL PROTECTED]> wrote:
>
> The logs were updated only when I copied the data. After copying the data
> there have been no updates to the log files.
>
>
>
> On Thu, Jan 9, 2014 at 5:08 PM, Chris Mawata <[EMAIL PROTECTED]>
> wrote:
>
> Do the logs on the three nodes contain anything interesting?
> Chris
>
> On Jan 9, 2014 3:47 AM, "Ashish Jain" <[EMAIL PROTECTED]> wrote:
>
> Here is the block info for the file I distributed. As can be seen, only
> 10.12.11.210 has all the data, and this is the node which is serving all the
> requests. Replicas are available on 209 as well as 211.
>
> 1073741857:  10.12.11.210:50010  10.12.11.209:50010
> 1073741858:  10.12.11.210:50010  10.12.11.211:50010
> 1073741859:  10.12.11.210:50010  10.12.11.209:50010
> 1073741860:  10.12.11.210:50010  10.12.11.211:50010
> 1073741861:  10.12.11.210:50010  10.12.11.209:50010
> 1073741862:  10.12.11.210:50010  10.12.11.209:50010
> 1073741863:  10.12.11.210:50010  10.12.11.209:50010
> 1073741864:  10.12.11.210:50010  10.12.11.209:50010
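>
> A similar block-to-location report can also be printed from the command
> line with fsck, in case that is easier to read (the path below is a
> placeholder for the file in question):
>
>   hdfs fsck /path/to/file -files -blocks -locations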
>
> --Ashish
>
>
>
> On Thu, Jan 9, 2014 at 2:11 PM, Ashish Jain <[EMAIL PROTECTED]> wrote:
>
> Hello Chris,
>
> I now have a cluster with 3 nodes and a replication factor of 2. When I
> distribute a file I can see that replicas of the data are available on the
> other nodes. However, when I run a map reduce job, again only one node is
> serving all the requests :(. Can you or anyone please provide some more
> input?
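>
> While the job is running, one quick way to check whether containers are
> actually spread across the nodes is the YARN node report, which lists the
> number of running containers per NodeManager:
>
>   yarn node -list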
>
> Thanks
> Ashish
>
>
>
> On Wed, Jan 8, 2014 at 7:16 PM, Chris Mawata <[EMAIL PROTECTED]>
> wrote:
>
> 2 nodes and a replication factor of 2 result in a replica of each block
> being present on each node. This allows the possibility that a single node
> does all the work and yet stays data-local. It will probably happen if that
> single node has the needed capacity. More nodes than the replication
> factor are needed to force distribution of the processing.
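>
> As a sanity check, you can confirm the actual replication factor of the
> file from the shell (the path is a placeholder):
>
>   hdfs dfs -stat %r /path/to/file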
> Chris
>
> On Jan 8, 2014 7:35 AM, "Ashish Jain" <[EMAIL PROTECTED]> wrote: