

Re: Submitting a job to a remote cluster
Hi,

Could you please share your setup details, i.e. how many slaves, and how many
datanodes and tasktrackers? Also, could you share the configuration, in
particular hdfs-site.xml?

To answer your question: the datanode address is picked up from the
dfs.datanode.address property in hdfs-site.xml (or from the default in
hdfs-default.xml). This is generally left at its default value; unless you
want to change the port number, things will work fine with the default.
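
For reference, an override in hdfs-site.xml would look something like the
snippet below. The value shown is the usual default bind address and port in
this Hadoop generation; it is illustrative, not taken from your setup:

```xml
<!-- hdfs-site.xml: address and port the datanode listens on for data transfer -->
<property>
  <name>dfs.datanode.address</name>
  <value>0.0.0.0:50010</value>
</property>
```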

Thanks
Hemanth

On Fri, Oct 5, 2012 at 1:28 AM, Oleg Zhurakousky <[EMAIL PROTECTED]
> wrote:

> Trying to submit a Job to a remote Hadoop instance. Everything seems to
> start fine, but then I am seeing this:
>
> 2012-10-04 15:56:32,617 INFO [org.apache.hadoop.mapred.JobClient] - < map
> 0% reduce 0%>
>
> 2012-10-04 15:56:32,621 INFO [org.apache.hadoop.mapred.MapTask] - <data
> buffer = 79691776/99614720>
>
> 2012-10-04 15:56:32,621 INFO [org.apache.hadoop.mapred.MapTask] - <record
> buffer = 262144/327680>
>
> 2012-10-04 15:56:32,641 WARN [org.apache.hadoop.hdfs.DFSClient] - <Failed
> to connect to /127.0.0.1:50010, add to deadNodes and continue
> java.net.ConnectException: Connection refused>
>
>
> How can I specify the datanode address to use?
>
> Thanks
>
> Oleg
>
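For context, pointing the job client at a remote cluster in this pre-YARN
generation of Hadoop is typically done through client-side configuration, as
in the sketch below. The hostnames and ports are placeholders, not values
from this thread; substitute your remote master's actual address:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class RemoteSubmitSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder addresses: point the client at the remote cluster's
        // NameNode and JobTracker instead of the local defaults.
        conf.set("fs.default.name", "hdfs://remote-master:9000");
        conf.set("mapred.job.tracker", "remote-master:9001");

        Job job = new Job(conf, "remote-example");
        job.setJarByClass(RemoteSubmitSketch.class);
        // ... set mapper/reducer classes and input/output paths here ...
        // job.waitForCompletion(true);
    }
}
```

Note that the client only names the NameNode and JobTracker; the datanode
addresses are then reported back by the NameNode, which is why a datanode
registered as 127.0.0.1 is unreachable from a remote client.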