Re: Submitting a job to a remote cluster
Hemanth Yamijala 2012-10-05, 04:08
Could you please share your setup details - i.e. how many slaves, and how
many datanodes and tasktrackers are running. Also, the configuration - in particular
To answer your question: the datanode address is picked up from the
property dfs.datanode.address, read from hdfs-site.xml (falling back to the
default in hdfs-default.xml). This is generally left at its default value,
and things will work fine unless you want to change the port number.
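For reference, overriding it would be a sketch along these lines in hdfs-site.xml (the value shown is the Hadoop 1.x default; any other host/port you put there is site-specific):

```xml
<!-- hdfs-site.xml: only needed if you want a non-default bind address/port -->
<property>
  <name>dfs.datanode.address</name>
  <!-- 0.0.0.0:50010 is the default; replace the port if 50010 clashes -->
  <value>0.0.0.0:50010</value>
</property>
```

Note this controls the address the datanode binds to, so it must be changed on the datanode side and the datanode restarted; the client only learns the address the datanode registered with the namenode.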
On Fri, Oct 5, 2012 at 1:28 AM, Oleg Zhurakousky <[EMAIL PROTECTED]> wrote:
> Trying to submit a Job to a remote Hadoop instance. Everything seems to
> start fine, but then I am seeing this:
> 2012-10-04 15:56:32,617 INFO [org.apache.hadoop.mapred.JobClient] - < map
> 0% reduce 0%>
> 2012-10-04 15:56:32,621 INFO [org.apache.hadoop.mapred.MapTask] - <data
> buffer = 79691776/99614720>
> 2012-10-04 15:56:32,621 INFO [org.apache.hadoop.mapred.MapTask] - <record
> buffer = 262144/327680>
> 2012-10-04 15:56:32,641 WARN [org.apache.hadoop.hdfs.DFSClient] - <Failed
> to connect to /127.0.0.1:50010, add to deadNodes and continue
> java.net.ConnectException: Connection refused>
> How can I specify the datanode address to use?