MapReduce >> mail # user >> FSDataOutputStream hangs in out.close()


Hi,

I'm using the Hadoop 1.0.4 API to try to submit a job to a remote
JobTracker. I modified the JobClient to submit the same job to
different JTs. E.g., the JobClient runs on my PC and it tries to submit the
same job to 2 JTs at different sites in Amazon EC2. When I launch the job,
in the setup phase, the JobClient tries to submit the split file info to
the remote JT. This is the JobClient method where I have the problem:
  public static void createSplitFiles(Path jobSubmitDir,
      Configuration conf, FileSystem fs,
      org.apache.hadoop.mapred.InputSplit[] splits)
      throws IOException {
    FSDataOutputStream out = createFile(fs,
        JobSubmissionFiles.getJobSplitFile(jobSubmitDir), conf);
    SplitMetaInfo[] info = writeOldSplits(splits, out, conf);
    out.close();
    writeJobSplitMetaInfo(fs,
        JobSubmissionFiles.getJobSplitMetaFile(jobSubmitDir),
        new FsPermission(JobSubmissionFiles.JOB_FILE_PERMISSION),
        splitVersion, info);
  }
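For context: FSDataOutputStream wraps a client-side buffer, and close() both flushes that buffer and blocks until the underlying filesystem (for HDFS, the datanode write pipeline) has accepted the data, so a hang inside close() usually points at the write path rather than at this method itself. The buffer-until-close behaviour can be sketched with a plain java.io analogy; CloseFlushDemo, the sink, and the byte counts below are illustrative stdlib code, not Hadoop's implementation:

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class CloseFlushDemo {
    public static void main(String[] args) throws IOException {
        // Stand-in for the remote filesystem: bytes only "arrive" here
        // once the client-side buffer is flushed.
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        BufferedOutputStream out = new BufferedOutputStream(sink, 8192);

        out.write("split metadata".getBytes(StandardCharsets.UTF_8));
        // The 14 bytes are still sitting in the client-side buffer.
        System.out.println("before close: " + sink.size());

        // close() flushes the buffer and releases the stream; with HDFS,
        // the analogous step also waits on the datanode pipeline, which
        // is where a close() can block indefinitely.
        out.close();
        System.out.println("after close: " + sink.size());
    }
}
```

If the real close() never returns, a thread dump (jstack) of the client shows where inside the write path it is blocked.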

1 - The FSDataOutputStream hangs on the out.close() call. Why does it
hang? What should I do to solve this?
--
Best regards,
Replies:
Harsh J 2013-03-27, 12:24
Pedro Sá da Costa 2013-03-27, 15:53
Pedro Sá da Costa 2013-03-27, 16:04
Harsh J 2013-03-27, 17:55
Pedro Sá da Costa 2013-03-27, 21:32