

FSDataOutputStream hangs in out.close()

I'm using the Hadoop 1.0.4 API to try to submit a job to a remote
JobTracker. I modified the JobClient so that it can submit the same job to
different JTs. E.g., the JobClient runs on my PC and it tries to submit the
same job to 2 JTs at different sites in Amazon EC2. When I launch the job,
in the setup phase, the JobClient tries to write the split file info for
the remote JT. This is the JobClient method where I have the problem:
  public static void createSplitFiles(Path jobSubmitDir,
      Configuration conf, FileSystem fs,
      org.apache.hadoop.mapred.InputSplit[] splits)
  throws IOException {
    FSDataOutputStream out = createFile(fs,
        JobSubmissionFiles.getJobSplitFile(jobSubmitDir), conf);
    SplitMetaInfo[] info = writeOldSplits(splits, out, conf);
    out.close();  // <-- hangs here
    writeJobSplitMetaInfo(fs,
        JobSubmissionFiles.getJobSplitMetaFile(jobSubmitDir),
        new FsPermission(JobSubmissionFiles.JOB_FILE_PERMISSION),
        splitVersion, info);
  }

1 - The FSDataOutputStream hangs at the out.close() instruction. Why does
it hang? What should I do to solve this?
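One thing I have been checking: close() blocks while the HDFS client flushes the
last block to the DataNodes, so if the DataNodes' advertised addresses are not
reachable from the submitting machine (a common situation with EC2 private IPs),
close() can hang. Below is a minimal standalone connectivity check, as a sketch
only; the class name ReachCheck, the host, and the timeout are my own
illustration, and 50010 is the default Hadoop 1.x DataNode data-transfer port.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class ReachCheck {
    // Attempt a TCP connect with a bounded timeout. If this fails for the
    // DataNode addresses the NameNode hands back, the HDFS write pipeline
    // cannot complete and close() will block.
    public static boolean canConnect(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Substitute the DataNode host(s) of the remote cluster here.
        System.out.println(canConnect("localhost", 50010, 2000));
    }
}
```

Running this from the same machine as the JobClient, against each DataNode
address the remote NameNode reports, would show whether the hang is a plain
network-reachability problem.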
Best regards,