Re: TaskStatus Exception using HFileOutputFormat
Thanks for this information. Here is the related code:

  public static void configureIncrementalLoad(Job job, HTable table)
      throws IOException {
    Configuration conf = job.getConfiguration();
    ...
    Path partitionsPath = new Path(job.getWorkingDirectory(),
                                   "partitions_" + UUID.randomUUID());
    LOG.info("Writing partition information to " + partitionsPath);
    FileSystem fs = partitionsPath.getFileSystem(conf);
    writePartitions(conf, partitionsPath, startKeys);
    partitionsPath.makeQualified(fs);
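
For context, just after the lines quoted above the method registers that
partitions file in the DistributedCache, which is the setupCache() call you
can see in the stack trace below. A rough sketch of that step (illustrative
only, not the exact 0.92 source):

    // Sketch only: publish the partitions file through the DistributedCache
    // so each task tracker localizes its own copy before the reducers start.
    // The "#_partition.lst" fragment is the symlink name the total order
    // partitioner reads on the task side.
    URI cacheUri;
    try {
      cacheUri = new URI(partitionsPath.toString() + "#_partition.lst");
    } catch (URISyntaxException e) {
      throw new IOException(e);
    }
    DistributedCache.addCacheFile(cacheUri, conf);
    DistributedCache.createSymlink(conf);

If partitionsPath resolves against the local filesystem of the submitting
node, that cache entry points at a file the other nodes do not have, which
matches the FileNotFoundException below.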

Can you check whether the HDFS-related config was passed to the Job correctly?
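
For example, something along these lines on the client, before submitting,
would show which filesystem that path resolves against (illustrative only):

    Configuration conf = job.getConfiguration();
    System.out.println("fs.default.name = " + conf.get("fs.default.name"));
    Path workingDir = job.getWorkingDirectory();
    // A file:/ URI here, rather than hdfs://..., means the partitions file
    // lands on the local disk of the submitting node, where the task
    // trackers on other nodes cannot see it.
    System.out.println("working dir = "
        + workingDir.makeQualified(workingDir.getFileSystem(conf)));

If it prints a local path, putting the cluster's core-site.xml and
hdfs-site.xml (or an explicit fs.default.name) on the classpath of the
Configuration used to build the Job should place the partitions file on
HDFS, where the task trackers can localize it.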

Thanks

On Wed, Feb 6, 2013 at 1:15 PM, Sean McNamara
<[EMAIL PROTECTED]> wrote:

>  Ok, a bit more info: from what I can tell, the partitions file is being
> placed into the working dir on the node I launch from, and the task
> trackers are trying to look for that file, which doesn't exist where they
> run (since they are on other nodes).
>
>
>  Here is the exception on the TT in case it is helpful:
>
>
>   2013-02-06 17:05:13,002 WARN org.apache.hadoop.mapred.TaskTracker:
> Exception while localization java.io.FileNotFoundException: File
> /opt/jobs/MyMapreduceJob/partitions_1360170306728 does not exist.
>          at
> org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:397)
>          at
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:251)
>          at
> org.apache.hadoop.filecache.TaskDistributedCacheManager.setupCache(TaskDistributedCacheManager.java:179)
>          at
> org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1212)
>          at java.security.AccessController.doPrivileged(Native Method)
>          at javax.security.auth.Subject.doAs(Subject.java:396)
>          at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>          at
> org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
>          at
> org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
>          at
> org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
>          at java.lang.Thread.run(Thread.java:662)
>
>   From: Sean McNamara <[EMAIL PROTECTED]>
> Reply-To: "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
> Date: Wednesday, February 6, 2013 9:35 AM
>
> To: "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
> Subject: Re: TaskStatus Exception using HFileOutputFormat
>
>   > Using the below construct, do you still get the exception?
>
>  Correct, I am still getting this exception.
>
>  Sean
>
>   From: Ted Yu <[EMAIL PROTECTED]>
> Reply-To: "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
> Date: Tuesday, February 5, 2013 7:50 PM
> To: "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
> Subject: Re: TaskStatus Exception using HFileOutputFormat
>
>  Using the below construct, do you still get the exception?
>
>  Please consider upgrading to Hadoop 1.0.4
>
>  Thanks
>
> On Tue, Feb 5, 2013 at 4:55 PM, Sean McNamara <[EMAIL PROTECTED]
> > wrote:
>
>>  > Can you tell us the HBase and Hadoop versions you were using?
>>
>>  Ahh yes, sorry I left that out:
>>
>>  Hadoop: 1.0.3
>> HBase: 0.92.0
>>
>>
>>  > I guess you have used the above construct
>>
>>
>>  Our code is as follows:
>>  HTable table = new HTable(conf, configHBaseTable);
>> FileOutputFormat.setOutputPath(job, outputDir);
>> HFileOutputFormat.configureIncrementalLoad(job, table);
>>
>>
>>  Thanks!
>>
>>   From: Ted Yu <[EMAIL PROTECTED]>
>> Reply-To: "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
>> Date: Tuesday, February 5, 2013 5:46 PM
>> To: "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
>> Subject: Re: TaskStatus Exception using HFileOutputFormat
>>
>>   Can you tell us the HBase and Hadoop versions you were using?
>> From TestHFileOutputFormat:
>>
>>     HFileOutputFormat.configureIncrementalLoad(job, table);
>>
>>     FileOutputFormat.setOutputPath(job, outDir);
>> I guess you have used the above construct?