

Re: TaskStatus Exception using HFileOutputFormat
Using the below construct, do you still get the exception?

Please consider upgrading to hadoop 1.0.4

Thanks
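
For reference, a minimal sketch of a driver wired the way TestHFileOutputFormat does it; the class name, table name, paths and the toy mapper below are placeholders invented for illustration, not code from this thread:

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class BulkLoadJobSketch {

      // Placeholder mapper: parses "rowkey,family,qualifier,value" lines into Puts.
      public static class LineToPutMapper
          extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {
        @Override
        protected void map(LongWritable key, Text line, Context ctx)
            throws IOException, InterruptedException {
          String[] f = line.toString().split(",", 4);
          Put put = new Put(Bytes.toBytes(f[0]));
          put.add(Bytes.toBytes(f[1]), Bytes.toBytes(f[2]), Bytes.toBytes(f[3]));
          ctx.write(new ImmutableBytesWritable(Bytes.toBytes(f[0])), put);
        }
      }

      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = new Job(conf, "hfile-bulk-load");                   // placeholder job name
        job.setJarByClass(BulkLoadJobSketch.class);
        job.setMapperClass(LineToPutMapper.class);
        job.setMapOutputKeyClass(ImmutableBytesWritable.class);
        job.setMapOutputValueClass(Put.class);
        FileInputFormat.addInputPath(job, new Path("/tmp/input"));    // placeholder input dir

        // configureIncrementalLoad wires the reducer, TotalOrderPartitioner
        // and HFileOutputFormat against the table's current regions.
        HTable table = new HTable(conf, "my_table");                  // placeholder table name
        HFileOutputFormat.configureIncrementalLoad(job, table);
        FileOutputFormat.setOutputPath(job, new Path("/tmp/hfiles")); // placeholder output dir

        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

configureIncrementalLoad looks up the table's current region boundaries and writes the partitions file (the same steps visible in the console output further down), so the HTable handle has to point at the live target table at job-setup time.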

On Tue, Feb 5, 2013 at 4:55 PM, Sean McNamara
<[EMAIL PROTECTED]> wrote:

>  > Can you tell us the HBase and hadoop versions you were using?
>
>  Ahh yes, sorry I left that out:
>
>  Hadoop: 1.0.3
> HBase: 0.92.0
>
>
>  > I guess you have used the above construct
>
>
>  Our code is as follows:
>  HTable table = new HTable(conf, configHBaseTable);
> FileOutputFormat.setOutputPath(job, outputDir);
> HFileOutputFormat.configureIncrementalLoad(job, table);
>
>
>  Thanks!
>
>   From: Ted Yu <[EMAIL PROTECTED]>
> Reply-To: "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
> Date: Tuesday, February 5, 2013 5:46 PM
> To: "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
> Subject: Re: TaskStatus Exception using HFileOutputFormat
>
>   Can you tell us the HBase and hadoop versions you were using?
> From TestHFileOutputFormat:
>
>     HFileOutputFormat.configureIncrementalLoad(job, table);
>
>     FileOutputFormat.setOutputPath(job, outDir);
> I guess you have used the above construct?
>
>  Cheers
>
> On Tue, Feb 5, 2013 at 4:31 PM, Sean McNamara <[EMAIL PROTECTED]
> > wrote:
>
>>
>>  We're trying to use HFileOutputFormat for bulk hbase loading.   When
>> using HFileOutputFormat's setOutputPath or configureIncrementalLoad, the
>> job is unable to run.  The error I see in the jobtracker logs is: Trying to
>> set finish time for task attempt_201301030046_123198_m_000002_0 when no
>> start time is set, stackTrace is : java.lang.Exception
>>
>>  If I remove any references to HFileOutputFormat, and
>> use FileOutputFormat.setOutputPath, things seem to run great.  Does anyone
>> know what could be causing the TaskStatus error when
>> using HFileOutputFormat?
>>
>>  Thanks,
>>
>>  Sean
>>
>>
>>  What I see on the Job Tracker:
>>
>>  2013-02-06 00:17:33,685 ERROR org.apache.hadoop.mapred.TaskStatus:
>> Trying to set finish time for task attempt_201301030046_123198_m_000002_0
>> when no start time is set, stackTrace is : java.lang.Exception
>>         at
>> org.apache.hadoop.mapred.TaskStatus.setFinishTime(TaskStatus.java:145)
>>         at
>> org.apache.hadoop.mapred.TaskInProgress.incompleteSubTask(TaskInProgress.java:670)
>>         at
>> org.apache.hadoop.mapred.JobInProgress.failedTask(JobInProgress.java:2945)
>>         at
>> org.apache.hadoop.mapred.JobInProgress.updateTaskStatus(JobInProgress.java:1162)
>>         at
>> org.apache.hadoop.mapred.JobTracker.updateTaskStatuses(JobTracker.java:4739)
>>         at
>> org.apache.hadoop.mapred.JobTracker.processHeartbeat(JobTracker.java:3683)
>>         at
>> org.apache.hadoop.mapred.JobTracker.heartbeat(JobTracker.java:3378)
>>         at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>>         at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>         at java.lang.reflect.Method.invoke(Method.java:597)
>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
>>         at java.security.AccessController.doPrivileged(Native Method)
>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>>         at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
>>
>>
>>  What I see from the console:
>>
>>  391  [main] INFO  org.apache.hadoop.hbase.mapreduce.HFileOutputFormat
>>  - Looking up current regions for table
>> org.apache.hadoop.hbase.client.HTable@3a083b1b
>> 1284 [main] INFO  org.apache.hadoop.hbase.mapreduce.HFileOutputFormat  -
>> Configuring 41 reduce partitions to match current region count
>> 1285 [main] INFO  org.apache.hadoop.hbase.mapreduce.HFileOutputFormat  -
>> Writing partition information to
>> file:/opt/webtrends/oozie/jobs/Lab/O/VisitorAnalytics.MapReduce/bin/partitions_1360109875112