HBase, mail # user - Crash when run two jobs at the same time with same Hbase table


GuoWei 2013-03-26, 06:12
Jean-Marc Spaggiari 2013-03-26, 13:18
GuoWei 2013-03-27, 01:14
Re: Crash when run two jobs at the same time with same Hbase table
ramkrishna vasudevan 2013-03-27, 02:58
Interesting.  Need to check this.
Maybe we should configure a different name for the local output directory
for each job.  By any chance, are both jobs writing to the same path?
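The suggestion above can be sketched in plain Java. This is a minimal illustration only: `mapred.local.dir` is the Hadoop 1.x property name, and `uniqueLocalDir` is a hypothetical helper, not part of any Hadoop API.

```java
import java.util.UUID;

// Sketch: give each concurrently running local-mode job its own local
// directory, so two jobs never collide on output/file.out.
public class UniqueLocalDir {

    static String uniqueLocalDir(String base, String jobName) {
        // A UUID suffix keeps even jobs started in the same instant apart.
        return base + "/" + jobName + "-" + UUID.randomUUID();
    }

    public static void main(String[] args) {
        String dirA = uniqueLocalDir("/tmp/mapred/local", "scan-A-to-B");
        String dirB = uniqueLocalDir("/tmp/mapred/local", "scan-A-to-C");
        // Each job would then set, before submission (Hadoop 1.x key):
        //   jobConf.set("mapred.local.dir", dirA);  // first job
        //   jobConf.set("mapred.local.dir", dirB);  // second job
        System.out.println(dirA.equals(dirB)); // distinct paths -> false
    }
}
```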

Regards
Ram

On Wed, Mar 27, 2013 at 6:44 AM, GuoWei <[EMAIL PROTECTED]> wrote:

> Dear JM,
>
> That's correct.
>
> The HBase version is 0.94.2 and the Hadoop version is 0.20.2 / 1.04.
>
> We tested this on both Hadoop versions, 0.20.2 and 1.04.
>
> The error is still there.
>
> Thanks a lot
>
>
>
> Best Regards
> 郭伟 Guo Wei
> -----------------------------------------------------
> 南京西桥科技有限公司
> Western Bridge Tech Ltd.,  Nanjing
>
> 南京市玄武区花园路8号一号楼511
> No. 511, Building 1, No. 8, Hua Yuan Road
>
> Xuanwu District, Nanjing, PR China
>
> Email: [EMAIL PROTECTED]
>
> Tel: +86 25 8528 4900 (Operator)
> Mobile: +86 138 1589 8257
> Fax: +86 25 8528 4980
>
> Weibo: http://weibo.com/guowee
> Web: http://www.wbkit.com
> -----------------------------------------------------
> WesternBridge Tech: Professional software service provider. Professionalism
> is MANNER as well as CAPABILITY.
>
> On 2013-3-26, at 9:18 PM, Jean-Marc Spaggiari <[EMAIL PROTECTED]>
> wrote:
>
> > Hi,
> >
> > So basically, you have one job which is reading from A and writing to
> > B, and one which is reading from A and writing to C, and the two jobs
> > are running at the same time. Is that correct? Are you able to
> > reproduce this each time you run the jobs? Which HBase and
> > Hadoop versions are you running?
> >
> > JM
> >
> > 2013/3/26 GuoWei <[EMAIL PROTECTED]>:
> >> Dear,
> >>
> >> When I run two MR jobs that read the same HBase table and write to
> another HBase table at the same time, one job finishes successfully
> and the other one crashes. The following shows the error log.
> >>
> >> Please help me find out why.
> >>
> >>
> >> <2013-03-25 15:50:34,026> <INFO > <org.apache.hadoop.mapred.JobClient> - map 0% reduce 0% (JobClient.java:monitorAndPrintJob:1301)
> >> <2013-03-25 15:50:36,096> <WARN > <org.apache.hadoop.mapred.Task> - Could not find output size (Task.java:calculateOutputSize:948)
> >> org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find output/file.out in any of the configured local directories
> >>        at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathToRead(LocalDirAllocator.java:429)
> >>        at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathToRead(LocalDirAllocator.java:160)
> >>        at org.apache.hadoop.mapred.MapOutputFile.getOutputFile(MapOutputFile.java:56)
> >>        at org.apache.hadoop.mapred.Task.calculateOutputSize(Task.java:944)
> >>        at org.apache.hadoop.mapred.Task.sendLastUpdate(Task.java:924)
> >>        at org.apache.hadoop.mapred.Task.done(Task.java:875)
> >>        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:374)
> >>        at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:212)
> >> <2013-03-25 15:50:36,100> <INFO > <org.apache.hadoop.mapred.LocalJobRunner> - (LocalJobRunner.java:statusUpdate:321)
> >> <2013-03-25 15:50:36,102> <INFO > <org.apache.hadoop.mapred.Task> - Task 'attempt_local_0001_m_000000_0' done. (Task.java:sendDone:959)
> >> <2013-03-25 15:50:36,111> <WARN > <org.apache.hadoop.mapred.FileOutputCommitter> - Output path is null in cleanup (FileOutputCommitter.java:cleanupJob:100)
> >> <2013-03-25 15:50:36,111> <WARN > <org.apache.hadoop.mapred.LocalJobRunner> - job_local_0001 (LocalJobRunner.java:run:298)
> >> org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find output/file.out in any of the configured local directories
> >>        at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathToRead(LocalDirAllocator.java:429)
> >>        at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathToRead(LocalDirAllocator.java:160)
> >>        at org.apache.hadoop.mapred.MapOutputFile.getOutputFile(MapOutputFile.java:56)
> >>        at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:236)
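The suspected failure mode in the log above can be simulated in plain Java, with no Hadoop dependency. This is a sketch of the hypothesis only (two local-mode jobs sharing one local directory, the first job's cleanup deleting the second job's `output/file.out`), not a trace of what `LocalJobRunner` literally does:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;

// Simulation of the suspected race: two jobs share one local dir; when the
// first job finishes and cleans up, the second job's map output vanishes.
public class SharedLocalDirRace {

    static Path writeMapOutput(Path localDir) throws IOException {
        // Same relative path the stack trace complains about.
        Path out = localDir.resolve("output").resolve("file.out");
        Files.createDirectories(out.getParent());
        return Files.write(out, "map output".getBytes());
    }

    static void cleanupJob(Path localDir) throws IOException {
        // The finished job removes the whole shared tree, children first.
        Files.walk(localDir)
             .sorted(Comparator.reverseOrder())
             .forEach(p -> p.toFile().delete());
    }

    public static void main(String[] args) throws IOException {
        Path shared = Files.createTempDirectory("mapred-local");
        writeMapOutput(shared);  // job 2 writes its map output...
        cleanupJob(shared);      // ...then job 1 finishes and cleans up
        // Job 2 can no longer find output/file.out, as in the stack trace:
        System.out.println(Files.exists(shared.resolve("output/file.out"))); // false
    }
}
```

Giving each job a distinct local directory removes the shared path and, with it, this race.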
GuoWei 2013-03-27, 03:11