Re: Crash when running two jobs at the same time with the same HBase table
Dear JM,

That's correct.

The HBase version is 0.94.2 and the Hadoop version is 0.20.2 / 1.0.4.

We tested this on both Hadoop versions, 0.20.2 and 1.0.4.

The error is still there.

Thanks a lot

Best Regards / 商祺
郭伟 Guo Wei
-----------------------------------------------------
南京西桥科技有限公司
Western Bridge Tech Ltd.,  Nanjing

南京市玄武区花园路8号一号楼511
No. 511, Building 1, No. 8, Hua Yuan Road

Xuanwu District, Nanjing, PR China

Email: [EMAIL PROTECTED]

Tel: +86 25 8528 4900 (Operator)
Mobile: +86 138 1589 8257
Fax: +86 25 8528 4980

Weibo: http://weibo.com/guowee
Web: http://www.wbkit.com
-----------------------------------------------------
WesternBridge Tech: Professional software service provider. Professional in MANNER as well as in CAPABILITY.

On 2013-3-26, at 9:18 PM, Jean-Marc Spaggiari <[EMAIL PROTECTED]> wrote:

> Hi,
>
> So basically, you have one job which is reading from A and writing to
> B, and one which is reading from A and writing to C, and the two jobs
> are running at the same time. Is that correct? Are you able to
> reproduce this each time you run the jobs? Which HBase and
> Hadoop versions are you running?
>
> JM
>
> 2013/3/26 GuoWei <[EMAIL PROTECTED]>:
>> Dear,
>>
>> When I run two MR jobs at the same time, both reading from the same HBase table and writing to the same destination HBase table, one job finishes successfully and the other crashes. The error log below shows the failure.
>>
>> Please help me find out why.
>>
>>
>> <2013-03-25 15:50:34,026> <INFO > <org.apache.hadoop.mapred.JobClient> -  map 0% reduce 0%(JobClient.java:monitorAndPrintJob:1301)
>> <2013-03-25 15:50:36,096> <WARN > <org.apache.hadoop.mapred.Task> - Could not find output size (Task.java:calculateOutputSize:948)
>> org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find output/file.out in any of the configured local directories
>>        at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathToRead(LocalDirAllocator.java:429)
>>        at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathToRead(LocalDirAllocator.java:160)
>>        at org.apache.hadoop.mapred.MapOutputFile.getOutputFile(MapOutputFile.java:56)
>>        at org.apache.hadoop.mapred.Task.calculateOutputSize(Task.java:944)
>>        at org.apache.hadoop.mapred.Task.sendLastUpdate(Task.java:924)
>>        at org.apache.hadoop.mapred.Task.done(Task.java:875)
>>        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:374)
>>        at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:212)
>> <2013-03-25 15:50:36,100> <INFO > <org.apache.hadoop.mapred.LocalJobRunner> - (LocalJobRunner.java:statusUpdate:321)
>> <2013-03-25 15:50:36,102> <INFO > <org.apache.hadoop.mapred.Task> - Task 'attempt_local_0001_m_000000_0' done.(Task.java:sendDone:959)
>> <2013-03-25 15:50:36,111> <WARN > <org.apache.hadoop.mapred.FileOutputCommitter> - Output path is null in cleanup(FileOutputCommitter.java:cleanupJob:100)
>> <2013-03-25 15:50:36,111> <WARN > <org.apache.hadoop.mapred.LocalJobRunner> - job_local_0001(LocalJobRunner.java:run:298)
>> org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find output/file.out in any of the configured local directories
>>        at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathToRead(LocalDirAllocator.java:429)
>>        at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathToRead(LocalDirAllocator.java:160)
>>        at org.apache.hadoop.mapred.MapOutputFile.getOutputFile(MapOutputFile.java:56)
>>        at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:236)
>> <2013-03-25 15:50:37,029> <INFO > <org.apache.hadoop.mapred.JobClient> -  map 100% reduce 0%(JobClient.java:monitorAndPrintJob:1301)
>> <2013-03-25 15:50:37,030> <INFO > <org.apache.hadoop.mapred.JobClient> - Job complete: job_local_0001(JobClient.java:monitorAndPrintJob:1356)
>> <2013-03-25 15:50:37,031> <INFO > <org.apache.hadoop.mapred.JobClient> - Counters: 15(Counters.java:log:585)
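For readers who hit the same trace: the `LocalJobRunner` frames and the shared job id (`job_local_0001`) in the log suggest both jobs were running in local mode, so their map attempt outputs (`output/file.out`) landed under the same default local directory and one job's cleanup removed the other's files. A sketch of one possible workaround, assuming that diagnosis; the jar name, job classes, and paths below are hypothetical placeholders, not from the thread:

```shell
# Sketch, not a verified fix: when two jobs must run in local mode at the
# same time, point each at its own mapred.local.dir so their intermediate
# map outputs (output/file.out) cannot collide in a shared directory.
# my-jobs.jar, com.example.JobA, and com.example.JobB are placeholders.
hadoop jar my-jobs.jar com.example.JobA -D mapred.local.dir=/tmp/mr-local-a &
hadoop jar my-jobs.jar com.example.JobB -D mapred.local.dir=/tmp/mr-local-b &
wait
```

Note that the `-D` generic option is only parsed when the job's driver goes through `ToolRunner`/`GenericOptionsParser`; otherwise the property would have to be set on the job's `Configuration` before submission. Running the jobs on an actual (pseudo-)distributed cluster, where each attempt gets its own task-tracker-managed directory, would sidestep the collision entirely.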