Environment: Spark 2.1.1, CarbonData 1.1.1, Hadoop 2.7.2
I added debug information to the code; the output is shown below.

Block B-tree loading failed

Why does CarbonUtil.calculateMetaSize compute getBlockLength=0 and getBlockOffset=8301549 for this file?

Caused by: org.apache.carbondata.core.datastore.exception.IndexBuilderException: Invalid carbon data file: hdfs://ns1/user/e_carbon/public/carbon.store/e_carbon/prod_inst_his1023c/Fact/Part0/Segment_1.1/part-0-172_batchno0-0-1508833127408.carbondata :getBlockLength=0 getBlockOffset=8301549 requiredMetaSize=-8301549 isV1=false getVersion=ColumnarFormatV3
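From the values in the exception message, the negative requiredMetaSize seems to follow directly from the reported block length and offset. The sketch below only illustrates the arithmetic implied by the log, not the actual CarbonUtil.calculateMetaSize source; the variable names and the HDFS length cross-check are assumptions:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class MetaSizeCheck {
    public static void main(String[] args) throws IOException {
        // Values reported in the exception message
        long blockLength = 0L;          // getBlockLength=0
        long blockOffset = 8301549L;    // getBlockOffset=8301549

        // The log suggests requiredMetaSize = blockLength - blockOffset;
        // with a zero block length this goes negative (-8301549), which is
        // what triggers the "Invalid carbon data file" exception
        long requiredMetaSize = blockLength - blockOffset;
        System.out.println("requiredMetaSize = " + requiredMetaSize);

        // Cross-check the real file length in HDFS; if it is non-zero here
        // while the block info still records 0, the index entry looks stale
        Path dataFile = new Path(
            "hdfs://ns1/user/e_carbon/public/carbon.store/e_carbon/"
            + "prod_inst_his1023c/Fact/Part0/Segment_1.1/"
            + "part-0-172_batchno0-0-1508833127408.carbondata");
        FileSystem fs = dataFile.getFileSystem(new Configuration());
        System.out.println("actual HDFS length = "
            + fs.getFileStatus(dataFile).getLen());
    }
}

In other words, the question reduces to why the recorded block length for this block is 0 while the offset is 8301549; if the file itself is non-empty on HDFS, the index information for Segment_1.1 may simply be out of sync with the data file.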
1. Debug information:
scala> cc.sql("select prod_inst_id,count(*) from e_carbon.prod_inst_his1023c group by prod_inst_id having count(*)>1").show
[Stage 0:=============================>                        (157 + 50) / 283]17/10/30 10:39:24 WARN scheduler.TaskSetManager: Lost task 252.0 in stage 0.0 (TID 201, HDD010, executor 22): org.apache.carbondata.core.datastore.exception.IndexBuilderException: Block B-tree loading failed
at org.apache.carbondata.core.datastore.BlockIndexStore.fillLoadedBlocks(BlockIndexStore.java:264)
at org.apache.carbondata.core.datastore.BlockIndexStore.getAll(BlockIndexStore.java:189)
at org.apache.carbondata.core.scan.executor.impl.AbstractQueryExecutor.initQuery(AbstractQueryExecutor.java:131)
at org.apache.carbondata.core.scan.executor.impl.AbstractQueryExecutor.getBlockExecutionInfos(AbstractQueryExecutor.java:186)
at org.apache.carbondata.core.scan.executor.impl.VectorDetailQueryExecutor.execute(VectorDetailQueryExecutor.java:36)
at org.apache.carbondata.spark.vectorreader.VectorizedCarbonRecordReader.initialize(VectorizedCarbonRecordReader.java:112)
at org.apache.carbondata.spark.rdd.CarbonScanRDD.compute(CarbonScanRDD.scala:204)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
at org.apache.spark.scheduler.Task.run(Task.scala:99)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.ExecutionException: org.apache.carbondata.core.datastore.exception.IndexBuilderException: Invalid carbon data file: hdfs://ns1/user/e_carbon/public/carbon.store/e_carbon/prod_inst_his1023c/Fact/Part0/Segment_1.1/part-0-172_batchno0-0-1508833127408.carbondata getBlockLength=0 getBlockOffset=8301549 requiredMetaSize=-8301549  getVersion=ColumnarFormatV3
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.apache.carbondata.core.datastore.BlockIndexStore.fillLoadedBlocks(BlockIndexStore.java:254)
... 21 more
Caused by: org.apache.carbondata.core.datastore.exception.IndexBuilderException: Invalid carbon data file: hdfs://ns1/user/e_carbon/public/carbon.store/e_carbon/prod_inst_his1023c/Fact/Part0/Segment_1.1/part-0-172_batchno0-0-1508833127408.carbondata=lianch:getBlockLength=0 getBlockOffset=8301549 requiredMetaSize=-8301549 isV1=false getVersion=ColumnarFormatV3
at org.apache.carbondata.core.datastore.AbstractBlockIndexStoreCache.checkAndLoadTableBlocks(AbstractBlockIndexStoreCache.java:116)
at org.apache.carbondata.core.datastore.BlockIndexStore.loadBlock(BlockIndexStore.java:304)
at org.apache.carbondata.core.datastore.BlockIndexStore.get(BlockIndexStore.java:109)
at org.apache.carbondata.core.datastore.BlockIndexStore$BlockLoaderThread.call(BlockIndexStore.java:294)
at org.apache.carbondata.core.datastore.BlockIndexStore$BlockLoaderThread.call(BlockIndexStore.java:284)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
... 3 more

[Stage 0:==========================================>           (223 + 50) / 283]17/10/30 10:39:26 ERROR scheduler.TaskSetManager: Task 252 in stage 0.0 failed 10 times; aborting job
17/10/30 10:39:26 WARN scheduler.TaskSetManager: Lost task 61.0 in stage 0.0 (TID 184, HDD012, executor 7): TaskKilled (killed intentionally)
17/10/30 10:39:26 WARN scheduler.TaskSetManager: Lost task 71.0 in stage 0.0 (TID 212, HDD008, executor 18): TaskKilled (killed intentionally)
17/10/30 10:39:26 WARN scheduler.TaskSetManager: Lost task 27.0 in stage 0.0 (TID 83, HDD007, executor 8): TaskKilled (killed intentionally)
17/10/30 10:39:26 WARN scheduler.TaskSetManager: Lost task 94.0 in stage 0.0 (TID 250, HDD014, executor 24): TaskKilled (killed intentionally)
17/10/30 10:39:26 WARN spark.ExecutorAllocationManager: No stages are running, but numRunningTasks != 0
17/10/30 10:39:26 WARN scheduler.TaskSetManager: Lost task 49.0 in stage 0.0 (TID 219, HDD010, executor 22): TaskKilled (killed intentionally)
17/10/30 10:39:26 WARN scheduler.TaskSetManager: Lost task 92.0 in stage 0.0 (TID 222, HDD008, executor 26): TaskKilled (killed intentionally)
17/10/30 10:39:26 WARN scheduler.TaskSetManager: Lost task 99.0 in stage 0.0 (TID 200, HDD009, executor 13): TaskKilled (killed intentionally)
17/10/30 10:39:26 WARN scheduler.TaskSetManager: Lost task 97.0 in stage 0.0 (TID 115, HDD010, executor 22): TaskKilled (killed intentionally)
17/10/30 10:39:26 WARN scheduler.TaskSetManager: Lost task 216.0 in stage 0.0 (TID 281, HDD009, executor 13): TaskKilled (killed intentionally)
17/10/30 10:39:26 WARN scheduler.TaskSetManager: Lost task 90.0 in stage 0.0 (TID 220, HDD008, exe