First of all, writing a deletedelta file first creates an empty file and then does write, flush, and close; an exception during write, flush, or close leaves an empty file behind (refer to: org.apache.carbondata.core.writer.CarbonDeleteDeltaWriterImpl#write(org.apache.carbondata.core.mutate.DeleteDeltaBlockDetails)). For a and b, we added logs, and the exception happens during close (see the sketch after the stack trace below).
WARN DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /user/ip_crm/public/offer_prod_inst_rel_cab/Fact/Part0/Segment_0/part-8-4_batchno0-0-1518490201583.deletedelta (inode 1306621743): File does not exist. Holder DFSClient_NONMAPREDUCE_-754557169_117 does not have any open files.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(
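To make the failure mode concrete, here is a minimal sketch of the create-then-write/flush/close sequence described above. The class and method names are hypothetical, not the actual CarbonDeleteDeltaWriterImpl code; it only assumes the writer goes through the standard HDFS FileSystem.create() API.

import java.io.DataOutputStream;
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical illustration only; not the real CarbonDeleteDeltaWriterImpl.
public final class DeleteDeltaWriteSketch {

  // FileSystem.create() registers the file with the NameNode right away,
  // so a zero-length .deletedelta file exists before any bytes are written.
  static void writeDeleteDelta(Path deltaPath, String deltaJson) throws IOException {
    FileSystem fs = FileSystem.get(new Configuration());
    DataOutputStream out = fs.create(deltaPath); // empty file created here
    try {
      out.writeBytes(deltaJson); // 1) write
      out.flush();               // 2) flush
    } finally {
      out.close();               // 3) close -- if the lease was already lost
                                 //    (LeaseExpiredException), the empty file
                                 //    is left behind on HDFS
    }
  }
}

One common way to keep readers from seeing such partial files is to write to a temporary name and rename it only after a successful close (an HDFS rename is atomic), or to delete the file in a catch block when any of the three steps fails.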
2. For c: yes, you can give us your jar. Our environment is Spark 2.1.1 + Hadoop 2.7.2; mail: [EMAIL PROTECTED]
From: Liang Chen
Date: 2018-03-20 22:06
To: dev
Subject: Re: Getting [Problem in loading segment blocks] error after doing multi update operations
Thanks for your feedback.
Let me first reproduce this issue and check the details.