To: dev

Log info:
First of all, writing the deletedelta file creates an empty file and then does write, flush, and close; an exception during write, flush, or close leaves that empty file behind (refer to org.apache.carbondata.core.writer.CarbonDeleteDeltaWriterImpl#write(org.apache.carbondata.core.mutate.DeleteDeltaBlockDetails)).
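For illustration only, here is a minimal Java sketch of that create-then-write pattern (not the actual CarbonDeleteDeltaWriterImpl code; the path and payload below are made up):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DeleteDeltaWriteSketch {
  public static void main(String[] args) throws IOException {
    FileSystem fs = FileSystem.get(new Configuration());
    // Hypothetical delete-delta path; real ones look like
    // .../Fact/Part0/Segment_0/part-...deletedelta
    Path deltaPath = new Path("/tmp/part-0-0.deletedelta");

    // Step 1: create() already materializes a zero-length file on HDFS.
    FSDataOutputStream out = fs.create(deltaPath, true);

    // Step 2: write + flush + close. If any of these throws (for example a
    // LeaseExpiredException surfacing on close), the zero-length file from
    // step 1 stays on disk, and later reads of the segment trip over it.
    out.writeBytes("{\"deletedRows\":[]}");
    out.hflush();
    out.close();
  }
}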
1. As for a and b: we added logs, and the exception happens during close (a hedged mitigation sketch follows after point 2 below):
WARN DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /user/ip_crm/public/offer_prod_inst_rel_cab/Fact/Part0/Segment_0/part-8-4_batchno0-0-1518490201583.deletedelta (inode 1306621743): File does not exist. Holder DFSClient_NONMAPREDUCE_-754557169_117 does not have any open files.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3439)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:3242)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3080)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3040)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:789)
...
2. For c: yes, you can give us your jar. Environment: Spark 2.1.1 + Hadoop 2.7.2. Mail: [EMAIL PROTECTED]
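As an aside, here is a hedged sketch of one possible mitigation (not necessarily CarbonData's actual fix): delete the partially created file whenever write/flush/close fails, so no empty .deletedelta file is left behind. writeDeltaSafely is a hypothetical helper name:

import java.io.IOException;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DeleteDeltaCleanupSketch {
  // Hypothetical defensive wrapper: if any step fails, remove the
  // empty/partial file instead of leaving it for readers to hit.
  static void writeDeltaSafely(FileSystem fs, Path deltaPath, String content)
      throws IOException {
    FSDataOutputStream out = fs.create(deltaPath, true);
    boolean ok = false;
    try {
      out.writeBytes(content);
      out.hflush();
      out.close();  // close() can also throw, e.g. on an expired lease
      ok = true;
    } finally {
      if (!ok) {
        // Best-effort cleanup; delete() returning false just means the
        // file is already gone, which is fine here.
        fs.delete(deltaPath, false);
      }
    }
  }
}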
yixu2001
 
From: Liang Chen
Date: 2018-03-20 22:06
To: dev
Subject: Re: Getting [Problem in loading segment blocks] error after doing multi update operations
Hi
 
Thanks for your feedback.
Let me first reproduce this issue and check the details.
 
Regards
Liang
 
 