


Re: Review Request 15674: Reassign partitions should delete the old replicas from disk


> On Nov. 19, 2013, 6:40 p.m., Jay Kreps wrote:
> > What happens if I am doing a read or write concurrently with a delete?
> >
> > Would it be simpler just to have the log delete work like the segment delete, where rather than trying to lock we remove it from the segment list and then just enqueue a delete in 60 seconds? My concern is just that reasoning about the various locking strategies in the log is getting increasingly difficult.
>
> Jun Rao wrote:
>     Yes, we could try deleting the log asynchronously. The issues there are:
>    
>     1. The same partition could be moved back to this broker during the delayed window.
>     2. It's not clear if 60 secs (or any value) is good enough since the time that an ongoing scheduled flush takes is unbounded.
>    
>     The following is how this patch handles outstanding reads/writes on the deleted data.
>    
>     1. All read operations are fine, since we already handle unexpected exceptions in KafkaApis; the caller will get an error.
>     2. Currently, if we hit an IOException while writing to the log from a producer request, the replica fetcher, or the log flusher, we halt the broker. We need to make sure that the deletion of a log doesn't trigger such a halt. This is achieved by preventing those operations on the log once it has been deleted.
>     2.1 For producer requests, the delete partition operation will synchronize on the leaderAndIsrUpdate lock.
>     2.2 For replica fetcher, this is already handled since the fetcher is removed before the log is deleted.
>     2.3 For log flusher, the flush and the delete will now synchronize on a delete lock.
>    
>     I agree that this approach uses more locks, which potentially makes the code harder to understand. However, my feeling is that this is probably a less hacky approach than the async delete one.
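A minimal sketch of the flush/delete synchronization described in point 2.3 above. This is not the actual patch code; names like `deleteLock`, `isDeleted`, and `SketchLog` are illustrative only. The key point is that flush and delete serialize on the same lock, and flush becomes a no-op once the log is deleted, so a concurrently scheduled flush can never trigger an IOException (and thus a broker halt) against removed files:

```scala
// Illustrative sketch only; identifiers are hypothetical, not from the patch.
class SketchLog(val name: String) {
  private val deleteLock = new Object
  @volatile private var isDeleted = false
  var flushCount = 0

  // Flush is a no-op once the log has been deleted, so an in-flight
  // scheduled flush can never touch files that delete() has removed.
  def flush(): Unit = deleteLock.synchronized {
    if (!isDeleted)
      flushCount += 1 // stand-in for writing dirty segments to disk
  }

  // Delete marks the log deleted under the same lock, so it cannot
  // interleave with an in-progress flush.
  def delete(): Unit = deleteLock.synchronized {
    isDeleted = true // stand-in for removing the segment files from disk
  }

  def deleted: Boolean = isDeleted
}

val log = new SketchLog("topic-0")
log.flush()
log.delete()
log.flush()             // ignored: the log is already deleted
println(log.flushCount) // prints 1
println(log.deleted)    // prints true
```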

At least until the various locks are cleaned up, the current approach used in the patch seems safer than an async delete. I will take a closer look at the patch sometime today.
- Neha
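For contrast, the segment-style deferred delete that Jay proposed could be sketched roughly as below. Again, the names (`DeferredDeleteSketch`, `asyncDelete`) are hypothetical, not Kafka code: the log is removed from the live map immediately so no new reads or writes can find it, and the on-disk removal is enqueued after a delay. Jun's objection in point 1 is visible in this shape: if the partition is reassigned back to the broker during the delay window, the pending task would delete the new replica's files.

```scala
import java.util.concurrent.{Executors, TimeUnit}
import scala.collection.concurrent.TrieMap

// Illustrative sketch of a deferred (async) delete; not the actual Kafka code.
object DeferredDeleteSketch {
  private val scheduler = Executors.newSingleThreadScheduledExecutor()
  private val liveLogs = TrieMap[String, String]() // partition -> log dir

  def addLog(partition: String, dir: String): Unit = liveLogs.put(partition, dir)

  def contains(partition: String): Boolean = liveLogs.contains(partition)

  // Remove the log from the map first, so no new operations can reach it,
  // then enqueue the file removal (60s in the segment-delete case). Note the
  // race Jun points out: nothing stops the partition from being re-created
  // on this broker before the scheduled task runs.
  def asyncDelete(partition: String, delayMs: Long): Unit = {
    liveLogs.remove(partition).foreach { dir =>
      scheduler.schedule(new Runnable {
        def run(): Unit = println(s"deleting files under $dir")
      }, delayMs, TimeUnit.MILLISECONDS)
    }
  }

  def shutdown(): Unit = {
    scheduler.shutdown()
    scheduler.awaitTermination(1, TimeUnit.SECONDS)
  }
}

DeferredDeleteSketch.addLog("demo-0", "/tmp/kafka-logs/demo-0")
DeferredDeleteSketch.asyncDelete("demo-0", delayMs = 50L)
println(DeferredDeleteSketch.contains("demo-0")) // prints false: gone from the map at once
DeferredDeleteSketch.shutdown() // wait for the pending file delete to run
```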
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/15674/#review29123
-----------------------------------------------------------
On Nov. 19, 2013, 4:28 p.m., Jun Rao wrote:
>
>
> (Updated Nov. 19, 2013, 4:28 p.m.)
>
>
> Review request for kafka.
>
>
> Bugs: KAFKA-1074
>     https://issues.apache.org/jira/browse/KAFKA-1074
>
>
> Repository: kafka
>
>
> Description
> -------
>
> kafka-1074; fix 3
>
>
> kafka-1074; fix 2
>
>
> kafka-1074
>
>
> Diffs
> -----
>
>   core/src/main/scala/kafka/cluster/Partition.scala 02ccc17c79b6d44c75f9bb6ca7cda8c51ae6f6fb
>   core/src/main/scala/kafka/log/Log.scala 1883a53de112ad08449dc73a2ca08208c11a2537
>   core/src/main/scala/kafka/log/LogManager.scala 81be88aa618ed5614703d45a0556b77c97290085
>   core/src/main/scala/kafka/log/LogSegment.scala 0d6926ea105a99c9ff2cfc9ea6440f2f2d37bde8
>   core/src/main/scala/kafka/server/ReplicaManager.scala 161f58134f20f9335dbd2bee6ac3f71897cbef7c
>   core/src/test/scala/unit/kafka/admin/AdminTest.scala c30069e837e54fb91bf1d5b75b133282a28dedf8
>
> Diff: https://reviews.apache.org/r/15674/diff/
>
>
> Testing
> -------
>
>
> Thanks,
>
> Jun Rao
>
>