Kafka user mailing list: Partition data for deleted topic found in kafka-logs, also, found leader: -1


Yogesh Sangvikar 2013-06-18, 13:26
Yogesh Sangvikar 2013-06-19, 04:42
Jun Rao 2013-06-19, 15:17
Re: Partition data for deleted topic found in kafka-logs, also, found leader: -1
Is there a way to manually do this?  Or is there a way to manually change
the replication factor for a topic?  Possibly by manipulating zookeeper
data directly?
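
From poking around, the replica assignment for a topic in 0.8 seems to live
as JSON under /brokers/topics/<topic> in zookeeper, which I assume would be
the data to edit for a replication-factor change. Here is a minimal sketch of
how I've been dumping it (assuming a local zookeeper at localhost:2181 and the
test1 topic from this thread; the class name is just for illustration):

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ShowTopicAssignment {
    public static void main(String[] args) throws Exception {
        // Assumed zookeeper address; adjust for your cluster.
        // (A real tool would wait for the Connected event before issuing requests.)
        ZooKeeper zk = new ZooKeeper("localhost:2181", 10000, new Watcher() {
            public void process(WatchedEvent event) { /* no-op */ }
        });
        try {
            // In 0.8 the assignment appears to be JSON of the form
            // {"version":1,"partitions":{"0":[4,2,3],"1":[0,3,4], ...}}
            byte[] data = zk.getData("/brokers/topics/test1", false, null);
            System.out.println(new String(data, "UTF-8"));
        } finally {
            zk.close();
        }
    }
}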

Jason
On Wed, Jun 19, 2013 at 8:17 AM, Jun Rao <[EMAIL PROTECTED]> wrote:

> Yogesh,
>
> Delete topic is not supported in 0.8 beta yet. We decided to take this
> feature out in 0.8 beta to make the release available sooner.
>
> Thanks,
>
> Jun
>
>
> On Tue, Jun 18, 2013 at 6:25 AM, Yogesh Sangvikar <[EMAIL PROTECTED]> wrote:
>
> > Hi Team,
> >
> > I am exploring the kafka 0.8 beta release to understand the data flow and
> > replication features.
> > While testing I found that the partition data for a deleted topic is
> > preserved in kafka-logs. Why this behavior? Consider the case below:
> >
> > A topic (say test1) is created with 6 partitions and replication factor 3
> > on a system with 4 brokers, so the corresponding log and index files are
> > created per partition in kafka-logs. If I delete the topic and, after some
> > time, recreate the same topic ‘test1’ with 2 partitions and replication
> > factor 2, the kafka-logs directory is confusing, because the partition
> > directories for the previous topic are still present. *Please help me
> > understand this scenario.*
> >
> > Also, while testing the replication and leader election features, I
> > observed a leader: -1 status:
> >
> > Original status:
> > topic: test1    partition: 0    leader: 4    replicas: 4,2,3    isr: 4,2,3
> > topic: test1    partition: 1    leader: 0    replicas: 0,3,4    isr: 0,3,4
> > topic: test1    partition: 2    leader: 1    replicas: 1,4,0    isr: 1,4,0
> >
> > if leader 4 goes down:
> > topic: test1    partition: 0    leader: 2    replicas: 4,2,3    isr: 2,3
> > topic: test1    partition: 1    leader: 0    replicas: 0,3,4    isr: 0,3,4
> > topic: test1    partition: 2    leader: 1    replicas: 1,4,0    isr: 1,0,4
> >
> > if leader 2 goes down:
> > topic: test1    partition: 0    leader: 3    replicas: 4,2,3    isr: 3
> > topic: test1    partition: 1    leader: 0    replicas: 0,3,4    isr: 0,3,4
> > topic: test1    partition: 2    leader: 1    replicas: 1,4,0    isr: 1,0,4
> >
> > if leader 3 then also goes down:
> > topic: test1    partition: 0    leader: -1   replicas: 4,2,3    isr:
> > topic: test1    partition: 1    leader: 0    replicas: 0,3,4    isr: 0,4
> > topic: test1    partition: 2    leader: 1    replicas: 1,4,0    isr: 1,0,4
> >
> > As per the kafka protocol guide, *leader: -1 means that if no leader
> > exists because we are in the middle of a leader election, this id will
> > be -1.*
> >
> > *Does that mean the data from partition 0 will be unavailable because
> > there is no leader (leader election in progress)?*
> > Also, as per my understanding, could we have an auto re-balance facility
> > that re-assigns partition replicas to the available brokers when one of
> > the brokers is down? In the case above (leader 4 goes down), partition 0
> > could be replicated to broker 0 or 1 to re-balance the replication.
> >
> > Please correct me if any of this understanding is wrong; these are my
> > initial observations.
> >
> > Thanks in advance.
> >
> > Thanks,
> > Yogesh Sangvikar
> >
>
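
Regarding the leader: -1 status quoted above, the live leader and isr for
each partition also seem to be tracked in zookeeper in 0.8, under
/brokers/topics/<topic>/partitions/<n>/state, so the state can be checked
directly while an election is in progress. A similar sketch (same
assumptions; the partition count and class name are just for illustration):

import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ShowPartitionState {
    public static void main(String[] args) throws Exception {
        // Assumed zookeeper address; adjust for your cluster.
        ZooKeeper zk = new ZooKeeper("localhost:2181", 10000, new Watcher() {
            public void process(WatchedEvent event) { /* no-op */ }
        });
        try {
            for (int partition = 0; partition < 3; partition++) {
                String path = "/brokers/topics/test1/partitions/" + partition + "/state";
                try {
                    // JSON of the form {"leader":4,"isr":[4,2,3],...};
                    // leader is -1 when no replica is currently able to lead.
                    byte[] data = zk.getData(path, false, null);
                    System.out.println("partition " + partition + ": " + new String(data, "UTF-8"));
                } catch (KeeperException.NoNodeException e) {
                    System.out.println("partition " + partition + ": no state znode yet");
                }
            }
        } finally {
            zk.close();
        }
    }
}

In the scenario above, all three assigned replicas of partition 0 (brokers 4,
2 and 3) are down, so I'd expect that partition to stay unavailable until one
of them comes back, while partitions 1 and 2 keep serving from their
remaining replicas.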

 
Jun Rao 2013-06-20, 04:02