Kafka, user mailing list - slow organic migration to 0.8


Jason Rosenberg 2013-05-02, 03:44
Re: slow organic migration to 0.8
Neha Narkhede 2013-05-02, 04:54
Jason,

During the migration, the only thing to watch out for is that the producers
of a particular topic don't upgrade to 0.8 before its consumers do. You can
let applications upgrade on their own schedules as long as they respect that
requirement. If only a few applications produce to and consume from a
particular topic, you can group those together and push them at roughly the
same time.

Thanks,
Neha
On May 1, 2013 8:44 PM, "Jason Rosenberg" <[EMAIL PROTECTED]> wrote:

> So, we have lots of apps producing messages to our kafka 0.7.2 instances
> (and multiple consumers of the data).
>
> We are not going to be able to follow the suggested migration path, where
> we first migrate all data, then move all producers to use 0.8, etc.
> Instead, many apps are on their own release cycle, and we need to allow
> them to upgrade their kafka libraries as part of their regular release
> schedule.
>
> Is there a procedure I'm not seeing, or am I right in thinking I'll need to
> maintain duplicate kafka clusters (and consumers) for a time?  Or can we
> have a real-time data migration consumer running continuously against the
> 0.7.2 kafka store, so that all the data ultimately ends up in 0.8 (see the
> migration-tool sketch below)?  Eventually, the data going to 0.7.2 will
> dwindle to nothing, but it could take a while.
>
> So, I'm thinking I'll just need to maintain dual sets of kafka servers for
> a while.  Since this won't result in any increase in load/disk space, etc.,
> I was thinking of letting the instances be multi-tenant with each other
> (e.g. kafka 0.7.2 and kafka 0.8 on the same box, using separate ports and
> separate log storage directories, but shared disks; see the config sketch
> below).  Is this ok, or a terrible idea?  I expect the transition to take
> several weeks.
>
> Thanks,
>
> Jason
>
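
The "real-time data migration consumer" Jason describes is essentially what
the 0.8 distribution ships as kafka.tools.KafkaMigrationTool, which embeds a
0.7 consumer and an 0.8 producer in one process. A rough invocation sketch,
per the 0.8 migration notes (the jar names and config paths below are
placeholders for whatever your deployment actually uses):

    # Continuously mirror all 0.7.2 topics into the 0.8 cluster.
    # kafka-0.7.2.jar and zkclient-0.1.jar are the 0.7-era client jars
    # the tool loads in an isolated classloader (placeholder names).
    ./bin/kafka-run-class.sh kafka.tools.KafkaMigrationTool \
      --kafka.07.jar kafka-0.7.2.jar \
      --zkclient.01.jar zkclient-0.1.jar \
      --num.producers 2 \
      --consumer.config source-07-consumer.properties \
      --producer.config target-08-producer.properties \
      --whitelist ".*"

Left running for the whole transition, it lets the 0.7.2 traffic dwindle to
nothing while every message still lands in 0.8.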
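
As for running both brokers on one box: separate ports, separate log
directories, and separate zookeeper namespaces keep them out of each other's
way. A minimal sketch of the two server.properties files, assuming
hypothetical paths and a shared zookeeper ensemble with distinct chroots
(note the property names differ slightly between the two versions):

    # server-0.7.properties: the 0.7.2 broker
    brokerid=1
    port=9092
    log.dir=/data/kafka-0.7/logs
    zk.connect=zk1:2181/kafka-0.7

    # server-0.8.properties: the 0.8 broker on the same box
    broker.id=1
    port=9093
    log.dirs=/data/kafka-0.8/logs
    zookeeper.connect=zk1:2181/kafka-0.8

The distinct chroots matter because the two versions lay out their zookeeper
metadata differently, so sharing a namespace would be risky.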

 
Jason Rosenberg 2013-05-02, 05:23