hsy541@... 2013-12-04, 21:48
Joe Stein 2013-12-04, 21:59
hsy541@... 2013-12-04, 22:17
Jason Rosenberg 2013-12-05, 03:21
Joe Stein 2013-12-05, 14:44
I expect the vast majority of Kafka users are using Java for client apps,
and in most cases, they will just use the default jars they get in the
binary download.
I think the confusing thing here is that there is a single jar file that
includes all the broker, consumer, and producer client code. The 'broker'
is really just some scripts wrapped around launching the JVM with this jar
file. If you download the binary, you also have scripts that launch
consumers and producers using that same jar, just passing different
command-line arguments.
So, really, there are no separate artifacts to speak of from a broker,
consumer, or producer standpoint. The good news is that I understand there
are plans in the works to have separate jars for the broker, consumer
clients, and producer clients in 0.9.x.
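To illustrate the single-artifact point (coordinates shown are the 0.8-era naming as I understand it, where the Scala version is baked into the artifactId; treat the exact version number as an assumption):

```xml
<!-- Illustrative only: in the 0.8-era layout, one artifact per Scala
     version; this single jar contains broker, producer, and consumer code. -->
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka_2.8.0</artifactId>
  <version>0.8.0</version>
</dependency>
```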
Thanks for the info around 2.8.2 and beyond. However, I think we'll try to
stay on 2.8.0 for the broker (since that seems to be the stable version
used at LinkedIn). And in our environment, we use a single artifact (via
Maven) for all our broker and client apps.
I don't think the issue in KAFKA-1163 is an upload issue (when I run 'sbt
make-pom' locally, it builds the pom the same way as the broken one
uploaded to the public repo).

At the very least, I should think you'd want to remove the currently broken
one from the public repo, since from a Maven standpoint it's not usable.
Perhaps a simple patch that only fixes the pom issue (call it …)
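For context, a workaround commonly reported for unusable 0.8-era poms was to exclude unresolvable transitive dependencies in the consuming project. This is a sketch, not a confirmed fix for KAFKA-1163; the specific exclusions below are an assumption and may not match the actual breakage:

```xml
<!-- Hypothetical workaround sketch: exclude transitive dependencies that
     cannot be resolved from public repositories. -->
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka_2.8.0</artifactId>
  <version>0.8.0</version>
  <exclusions>
    <exclusion>
      <groupId>com.sun.jmx</groupId>
      <artifactId>jmxri</artifactId>
    </exclusion>
    <exclusion>
      <groupId>com.sun.jdmk</groupId>
      <artifactId>jmxtools</artifactId>
    </exclusion>
    <exclusion>
      <groupId>javax.jms</groupId>
      <artifactId>jms</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```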
On Thu, Dec 5, 2013 at 9:43 AM, Joe Stein <[EMAIL PROTECTED]> wrote:
> << Heh, that sounds amazing, considering that's the binary release
> version you've put up for download :)
> Not really. The Kafka broker compiled with Scala 2.8.0 is a result of "that
> is how it runs at LinkedIn" originally (and still, I think), and that is how
> it got run everywhere else as a result. So the majority of installs from
> inception have been 2.8.0 Kafka brokers... Now... for applications producing
> and consuming (at least for Scala shops) 2.8.0 is just not what you want...
> I switched from 2.8.0 to 2.9.0-1 almost immediately because 2.9.0-1 had
> lots of features that I wanted in my business domain application (which had
> nothing to do with Kafka), and eventually to 2.10 (again, for language
> features in the business domain application having nothing to do with
> Kafka). So, would Kafka benefit from compiling the broker with something
> other than 2.8.0? Maybe, if we wanted to use newer language features in the
> broker; otherwise, it has proven to be stable, so why add risk to something
> by changing it (especially without testing and proving it in production for
> the community) if it is working just fine... So, long story short, it is
> about risk mitigation from a production perspective.
> If you are in a Java environment, then it would be best to use 2.8.2 (or
> even move further forward) for the producer/consumer, to avoid
> incompatibilities between things you might be doing in your code and the
> producer and consumer libraries. The broker is 100% self-contained, so if
> it were going to hit any Scala bugs from 2.8.0, it would have done so
> already in the last few years.
> << Anyway, it seems this pom issue ought to be solvable, no?
> It is very possible it was an upload issue during the release, and running
> it again would fix it. I will update the release notes and write some code
> to test every single version on staging before the next vote... code that
> others can also use during the vote (David Arthur has now created something
> similar for Ant/Ivy).
> Hope this makes sense and helps clear things up. There has been talk of
> starting to move things forward from a Scala perspective in 0.8.1.
> Joe Stein
> Founder, Principal Consultant
> Big Data Open Source Security LLC
> Twitter: @allthingshadoop <http://www.twitter.com/allthingshadoop>
> On Wed, Dec 4, 2013 at 10:20 PM, Jason Rosenberg <[EMAIL PROTECTED]> wrote: