There are several important metrics when choosing an RPC framework, including performance, multi-language support, version compatibility, usability, and product maturity.
PB performs well in almost all of these aspects, which I think may be why the community chose it.
----- Original Message -----
From: "Ted Dunning" <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Sent: Wednesday, January 9, 2013 3:27:36 PM
Subject: Re: Question about protocol buffer RPC
Avro and Thrift both work well for RPC implementations.
I have lately been using protobufs with protobuf-rpc-pro and have been very
happy with it. It has much of the debuggability of Thrift, but with
On Tue, Jan 8, 2013 at 8:44 PM, Hangjun Ye <[EMAIL PROTECTED]> wrote:
> Our project is facing a similar problem: choosing an RPC framework.
> So I want to know whether there are drawbacks in Avro/Thrift that led Hadoop
> not to use them.
> I would appreciate any insights that could be shared on this!
> 2013/1/9 Hangjun Ye <[EMAIL PROTECTED]>
> > Hi there,
> > It looks like Hadoop is using Google's protocol buffers for its RPC (correct
> > me if I'm wrong).
> > Avro/Thrift do the same thing, support more languages, and have a complete
> > RPC implementation. It seems Google's protocol buffer RPC only defines
> > service interfaces but doesn't include an implementation with a concrete
> > network framework.
> > So I'm just curious: what's the rationale behind this?
> > --
> > Hangjun Ye
> Hangjun Ye
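
[Editorial note] The point raised above — that protocol buffers define RPC service interfaces but ship no transport — can be seen in a minimal `.proto` sketch. The names below are illustrative, not from the thread; with `java_generic_services` enabled, protoc emits only an abstract service class and expects the caller to supply an RpcChannel from an external library (e.g. protobuf-rpc-pro, or Hadoop's own IPC layer).

```protobuf
// echo.proto -- illustrative example; message/service names are hypothetical.
syntax = "proto2";

// Ask protoc to generate abstract Java service stubs. Without a setting
// like this, no service code is generated at all.
option java_generic_services = true;

message EchoRequest {
  required string payload = 1;
}

message EchoResponse {
  required string payload = 1;
}

// protoc turns this into an abstract EchoService class plus a client stub
// that requires an RpcChannel. The channel -- the actual sockets, framing,
// and connection handling -- is NOT generated; it must come from a separate
// RPC framework.
service EchoService {
  rpc Echo (EchoRequest) returns (EchoResponse);
}
```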