
HDFS >> mail # dev >> Proposal: abandon protocol translator layer for cluster-internal RPCs

Re: Proposal: abandon protocol translator layer for cluster-internal RPCs
I am sure Jitendra understands what Todd meant, given he was quite involved
in the work. As Jitendra said, I would like to keep the wire type separate
from the implementation type, even for internal protocols. Rolling upgrades
is one of the reasons for keeping that separation.

I understand where Todd is coming from. We did this work for 10 protocols.
But it is not as if we introduce protocols or make huge protocol changes
every day. Given that, I prefer to retain the structure we have in place. I
also volunteer to do this work for any protocols that are introduced.


On Mon, Mar 19, 2012 at 8:11 AM, Robert Evans <[EMAIL PROTECTED]> wrote:

> I think what we are talking about here is removing some of the extra
> layers of abstraction in Java. The wire protocol used will be identical in
> either case.  It is just that we would have to use the Protocol Buffer
> Builder APIs instead of wrapping them with our own custom getters/setters.
> I am +1 for reducing the layers needed to add/modify the RPC.
> --Bobby Evans
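For concreteness, Bobby's point can be sketched as follows. The `HeartbeatRequestProto` class here is a hand-written stand-in for a protoc-generated message (the real generated classes live in the HDFS protobuf packages); all names are invented for illustration, and only the shape of the Builder API matters.

```java
// Stand-in for a protoc-generated message class; names are illustrative.
public class BuilderSketch {
    // protoc generates an immutable message plus a mutable Builder like this.
    static final class HeartbeatRequestProto {
        private final String datanodeId;
        private final long capacity;
        private HeartbeatRequestProto(String id, long cap) { datanodeId = id; capacity = cap; }
        String getDatanodeId() { return datanodeId; }
        long getCapacity() { return capacity; }
        static Builder newBuilder() { return new Builder(); }
        static final class Builder {
            private String datanodeId;
            private long capacity;
            Builder setDatanodeId(String id) { datanodeId = id; return this; }
            Builder setCapacity(long cap) { capacity = cap; return this; }
            HeartbeatRequestProto build() { return new HeartbeatRequestProto(datanodeId, capacity); }
        }
    }

    // Under the proposal, callers use the generated Builder directly
    // instead of going through a hand-written wrapper type.
    static HeartbeatRequestProto makeHeartbeat(String id, long capacity) {
        return HeartbeatRequestProto.newBuilder()
                .setDatanodeId(id)
                .setCapacity(capacity)
                .build();
    }

    public static void main(String[] args) {
        HeartbeatRequestProto req = makeHeartbeat("dn-1", 1024L);
        System.out.println(req.getDatanodeId() + " " + req.getCapacity()); // prints "dn-1 1024"
    }
}
```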
> On 3/19/12 2:17 AM, "Jitendra Pandey" <[EMAIL PROTECTED]> wrote:
>  Wire compatibility in HDFS-private protocols between different
> components is also important for rolling upgrades. We want to support
> upgrading different components of a cluster independently of each other,
> and wire compatibility is one of the essential prerequisites. Therefore,
> even if some protocols are not exposed to users and are only used between
> internal components, we still cannot afford to compromise on wire
> compatibility for those interfaces.
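As a hypothetical illustration of why protobuf makes this wire compatibility tractable (the message and field names below are invented, not taken from HDFS): a new optional field can be added to an internal message without breaking older daemons, because decoders skip unknown field tags and treat missing optional fields as unset.

```protobuf
// Hypothetical internal message; names are illustrative, not from HDFS.
message BlockSummaryRequestProto {
  required string datanodeId = 1;
  repeated uint64 blockIds   = 2;
  // Added in a later release: an old server ignores the unknown tag 3,
  // and a new server sees the field as unset when talking to an old client.
  optional uint32 storageType = 3;
}
```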
> On Sat, Mar 17, 2012 at 3:31 PM, Todd Lipcon <[EMAIL PROTECTED]> wrote:
> > Hi all,
> >
> > I've been working on some patches recently that required adding a new
> > protocol and some RPC calls that will be used entirely internally to
> > HDFS -- i.e. the types and functions are never exposed to clients. The
> > process to do this involved:
> > 1) Add a new .proto file MyProtocol.proto with the types and the
> > service definition
> > 2) Add a new empty Java interface MyProtocolPB.java which adds the
> > ProtocolInfo and KerberosInfo annotations
> > 3) Add a new Java interface MyProtocol.java which duplicates the same
> > methods I defined in the protobuf service
> > 4) For each new type, create a new Java class which duplicates the
> > fields, getters, and setters from the protobuf messages
> > 5) Create a Client-Side Translator and Server-Side Translator class,
> > each containing a wrapper method for each of the calls
> > 6) Create a PBHelper class which contains two convert() methods for
> > each of the new types
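The boilerplate in steps 3-6 can be sketched roughly as follows. Every name here is hypothetical, and the protoc-generated code is replaced by a tiny stand-in class so the example is self-contained.

```java
// Illustration of the boilerplate in steps 3-6; all names are hypothetical,
// not taken from the actual HDFS source tree.
public class TranslatorSketch {
    // Stand-in for the protoc-generated message (the output of step 1).
    static final class RegisterRequestProto {
        private final String nodeId;
        RegisterRequestProto(String nodeId) { this.nodeId = nodeId; }
        String getNodeId() { return nodeId; }
    }

    // Step 3: a plain Java interface duplicating the protobuf service methods.
    interface MyProtocol {
        RegisterRequest register(RegisterRequest req);
    }

    // Step 4: a plain Java type duplicating the proto message's fields.
    static final class RegisterRequest {
        private final String nodeId;
        RegisterRequest(String nodeId) { this.nodeId = nodeId; }
        String getNodeId() { return nodeId; }
    }

    // Step 6: a PBHelper with a convert() method in each direction.
    static final class PBHelper {
        static RegisterRequest convert(RegisterRequestProto p) {
            return new RegisterRequest(p.getNodeId());
        }
        static RegisterRequestProto convert(RegisterRequest r) {
            return new RegisterRequestProto(r.getNodeId());
        }
    }

    // Step 5 (server side): the translator unwraps the proto type, calls the
    // implementation with the plain Java type, and wraps the response again.
    static final class ServerSideTranslator {
        private final MyProtocol impl;
        ServerSideTranslator(MyProtocol impl) { this.impl = impl; }
        RegisterRequestProto register(RegisterRequestProto req) {
            return PBHelper.convert(impl.register(PBHelper.convert(req)));
        }
    }

    public static void main(String[] args) {
        MyProtocol echo = req -> req; // trivial implementation
        ServerSideTranslator t = new ServerSideTranslator(echo);
        System.out.println(t.register(new RegisterRequestProto("dn-7")).getNodeId()); // prints "dn-7"
    }
}
```

Every new field added to the proto message has to be threaded through all four of these layers, which is the duplication the proposal would drop.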
> >
> > Given that we have many protocols that we never intend to expose, I
> > see little benefit to adding the indirection layer here. It only makes
> > the task of modifying the protocols quite onerous and full of
> > duplicate boilerplate code.
> > I'd like to propose that, when adding protocols that are meant to be
> > HDFS-private, we drop steps 3-6 and use the protobuf RPC engine
> > directly. Doing this doesn't force our hand or limit our options in
> > the future -- should we want to add an alternate mechanism, we can
> > always add the indirection layer down the road.
> >
> > Thoughts?
> >
> > -Todd
> > --
> > Todd Lipcon
> > Software Engineer, Cloudera
> >
> --