Re: deploy Salesforce Phoenix coprocessor to hbase/lib??
Tian-Ying,
A Phoenix table is an HBase table. At create time, if the HBase table
doesn't exist, we create it initially with the right metadata (so no alter
table is necessary). If the HBase table already exists, then we compare the
existing table metadata with the expected metadata. If it's different,
then we issue an alter table.
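
For readers who want to see what that check-and-alter flow amounts to at
the HBase API level, here is a minimal sketch against the 0.94-era HBase
client API. This is not Phoenix's actual code; the table name, column
family, and coprocessor class are made-up placeholders.

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.client.HBaseAdmin;

    public class EnsureTableMetadata {
      public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        String table = "MY_TABLE";                      // hypothetical name
        String coproc = "com.example.SomeCoprocessor";  // hypothetical class
        try {
          if (!admin.tableExists(table)) {
            // Table doesn't exist: create it with the right metadata up
            // front, so no alter table is ever necessary.
            HTableDescriptor htd = new HTableDescriptor(table);
            htd.addFamily(new HColumnDescriptor("0"));
            htd.addCoprocessor(coproc); // no jar path: loaded from RS classpath
            admin.createTable(htd);
          } else {
            // Table exists: compare the existing metadata with what we
            // expect, and only issue an alter table if it differs.
            HTableDescriptor htd = admin.getTableDescriptor(table.getBytes());
            if (!htd.hasCoprocessor(coproc)) {
              htd.addCoprocessor(coproc);
              admin.disableTable(table);
              admin.modifyTable(table.getBytes(), htd);
              admin.enableTable(table);
            }
          }
        } finally {
          admin.close();
        }
      }
    }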

You need to restart the RS after deploying the jar under hbase/lib. Since
the RS is already running, it won't have the Phoenix jar on its classpath
yet (as the jar wasn't there when the RS started). If/when we move to the
model of storing the Phoenix jar in HDFS, you won't have to restart the
first time you deploy. However, for any upgrade to the Phoenix jar, you
will still need to restart, since that's currently the only way to unload
the old jar and load the new one.

Thanks,
James
On Wed, Sep 11, 2013 at 11:37 AM, Tianying Chang <[EMAIL PROTECTED]> wrote:

> James, thanks for the explanation.
>
> So my understanding is that Phoenix wraps the HBase client API to create
> a Phoenix table. Within this wrapper, it will issue an "alter table" with
> the coprocessor when it creates a Phoenix table, right?
>
> Also, do we need to restart the RS after deploying the jar under hbase/lib?
> Our customers said it has to be done, but I feel it is unnecessary and
> weird. Can you confirm?
>
> Thanks
> Tian-Ying
>
> -----Original Message-----
> From: James Taylor [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, September 10, 2013 4:40 PM
> To: [EMAIL PROTECTED]
> Subject: Re: deploy Salesforce Phoenix coprocessor to hbase/lib??
>
> When a table is created with Phoenix, its HBase table is configured with
> the Phoenix coprocessors. We do not specify a jar path, so the Phoenix jar
> that contains the coprocessor implementation classes must be on the
> classpath of the region server.
>
> In addition to coprocessors, Phoenix relies on custom filters which are
> also in the Phoenix jar. In theory you could put the jar in HDFS, use the
> relatively new HBase feature to load custom filters from HDFS, and issue
> alter table calls for existing Phoenix HBase tables to reconfigure the
> coprocessors. When new Phoenix tables are created, though, they wouldn't
> have this jar path.
>
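
To make the two loading modes described above concrete, here is a hedged
sketch of how a coprocessor gets attached to a table descriptor in the
HBase client API of that era: first without a jar path (region server
classpath, Phoenix's current approach), then with an explicit HDFS jar
path. The class names and path are illustrative placeholders, not
Phoenix's.

    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.Coprocessor;
    import org.apache.hadoop.hbase.HTableDescriptor;

    public class CoprocessorAttachment {
      public static void configure(HTableDescriptor htd) throws IOException {
        // 1. No jar path: the class must already be on the region server's
        //    classpath (e.g. the Phoenix jar under hbase/lib), which is why
        //    an RS restart is needed when that jar changes.
        htd.addCoprocessor("com.example.SomeCoprocessor");

        // 2. Explicit jar path in HDFS: the RS fetches and classloads the
        //    jar itself, so no restart is needed to pick it up.
        Map<String, String> kvs = new HashMap<String, String>();
        htd.addCoprocessor("com.example.OtherCoprocessor",
            new Path("hdfs:///myPath/other-coproc.jar"), // hypothetical path
            Coprocessor.PRIORITY_USER, kvs);
      }
    }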
> FYI, we're looking into modifying our install procedure to do the above
> (see https://github.com/forcedotcom/phoenix/issues/216), if folks are
> interested in contributing.
>
> Thanks,
> James
>
> On Sep 10, 2013, at 2:41 PM, Tianying Chang <[EMAIL PROTECTED]> wrote:
>
> > Hi,
> >
> > Since this is not an HBase system-level jar but more like user code,
> > should we deploy it under hbase/lib? It seems we can use "alter" to add
> > the coprocessor to a particular user table, so can I put the jar file in
> > any place that is accessible, e.g. hdfs:/myPath?
> >
> > My customer said there is no need to run an "alter" command; instead, as
> > long as I put the jar into hbase/lib, when the Phoenix client makes a read
> > call, it will add the coprocessor attribute to the table being read. That
> > seems suspicious to me. Does the Phoenix client already call an "alter"
> > under the covers?
> >
> > Does anyone know about this?
> >
> > Thanks
> > Tian-Ying
>