
HBase, mail # user - Best practices for custom filter class distribution?


Other messages in this thread:
  Evan Pollan 2012-06-27, 17:47
  Amandeep Khurana 2012-06-27, 17:54
  Evan Pollan 2012-06-27, 20:53
  Ben Kim 2012-07-30, 10:02
Re: Best practices for custom filter class distribution?
Michael Segel 2012-06-27, 18:33
One way...

Create an NFS-mountable directory for your cluster and mount it on all of the DataNodes (DNs).
You can then either place a symbolic link to the jar in /usr/lib/hadoop/lib or add the jar to the classpath in /etc/hadoop/conf/hadoop-env.sh.
(Assuming a Cloudera distribution.)
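
A minimal shell sketch of the approach above, following the CDH paths named in this message. The NFS server, export path, mount point, and jar name (custom-filters.jar) are placeholders for illustration, not part of the original thread:

  # Run on each DataNode / RegionServer host.

  # Mount the shared directory that holds the custom filter jar
  # (server name and paths are hypothetical).
  sudo mount -t nfs nfs-server:/exports/hbase-jars /mnt/hbase-jars

  # Option 1: symlink the jar into a directory already on the classpath.
  sudo ln -s /mnt/hbase-jars/custom-filters.jar /usr/lib/hadoop/lib/custom-filters.jar

  # Option 2: append the jar to the classpath in /etc/hadoop/conf/hadoop-env.sh.
  echo 'export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:/mnt/hbase-jars/custom-filters.jar"' \
    | sudo tee -a /etc/hadoop/conf/hadoop-env.sh

Either way, the region servers still have to be restarted (e.g. via a rolling restart) before they will load new or updated filter classes, which is the step the original question was hoping to avoid.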
On Jun 27, 2012, at 12:47 PM, Evan Pollan wrote:

> What're the current best practices for making custom Filter implementation
> classes available to the region servers?  My cluster is running 0.90.4 from
> the CDH3U3 distribution, FWIW.
>
> I searched around and didn't find anything other than "add your filter to
> the region server's classpath."  I'm hoping there's support for something
> that doesn't involve actually installing jar files on each region server,
> updating each region server's configuration, and doing a rolling restart of
> the whole cluster...
>
> I did find this still-outstanding bug requesting parity between HDFS-based
> co-processor class loading and filter class loading:
> https://issues.apache.org/jira/browse/HBASE-1936.
>
> How are folks handling this?
>
> The stock filters are fairly limited, especially without the ability (at
> least AFAIK) to combine the existing filters together via basic boolean
> algebra, so I can't do much without writing my own filter(s).
>
>
> thanks,
> Evan
Also in this thread:
  Scott Cinnamond 2012-06-27, 20:29