> Phoenix directly embeds Apache Hadoop, Apache HBase and Apache Zookeeper
> jars. These jars should be symlinks.

These jars are replaced with symlinks by the install_phoenix.sh script, and
they were symlinks for me previously. Has something changed? I will look
into this.
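In the meantime, here is a quick way to tell which of the bundled jars are
embedded copies versus symlinks. The paths below are made up for the demo;
in practice, point the loop at the actual Phoenix lib directory:

```shell
# Sketch: distinguish embedded jar copies from the symlinks that
# install_phoenix.sh is expected to leave behind.
# Demo fixture: a temp dir standing in for the Phoenix lib directory.
LIB=$(mktemp -d)
touch "$LIB/hadoop-core.jar"                       # embedded copy (bad)
ln -s /usr/lib/hbase/hbase.jar "$LIB/hbase.jar"    # symlink (desired)

for jar in "$LIB"/*.jar; do
  if [ -L "$jar" ]; then
    echo "symlink:  $jar -> $(readlink "$jar")"
  else
    echo "embedded: $jar"
  fi
done
```

Running this against the real install directory would show at a glance
whether the Hadoop/HBase/ZooKeeper jars shipped as copies.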
> Phoenix required me to delete some old Apache lucene jars from the Apache
> Flume installation directory.

I think you meant Flume, not Phoenix. Or maybe you meant Flume in both
places.
On Mon, Oct 28, 2013 at 5:47 AM, Bruno Mahé <[EMAIL PROTECTED]> wrote:
> On 10/18/2013 09:54 PM, Roman Shaposhnik wrote:
>> This is the seventh release for Apache Bigtop, version 0.7.0
>> It fixes the following issues:
>> *** Please download, test and vote by Fri 10/25 noon PST
>> Note that we are voting upon the source (tag):
>> Source and binary files:
>> Binary convenience artifacts:
>> Documentation on how to install (just make sure to adjust the repos for
>> Maven staging repo:
>> The tag to be voted upon:
>> Bigtop's KEYS file containing PGP keys we use to sign the release:
> I am not voting yet since I still have some time, but so far I am leaning
> toward a -1.
> I am leaning toward a -1 because of
> https://issues.apache.org/jira/browse/BIGTOP-1129 and my issues with Hue.
> Other than that, everything I tested either just works out of the box or
> is a nitpick.
> But BIGTOP-1129 is what I would consider a blocker since it is part of the
> basic use case of Apache Bigtop.
> Things I tested:
> * Apache Hadoop and some basic jobs
> * Apache HBase and Phoenix. Just basic testing
> * Apache Flume sending Apache Hadoop and Apache HBase logs to an
> Elasticsearch instance and visualized through Kibana
> * Apache Hue smoke tests
> * Everything running on OpenJDK 6 on ec2 instances
> Things I still want to test (or rather, things I hope I can test by
> Tuesday evening):
> * Apache Pig and datafu
> * Apache Solr
> * Load more data into Phoenix
> Things we could do better:
> * As described on BIGTOP-1129, I could not stop datanode/namenode through
> init scripts.
> * We could provide some templates for Apache Hadoop. I wasted a few hours
> just to get the pi job running. Thankfully we have the init script for hdfs
> (which needs some tweaks for the staging directory) and templates for the
> configuration files in our Puppet modules.
> * I enabled short-circuit in Apache HBase. Not sure if I missed something,
> but I got some "org.apache.hadoop.security.AccessControlException:
> Can't continue with getBlockLocalPathInfo() authorization" exceptions. From
> reading
> http://www.spaggiari.org/index.php/hbase/how-to-activate-hbase-shortcircuit
> it seems there are a few things we could do to make it work out of the box.
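On the short-circuit exceptions: a "getBlockLocalPathInfo() authorization"
error usually means the datanode does not list the HBase user as permitted
to use (legacy) short-circuit reads. A sketch of the hdfs-site.xml
properties involved, assuming HBase runs as the "hbase" user (adjust for
the actual setup):

```xml
<!-- hdfs-site.xml on the datanodes (sketch, user name is an assumption) -->
<property>
  <name>dfs.block.local-path-access.user</name>
  <value>hbase</value>
</property>
<property>
  <name>dfs.client.read.shortcircuit</name>
  <value>true</value>
</property>
```

The datanodes need a restart after changing dfs.block.local-path-access.user
for the new value to take effect.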
Problems worthy of attack prove their worth by hitting back. - Piet Hein
(via Tom White)