Accumulo >> mail # dev >> 1.5 - how to build rpm; cdh3u4;


Re: 1.5 - how to build rpm; cdh3u4;
In the example files, specifically accumulo-env.sh, there are 2 commented
lines after HADOOP_CONF_DIR is set, I believe. Make sure that you comment
out the old one and uncomment the one after the hadoop2 comment.

This is necessary because Accumulo puts the Hadoop conf dir on the
classpath in order to load core-site.xml, which has the HDFS namenode
config. By default, this is file:///, so if the conf dir isn't there it's
going to default to the local file system. A quick way to validate is to run
bin/accumulo classpath and then look to see whether the conf dir (I don't
recall what it is for CDH4) is there.
On Wed, May 15, 2013 at 4:06 AM, Rob Tallis <[EMAIL PROTECTED]> wrote:

> I've given up on cdh3 then. I'm trying to get 1.5 and/or trunk going on
> cdh4.2.1 on a small hadoop cluster installed via cloudera manager. I built
> the tar specifying -Dhadoop.profile=2.0 -Dhadoop.version=2.0.0-cdh4.2.1.
> I've edited accumulo-site to add $HADOOP_PREFIX/client/.*.jar to the
> classpath. This lets me init and start the processes, but I've got the
> problem of the instance information being stored on local disk rather than
> on hdfs. (unable to obtain instance id at /accumulo/instance_id)
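[The "unable to obtain instance id" symptom is what the file:/// fallback looks like: the client never learned where HDFS lives. For reference, the default filesystem is set in Hadoop's core-site.xml; a minimal hadoop2-style entry looks roughly like the following, where the hostname and port are placeholders. The older key fs.default.name is the deprecated hadoop1 equivalent. -Ed.]

```xml
<!-- core-site.xml: points clients, including Accumulo, at HDFS instead of
     the local file system. The address below is a placeholder. -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://namenode.example.com:8020</value>
</property>
```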
>
> I can see references to this problem elsewhere but I can't figure out what
> I'm doing wrong. Something wrong with my environment when I init i guess..?
> (tbh it's the first time I've tried a cluster install over a standalone so
> it might not have anything to do with the versions I'm trying)
>
> Rob
>
>
>
> On 13 May 2013 12:24, Rob Tallis <[EMAIL PROTECTED]> wrote:
>
> > Perfect, thanks for the help
> >
> >
> > On 11 May 2013 08:37, John Vines <[EMAIL PROTECTED]> wrote:
> >
> >> It also appears that CDH3u* does not have commons-collections or
> >> commons-configuration included, so you will need to manually add those
> >> jars
> >> to the classpath, either in accumulo lib or hadoop lib. Without these
> >> files, tserver and master will not start.
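[Making those jars visible can be as simple as copying them into Accumulo's lib directory. A hypothetical sketch; the versions and repository layout below are assumptions, so substitute the jars your environment actually ships. -Ed.]

```shell
# Hypothetical: copy the commons jars into Accumulo's lib dir so the
# tserver and master can find them. Versions/paths are assumptions.
M2=${M2:-$HOME/.m2/repository}
ACCUMULO_LIB=${ACCUMULO_LIB:-$ACCUMULO_HOME/lib}

cp "$M2/commons-collections/commons-collections/3.2.1/commons-collections-3.2.1.jar" "$ACCUMULO_LIB/"
cp "$M2/commons-configuration/commons-configuration/1.6/commons-configuration-1.6.jar" "$ACCUMULO_LIB/"
```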
> >>
> >>
> >> On Fri, May 10, 2013 at 11:14 AM, Josh Elser <[EMAIL PROTECTED]>
> >> wrote:
> >>
> >> > FWIW, if you don't run -DskipTests, you will get some failures on some
> >> of
> >> > the newer MiniAccumuloCluster tests.
> >> >
> >> > testPerTableClasspath(org.apache.accumulo.server.mini.MiniAccumuloClusterTest)
> >> >   test(org.apache.accumulo.server.mini.MiniAccumuloClusterTest)
> >> >
> >> > Just about the same thing as we were seeing on
> >> > https://issues.apache.org/jira/browse/ACCUMULO-837.
> >> > My guess would be that we're including the wrong test dependency.
> >> >
> >> > On 5/10/13 3:33 AM, Rob Tallis wrote:
> >> >
> >> >> For info, changing it to cdh3u5, it *does* work:
> >> >>
> >> >> mvn clean package -P assemble -DskipTests -Dhadoop.version=0.20.2-cdh3u5
> >> >> -Dzookeeper.version=3.3.5-cdh3u5
> >> >>
> >> >
> >> >
> >>
> >>
> >> --
> >> Cheers
> >> ~John
> >>
> >
> >
>