Accumulo dev mailing list: 1.5 - how to build rpm; cdh3u4;


Rob Tallis 2013-05-10, 07:33
Christopher 2013-05-10, 14:17
John Vines 2013-05-10, 14:50
Josh Elser 2013-05-10, 15:14
John Vines 2013-05-10, 22:37
Rob Tallis 2013-05-13, 02:24
Rob Tallis 2013-05-15, 08:06
John Vines 2013-05-15, 08:11
Rob Tallis 2013-05-15, 09:06
Rob Tallis 2013-06-16, 13:16
Re: 1.5 - how to build rpm; cdh3u4;
Since 1.5 was released, the RPM build now expects one additional profile
to be active: the thrift profile. This is because, during review of the
release candidates for 1.5, it was decided that the Thrift bindings for
several languages for the new proxy feature should be delivered with the
proxy.

The correct command for building the entire RPM for 1.5 would be
(minimally, if we skip tests):
mvn package -DskipTests -P thrift,native,rpm

Typically, one would also activate the seal-jars profile and the docs
profile, as well as build the aggregate javadocs for packaging with
the monitor:
mvn clean compile javadoc:aggregate package -DskipTests -P docs,seal-jars,thrift,native,rpm
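
(If you want to verify what was produced: with the rpm-maven-plugin, the
RPMs should land under the module target directories, usually in a
target/rpm subdirectory, but the exact layout depends on the plugin
configuration, so treat the path as an assumption. A quick way to locate
them after the build:

find . -path '*target/rpm*' -name '*.rpm'
)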

Also, don't expect trunk to work the same way. ACCUMULO-210 is going
to result in changes to the way we build RPMs. Even if we make an
effort to continue to support building the monolithic RPM, there's no
guarantee that the maven profile prerequisites won't change, due to
other improvements in the build. For instance, the docs directory is
now a proper maven module and there are likely going to be changes due
to the discussion of consolidating documentation.

--
Christopher L Tubbs II
http://gravatar.com/ctubbsii
On Sun, Jun 16, 2013 at 9:16 AM, Rob Tallis <[EMAIL PROTECTED]> wrote:
> Dragging the rpm question up again, the instruction to create an rpm from
> source was *mvn clean package -P native,rpm*
>
> From a fresh clone, on both trunk and 1.5 I get:
>
> *[ERROR] Failed to execute goal
> org.codehaus.mojo:rpm-maven-plugin:2.1-alpha-2:attached-rpm (build-bin-rpm)
> on project accumulo: Unable to copy files for packaging: You must set at
> least one file. -> [Help 1]*
>
> and I can't decipher the build setup to figure this out. What am I doing
> wrong?
>
> Thanks, Rob
>
>
> On 15 May 2013 19:06, Rob Tallis <[EMAIL PROTECTED]> wrote:
>
>> That sorted it, thanks.
>>
>>
>> On 15 May 2013 18:11, John Vines <[EMAIL PROTECTED]> wrote:
>>
>>> In the example files, specifically accumulo-env.sh, there are two commented
>>> lines after HADOOP_CONF_DIR is set, I believe. Make sure that you comment
>>> out the old one and uncomment the one after the hadoop2 comment (a sketch
>>> of that section follows below).
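
(A minimal sketch of the section in question, assuming the layout of the
1.5 example accumulo-env.sh files; the exact paths are assumptions and
vary by distribution:

test -z "$HADOOP_CONF_DIR" && export HADOOP_CONF_DIR="$HADOOP_PREFIX/conf"
# hadoop-2.0:
# test -z "$HADOOP_CONF_DIR" && export HADOOP_CONF_DIR="$HADOOP_PREFIX/etc/hadoop"

i.e., for hadoop2, comment out the first line and uncomment the second.)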
>>>
>>> This is necessary because Accumulo puts the hadoop conf dir on the
>>> classpath in order to load the core-site.xml, which has the HDFS namenode
>>> config. By default, this is file:///, so if it's not there it's going to
>>> default to the local file system. A quick way to validate is to run
>>> bin/accumulo classpath and then look to see if the conf dir (I don't
>>> recall what it is for CDH4) is there.
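
(For example, something like

$ACCUMULO_HOME/bin/accumulo classpath | grep conf

should show whether a Hadoop conf directory made it onto the classpath;
the grep pattern is just an illustration, and on a Cloudera Manager
install the conf dir may be somewhere like /etc/hadoop/conf.)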
>>>
>>>
>>> On Wed, May 15, 2013 at 4:06 AM, Rob Tallis <[EMAIL PROTECTED]> wrote:
>>>
>>> > I've given up on cdh3 then. I'm trying to get 1.5 and/or trunk going on
>>> > cdh4.2.1 on a small hadoop cluster installed via cloudera manager. I built
>>> > the tar specifying -Dhadoop.profile=2.0 -Dhadoop.version=2.0.0-cdh4.2.1.
>>> > I've edited accumulo-site to add $HADOOP_PREFIX/client/.*.jar to the
>>> > classpath (see the sketch after this message). This lets me init and start
>>> > the processes, but I've got the problem of the instance information being
>>> > stored on local disk rather than on hdfs. (unable to obtain instance id at
>>> > /accumulo/instance_id)
>>> >
>>> > I can see references to this problem elsewhere but I can't figure out what
>>> > I'm doing wrong. Something wrong with my environment when I init, I
>>> > guess..? (tbh it's the first time I've tried a cluster install over a
>>> > standalone so it might not have anything to do with the versions I'm
>>> > trying)
>>> >
>>> > Rob
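
(For reference, the accumulo-site.xml edit Rob describes above is to the
general.classpaths property; a minimal sketch, with the file's existing
entries elided:

<property>
  <name>general.classpaths</name>
  <value>
    ...,
    $HADOOP_PREFIX/client/.*.jar,
  </value>
</property>
)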
>>> >
>>> >
>>> >
>>> > On 13 May 2013 12:24, Rob Tallis <[EMAIL PROTECTED]> wrote:
>>> >
>>> > > Perfect, thanks for the help
>>> > >
>>> > >
>>> > > On 11 May 2013 08:37, John Vines <[EMAIL PROTECTED]> wrote:
>>> > >
>>> > >> It also appears that CDH3u* does not have commons-collections or
>>> > >> commons-configuration included, so you will need to manually add those
>>> > >> jars to the classpath, either in accumulo lib or hadoop lib. Without these
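
(A sketch of that manual step; the jar names and source location are
assumptions and will vary with your Hadoop install:

cp /path/to/commons-collections-*.jar /path/to/commons-configuration-*.jar $ACCUMULO_HOME/lib/
)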
Rob Tallis 2013-06-19, 12:40
Christopher 2013-06-19, 14:03