Accumulo, mail # dev - 1.5 - how to build rpm; cdh3u4;


Thread:
  Rob Tallis 2013-05-10, 07:33
  Christopher 2013-05-10, 14:17
  John Vines 2013-05-10, 14:50
  Josh Elser 2013-05-10, 15:14
  John Vines 2013-05-10, 22:37
  Rob Tallis 2013-05-13, 02:24
  Rob Tallis 2013-05-15, 08:06
  John Vines 2013-05-15, 08:11
  Rob Tallis 2013-05-15, 09:06
Re: 1.5 - how to build rpm; cdh3u4;
Rob Tallis 2013-06-16, 13:16
Dragging the rpm question up again, the instruction to create an rpm from
source was *mvn clean package -P native,rpm*

From a fresh clone, on both trunk and 1.5 I get:

*[ERROR] Failed to execute goal
org.codehaus.mojo:rpm-maven-plugin:2.1-alpha-2:attached-rpm (build-bin-rpm)
on project accumulo: Unable to copy files for packaging: You must set at
least one file. -> [Help 1]*

and I can't decipher the build setup to figure this out. What am I doing
wrong?

Thanks, Rob
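For quick reference, a sketch of the two build invocations discussed in this thread, exactly as the participants give them; the profiles and versions come from the thread and may not match the current pom:

```shell
# rpm build from a fresh clone (the invocation that fails above):
mvn clean package -P native,rpm

# cdh3u5 build reported to work earlier in the thread; tests are skipped
# because of the known MiniAccumuloCluster failures mentioned below:
mvn clean package -P assemble -DskipTests \
  -Dhadoop.version=0.20.2-cdh3u5 \
  -Dzookeeper.version=3.3.5-cdh3u5
```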
On 15 May 2013 19:06, Rob Tallis <[EMAIL PROTECTED]> wrote:

> That sorted it, thanks.
>
>
> On 15 May 2013 18:11, John Vines <[EMAIL PROTECTED]> wrote:
>
>> In the example files, specifically accumulo-env.sh, there are 2 commented
>> lines after HADOOP_CONF_DIR is set, I believe. Make sure that you comment
>> out the old one and uncomment the one after the hadoop2 comment.
>>
>> This is necessary because Accumulo puts the hadoop conf dir on the
>> classpath in order to load the core-site.xml, which has the HDFS namenode
>> config. By default, this is file:///, so if it's not there it's going to
>> default to the local file system. A quick way to validate is to run
>> bin/accumulo classpath and then look to see if the conf dir (I don't
>> recall what it is for CDH4) is there.
>>
>>
>> On Wed, May 15, 2013 at 4:06 AM, Rob Tallis <[EMAIL PROTECTED]> wrote:
>>
>> > I've given up on cdh3 then. I'm trying to get 1.5 and/or trunk going on
>> > cdh4.2.1 on a small hadoop cluster installed via cloudera manager. I
>> > built the tar specifying -Dhadoop.profile=2.0 -Dhadoop.version=2.0.0-cdh4.2.1.
>> > I've edited accumulo-site to add $HADOOP_PREFIX/client/.*.jar to the
>> > classpath. This lets me init and start the processes but I've got the
>> > problem of the instance information being stored on local disk rather
>> > than on hdfs. (unable to obtain instance id at /accumulo/instance_id)
>> >
>> > I can see references to this problem elsewhere but I can't figure out
>> > what I'm doing wrong. Something wrong with my environment when I init,
>> > I guess? (tbh it's the first time I've tried a cluster install over a
>> > standalone so it might not have anything to do with the versions I'm
>> > trying)
>> >
>> > Rob
>> >
>> >
>> >
>> > On 13 May 2013 12:24, Rob Tallis <[EMAIL PROTECTED]> wrote:
>> >
>> > > Perfect, thanks for the help
>> > >
>> > >
>> > > On 11 May 2013 08:37, John Vines <[EMAIL PROTECTED]> wrote:
>> > >
>> > >> It also appears that CDH3u* does not have commons-collections or
>> > >> commons-configuration included, so you will need to manually add
>> > >> those jars to the classpath, either in accumulo lib or hadoop lib.
>> > >> Without these files, tserver and master will not start.
>> > >>
>> > >>
>> > >> On Fri, May 10, 2013 at 11:14 AM, Josh Elser <[EMAIL PROTECTED]>
>> > >> wrote:
>> > >>
>> > >> > FWIW, if you don't run -DskipTests, you will get some failures on
>> > >> > some of the newer MiniAccumuloCluster tests.
>> > >> >
>> > >> > testPerTableClasspath(org.apache.accumulo.server.mini.MiniAccumuloClusterTest)
>> > >> >   test(org.apache.accumulo.server.mini.MiniAccumuloClusterTest)
>> > >> >
>> > >> > Just about the same thing as we were seeing on
>> > >> > https://issues.apache.org/jira/browse/ACCUMULO-837.
>> > >> > My guess would be that we're including the wrong test dependency.
>> > >> >
>> > >> > On 5/10/13 3:33 AM, Rob Tallis wrote:
>> > >> >
>> > >> >> For info, changing it to cdh3u5, it *does* work:
>> > >> >>
>> > >> >> mvn clean package -P assemble -DskipTests
>> > >> >>   -Dhadoop.version=0.20.2-cdh3u5 -Dzookeeper.version=3.3.5-cdh3u5
>> > >> >>
>> > >> >
>> > >> >
>> > >>
>> > >>
>> > >> --
>> > >> Cheers
>> > >> ~John
>> > >>
>> > >
>> > >
>> >
>>
>
>
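The accumulo-env.sh fix John describes above (the one that "sorted it") could look roughly like this; the exact paths are assumptions for a hadoop2/CDH4-style layout, so verify what actually lands on the classpath with bin/accumulo classpath:

```shell
# conf/accumulo-env.sh (sketch): make sure HADOOP_CONF_DIR points at the
# directory holding core-site.xml, so Accumulo picks up the HDFS namenode
# config instead of defaulting to file:///.

# hadoop1-style line -- comment this out:
# test -z "$HADOOP_CONF_DIR" && export HADOOP_CONF_DIR="$HADOOP_PREFIX/conf"

# hadoop2-style line -- uncomment this one (path is an assumption; packaged
# CDH installs often use /etc/hadoop/conf instead):
test -z "$HADOOP_CONF_DIR" && export HADOOP_CONF_DIR="$HADOOP_PREFIX/etc/hadoop"
```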
Later replies:
  Christopher 2013-06-16, 16:10
  Rob Tallis 2013-06-19, 12:40
  Christopher 2013-06-19, 14:03