Accumulo >> mail # dev >> Current Work on Accumulo in Hoya


Re: Current Work on Accumulo in Hoya
use the "-z" option:

$ ./bin/accumulo shell -u root -z instance zoo1,zoo2,zoo3

-Eric
On Wed, Dec 4, 2013 at 3:13 PM, Roshan Punnoose <[EMAIL PROTECTED]> wrote:

> This is cool. I couldn't get it working with 1.5.0, but 1.7.0-SNAPSHOT
> worked perfectly. (I'll probably just downgrade sometime soon, or wait for
> a release)
>
> I had to add this property to the hoya-client.xml to get it to look for
> the hadoop/zookeeper jars in the right places. (Though that property
> seems to already be set in the yarn-site.xml):
> <property>
>       <name>yarn.application.classpath</name>
>
>
> <value>/etc/hadoop/conf,/usr/lib/hadoop/*,/usr/lib/hadoop/lib/*,/usr/lib/hadoop-hdfs/*,/usr/lib/hadoop-hdfs/lib/*,/usr/lib/hadoop-yarn/*,/usr/lib/hadoop-yarn/lib/*,/usr/lib/hadoop-mapreduce/*,/usr/lib/hadoop-mapreduce/lib/*,/usr/lib/zookeeper/*</value>
>  </property>
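A quick way to sanity-check a `yarn.application.classpath` value like the one above is to confirm each directory actually exists on the node. A minimal sketch; the `check_classpath` helper is hypothetical, not part of Hoya or YARN:

```shell
# Sketch: print any classpath entry whose directory is missing on this node.
# check_classpath is a hypothetical helper, not part of Hoya or YARN.
set -f                          # keep the shell from expanding the /* globs
check_classpath() {
  local IFS=','                 # the property value is comma-separated
  for entry in $1; do
    local dir=${entry%/\*}      # strip a trailing /* to get the directory
    [ -d "$dir" ] || echo "missing: $dir"
  done
}

check_classpath "/etc/hadoop/conf,/usr/lib/hadoop/*,/usr/lib/zookeeper/*"
```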
>
> Also, any ideas on how to get the shell connected to it without a conf
> directory? I can just use the generated conf with the shell for now.
>
> Roshan
>
>
> On Wed, Dec 4, 2013 at 11:25 AM, Billie Rinaldi <[EMAIL PROTECTED]
> >wrote:
>
> > Interesting, let us know if having the conf populated in the tarball
> > makes a difference.  I'd recommend using 1.5.1-SNAPSHOT, by the way.
> > 1.5.0 processes don't return proper exit codes when there are errors.
> >
> >
> > On Wed, Dec 4, 2013 at 8:19 AM, Roshan Punnoose <[EMAIL PROTECTED]> wrote:
> >
> > > I was able to get most of the way there. Turning off the log
> > > aggregation helped a lot; the forked exceptions were not getting to
> > > the aggregated TFile in HDFS.
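For reference, the aggregation behavior mentioned above is controlled by the standard YARN switch in yarn-site.xml; shown as a sketch, since whether you want it off depends on your debugging workflow:

```xml
<!-- yarn-site.xml: disable log aggregation so container logs stay on the
     local NodeManager disks instead of being rolled into a TFile in HDFS -->
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>false</value>
</property>
```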
> > >
> > > I am trying to run accumulo 1.5.0, and for some reason the
> > > propagatedConf that Hoya generates is not getting loaded during the
> > > accumulo initialize phase. I think it has to do with the fact that I
> > > already have a populated conf directory (with a sample
> > > accumulo-site.xml) in the accumulo image I am sending. I'm going to
> > > try to build a new accumulo image from source and try again with
> > > Hoya 0.7.0. The error I am seeing makes it seem like the Accumulo
> > > initialize is not looking at the propagatedConf "instance.dfs.dir"
> > > property but using the default to put the data in "/accumulo" in HDFS.
> > >
> > > Will keep trying. Thanks for the help!
> > >
> > >
> > > On Wed, Dec 4, 2013 at 4:13 AM, Steve Loughran <[EMAIL PROTECTED]> wrote:
> > >
> > > > The forked code goes into the AM logs, as it's just a forked run
> > > > of {{accumulo init}} to set up the file structure.
> > > >
> > > > Error code 1 implies accumulo didn't want to start, which could be
> > > > from some environment problem: it needs to know where ZK home and
> > > > Hadoop home are. We set those up before running accumulo, but they
> > > > do need to be passed down to the cluster config (which is then
> > > > validated to see that they are defined and point to a local
> > > > directory, but we don't look in the directory to see if they have
> > > > all the JARs the accumulo launcher expects).
> > > >
> > > > If you can, try to do this with kerberos off first. Kerberos
> > > > complicates things.
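The validation Steve describes can be sketched as a pre-flight check before running accumulo init. The paths and the `require_dirs` helper below are illustrative, not Hoya's actual API:

```shell
# Sketch of the check described above: each home must be defined and point
# to a local directory. Paths and require_dirs are illustrative only.
export HADOOP_PREFIX=${HADOOP_PREFIX:-/usr/lib/hadoop}
export ZOOKEEPER_HOME=${ZOOKEEPER_HOME:-/usr/lib/zookeeper}

require_dirs() {
  for d in "$@"; do
    [ -n "$d" ] && [ -d "$d" ] ||
      { echo "not a local directory: '$d'" >&2; return 1; }
  done
}

require_dirs "$HADOOP_PREFIX" "$ZOOKEEPER_HOME" ||
  echo "fix HADOOP_PREFIX/ZOOKEEPER_HOME before running accumulo init"
```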
> > > >
> > > >
> > > >
> > > >
> > > > On 3 December 2013 23:57, Roshan Punnoose <[EMAIL PROTECTED]> wrote:
> > > >
> > > > > I am now getting an exception when Hoya tries to initialize the
> > > > > accumulo cluster:
> > > > >
> > > > > Service accumulo failed in state STARTED; cause:
> > > > > org.apache.hadoop.yarn.service.launcher.ServiceLaunchException:
> > > > > accumulo failed with code 1
> > > > > org.apache.hadoop.yarn.service.launcher.ServiceLaunchException:
> > > > > accumulo failed with code 1
> > > > > at org.apache.hadoop.hoya.yarn.service.ForkedProcessService.reportFailure(ForkedProcessService.java:162)
> > > > >
> > > > > Any ideas as to where logs of a Forked process may go in Yarn?
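On the question of where forked-process logs land: with aggregation on, `yarn logs -applicationId <id>` fetches them once the application finishes; with it off, they stay on the AM's node under the NodeManager log dirs. A sketch, where the application/container IDs, the log root, and the `container_log_dir` helper are all placeholders of mine:

```shell
# With log aggregation enabled (fetch after the app finishes; id is a placeholder):
#   yarn logs -applicationId application_1386000000000_0001
#
# With aggregation off, AM container output lives on the node that ran the AM:
#   <yarn.nodemanager.log-dirs>/<applicationId>/<containerId>/{stdout,stderr}
container_log_dir() {      # hypothetical helper: builds that on-node path
  echo "$1/$2/$3"
}
container_log_dir /var/log/hadoop-yarn/containers \
  application_1386000000000_0001 container_1386000000000_0001_01_000001
```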
> > > > >
> > > > >
> > > > > On Tue, Dec 3, 2013 at 4:24 PM, Roshan Punnoose <[EMAIL PROTECTED]