Accumulo >> mail # dev >> Current Work on Accumulo in Hoya


Re: Current Work on Accumulo in Hoya
The accumulo script requires the conf files to be present.  But if you have
some conf files, you can then connect the shell to any instance with the -z
flag.  We could consider having a client script with fewer requirements.

I tried it with just the accumulo-env.sh file, and it worked but ate all
the log messages (so you couldn't see what was going on when there were
errors).  I'd recommend dropping in log4j.properties too.
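
The minimal client conf described above can be sketched as follows. This is an illustrative script, not from the thread: all paths, exports, and log4j settings below are placeholder assumptions for a typical packaged install, and should be adjusted to the actual environment.

```shell
# Sketch of a minimal client conf: accumulo-env.sh alone lets the shell run
# but swallows log messages, so a log4j.properties is added as well.
# All paths and values below are illustrative placeholders.
ACCUMULO_HOME="${ACCUMULO_HOME:-$(mktemp -d)}"
mkdir -p "$ACCUMULO_HOME/conf"

# accumulo-env.sh: export the Hadoop/ZooKeeper/Java homes the scripts check for
# (placeholder paths).
cat > "$ACCUMULO_HOME/conf/accumulo-env.sh" <<'EOF'
export HADOOP_PREFIX=/usr/lib/hadoop
export ZOOKEEPER_HOME=/usr/lib/zookeeper
export JAVA_HOME=/usr/lib/jvm/java
EOF

# log4j.properties: send log output to the console so shell errors are visible.
cat > "$ACCUMULO_HOME/conf/log4j.properties" <<'EOF'
log4j.rootLogger=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} [%c] %-5p: %m%n
EOF

echo "minimal client conf written to $ACCUMULO_HOME/conf"
```

With that in place, the shell can target any instance via the -z flag, e.g. `$ACCUMULO_HOME/bin/accumulo shell -u root -z instance zoo1,zoo2,zoo3` (instance name and zookeeper hosts as in the example elsewhere in the thread).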
On Wed, Dec 4, 2013 at 1:00 PM, Roshan Punnoose <[EMAIL PROTECTED]> wrote:

> I get:
>
> Accumulo is not properly configured.
>
> Try running $ACCUMULO_HOME/bin/bootstrap_config.sh and then editing
> $ACCUMULO_HOME/conf/accumulo-env.sh
>
> My guess is that the conf directory needs to be semi populated with at
> least the accumulo-env.sh?
>
>
> On Wed, Dec 4, 2013 at 3:40 PM, Eric Newton <[EMAIL PROTECTED]> wrote:
>
> > use the "-z" option:
> >
> > $ ./bin/accumulo shell -u root -z instance zoo1,zoo2,zoo3
> >
> > -Eric
> >
> >
> > On Wed, Dec 4, 2013 at 3:13 PM, Roshan Punnoose <[EMAIL PROTECTED]> wrote:
> >
> > > This is cool. I couldn't get it working with 1.5.0, but 1.7.0-SNAPSHOT
> > > worked perfectly. (I'll probably just downgrade sometime soon, or wait
> > > for a release.)
> > >
> > > I had to add this property to the hoya-client.xml to get it to look for
> > > the hadoop/zookeeper jars in the right places. (Though that property
> > > seems to already be set in the yarn-site.xml):
> > > <property>
> > >   <name>yarn.application.classpath</name>
> > >   <value>/etc/hadoop/conf,/usr/lib/hadoop/*,/usr/lib/hadoop/lib/*,/usr/lib/hadoop-hdfs/*,/usr/lib/hadoop-hdfs/lib/*,/usr/lib/hadoop-yarn/*,/usr/lib/hadoop-yarn/lib/*,/usr/lib/hadoop-mapreduce/*,/usr/lib/hadoop-mapreduce/lib/*,/usr/lib/zookeeper/*</value>
> > > </property>
> > >
> > > Also, any ideas on how to get the shell connected to it without a conf
> > > directory? I can just use the generated conf with the shell for now.
> > >
> > > Roshan
> > >
> > >
> > > On Wed, Dec 4, 2013 at 11:25 AM, Billie Rinaldi <[EMAIL PROTECTED]> wrote:
> > >
> > > > Interesting, let us know if having the conf populated in the tarball
> > > > makes a difference.  I'd recommend using 1.5.1-SNAPSHOT, by the way.
> > > > 1.5.0 processes don't return proper exit codes when there are errors.
> > > >
> > > >
> > > > On Wed, Dec 4, 2013 at 8:19 AM, Roshan Punnoose <[EMAIL PROTECTED]> wrote:
> > > >
> > > > > I was able to get most of the way there. Turning off the log
> > > > > aggregation helped a lot; the forked exceptions were not getting to
> > > > > the aggregated TFile in HDFS.
> > > > >
> > > > > I am trying to run accumulo 1.5.0 and for some reason, the
> > > > > propagatedConf that Hoya generates is not getting loaded during the
> > > > > accumulo initialize phase. I think it has to do with the fact that I
> > > > > already have a populated conf directory (with a sample
> > > > > accumulo-site.xml) in the accumulo image I am sending. I'm going to
> > > > > try and build a new accumulo image from source and try again with
> > > > > Hoya 0.7.0. The error I am seeing makes it seem like the Accumulo
> > > > > Initialize is not looking at the propagatedConf "instance.dfs.dir"
> > > > > property but using the default to put the data in "/accumulo" in
> > > > > HDFS.
> > > > >
> > > > > Will keep trying. Thanks for the help!
> > > > >
> > > > >
> > > > > On Wed, Dec 4, 2013 at 4:13 AM, Steve Loughran <[EMAIL PROTECTED]> wrote:
> > > > >
> > > > > > The forked code goes into the AM logs as it's just a forked run of
> > > > > > {{accumulo init}} to set up the file structure.
> > > > > >
> > > > > > Error code 1 implies accumulo didn't want to start, which could be
> > > > > > from some environment problem; it needs to know where ZK home as
> > > > > > well as hadoop home are. We set those up before running accumulo,
> > > > > > but they do