Accumulo >> mail # dev >> Current Work on Accumulo in Hoya


Re: Current Work on Accumulo in Hoya
I was able to get most of the way there. Turning off log aggregation
helped a lot; the forked process's exceptions were not making it into the
aggregated TFile in HDFS.
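
For reference, the switch in question is the standard YARN property; a minimal yarn-site.xml fragment (assuming Hadoop 2.x property names) would be:

```xml
<!-- yarn-site.xml: disable log aggregation so container logs stay on each
     NodeManager's local disk instead of being rolled into a TFile in HDFS -->
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>false</value>
</property>
```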

I am trying to run accumulo 1.5.0, and for some reason the propagatedConf
that Hoya generates is not getting loaded during the accumulo initialize
phase. I think it has to do with the fact that I already have a populated
conf directory (with a sample accumulo-site.xml) in the accumulo image I am
sending. I'm going to try to build a new accumulo image from source and
try again with Hoya 0.7.0. The error I am seeing makes it seem like the
Accumulo initialize is not looking at the propagatedConf "instance.dfs.dir"
property but is using the default and putting the data in "/accumulo" in HDFS.

Will keep trying. Thanks for the help!
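
One quick way to see whether the propagated configuration actually carries
instance.dfs.dir is to grep the generated accumulo-site.xml before init runs.
A small sketch (the default conf path and the helper name are mine, not
Hoya's):

```shell
#!/bin/sh
# Sketch: report whether an accumulo-site.xml defines instance.dfs.dir.
# The fallback conf path below is illustrative, not a Hoya convention.
check_dfs_dir() {
  conf_dir=$1
  site="$conf_dir/accumulo-site.xml"
  if [ ! -f "$site" ]; then
    echo "no accumulo-site.xml under $conf_dir"
    return 1
  fi
  # Print the property if present; otherwise note that init will fall
  # back to the default /accumulo directory in HDFS.
  grep -A 1 'instance.dfs.dir' "$site" \
    || echo "instance.dfs.dir not set; init will default to /accumulo"
}

check_dfs_dir "${ACCUMULO_CONF_DIR:-./propagatedconf}" || true
```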
On Wed, Dec 4, 2013 at 4:13 AM, Steve Loughran <[EMAIL PROTECTED]> wrote:

> The forked code goes into the AM logs, as it's just a forked run of
> {{accumulo init}} to set up the file structure.
>
> Error code 1 implies accumulo didn't want to start, which could be from
> some environment problem - it needs to know where ZK home as well as hadoop
> home are. We set those up before running accumulo, but they do need to be
> passed down to the cluster config (which is then validated to see that they
> are defined and point to a local directory - but we don't look in the
> directory to see if they have all the JARs the accumulo launcher expects).
>
> If you can, try to do this with kerberos off first. Kerberos complicates
> things.
>
> On 3 December 2013 23:57, Roshan Punnoose <[EMAIL PROTECTED]> wrote:
>
> > I am now getting an exception when Hoya tries to initialize the accumulo
> > cluster:
> >
> > Service accumulo failed in state STARTED; cause:
> > org.apache.hadoop.yarn.service.launcher.ServiceLaunchException: accumulo
> > failed with code 1
> > org.apache.hadoop.yarn.service.launcher.ServiceLaunchException: accumulo
> > failed with code 1
> > at
> > org.apache.hadoop.hoya.yarn.service.ForkedProcessService.reportFailure(ForkedProcessService.java:162)
> >
> > Any ideas as to where logs of a Forked process may go in Yarn?
> >
> >
> > > On Tue, Dec 3, 2013 at 4:24 PM, Roshan Punnoose <[EMAIL PROTECTED]> wrote:
> >
> > > Ah never mind. Got further. Basically, I had specified
> > > the yarn.resourcemanager.address to use the resourcemanager scheduler
> > > port by mistake. Using the proper port got me further. Thanks!
> > >
> > >
> > > On Tue, Dec 3, 2013 at 4:17 PM, Roshan Punnoose <[EMAIL PROTECTED]> wrote:
> > >
> > >> Yeah, it seems to be honoring the kinit cache properly and retrieving
> > >> the correct kerberos ticket for validation.
> > >>
> > >>
> > >> On Tue, Dec 3, 2013 at 4:02 PM, Billie Rinaldi <[EMAIL PROTECTED]> wrote:
> > >>
> > >>> I haven't tried that out yet. Were you following the instructions at
> > >>> https://github.com/hortonworks/hoya/blob/master/src/site/markdown/security.md ?
> > >>>
> > >>>
> > >>> On Tue, Dec 3, 2013 at 12:46 PM, Roshan Punnoose <[EMAIL PROTECTED]> wrote:
> > >>>
> > >>> > I am trying to run Hoya on a Kerberos Secure cluster. I believe I
> > >>> > have all the keytabs in place, and have been able to run mapreduce
> > >>> > jobs with my user, etc. However, when I run the "hoya create"
> > >>> > command I get this exception:
> > >>> >
> > >>> > org.apache.hadoop.security.AccessControlException: Client cannot
> > >>> > authenticate via:[TOKEN]
> > >>> > at
> > >>> > org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:170)
> > >>> >
> > >>> > I thought that Hoya should be using Kerberos instead of the TOKEN.
> > >>> >
> > >>> > Also noticed that the SASL NEGOTIATE is responding with "TOKEN" as
> > >>> > well:
> > >>> >
> > >>> > 2013-12-03 20:45:04,530 [main] DEBUG security.SaslRpcClient -
> > >>> > Received SASL message state: NEGOTIATE
> > >>> > auths {
> > >>> >   method: "TOKEN"
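
Steve's point upthread about ZK and Hadoop homes can be sketched as a
standalone pre-flight check. This mirrors (but is not) Hoya's own
validation, and the variable names assume the usual HADOOP_HOME /
ZOOKEEPER_HOME conventions:

```shell
#!/bin/sh
# Sketch of a pre-flight check in the spirit of the validation described
# upthread: each home must be defined and point at a local directory,
# but nothing inspects the JARs inside it.
check_home() {
  name=$1
  dir=$2
  if [ -z "$dir" ]; then
    echo "$name is not set"
    return 1
  fi
  if [ ! -d "$dir" ]; then
    echo "$name=$dir is not a directory"
    return 1
  fi
  echo "$name=$dir ok"
}

check_home HADOOP_HOME "${HADOOP_HOME:-}" || true
check_home ZOOKEEPER_HOME "${ZOOKEEPER_HOME:-}" || true
```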