

Re: Current Work on Accumulo in Hoya
I am now getting an exception when Hoya tries to initialize the Accumulo cluster:

Service accumulo failed in state STARTED; cause:
org.apache.hadoop.yarn.service.launcher.ServiceLaunchException: accumulo
failed with code 1
org.apache.hadoop.yarn.service.launcher.ServiceLaunchException: accumulo
failed with code 1
at
org.apache.hadoop.hoya.yarn.service.ForkedProcessService.reportFailure(ForkedProcessService.java:162)

Any ideas as to where the logs of a forked process may go in YARN?
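(For anyone else hunting for these, a rough sketch of one place to look, assuming
log aggregation is enabled on the cluster; the application ID is just the example
ID that appears elsewhere in this thread:

  # Pull all container logs for the Hoya application once it has finished.
  # Needs yarn.log-aggregation-enable=true; otherwise the files stay on the
  # NodeManager hosts under their local userlogs directories.
  yarn logs -applicationId application_1381800165150_0014 > hoya-app.log
)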
On Tue, Dec 3, 2013 at 4:24 PM, Roshan Punnoose <[EMAIL PROTECTED]> wrote:

> Ah never mind. Got further. Basically, I had specified
> the yarn.resourcemanager.address to use the resourcemanager scheduler port
> by mistake. Using the proper port got me further. Thanks!
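(For reference, a minimal sketch of the two properties that got mixed up; the
port numbers are the stock Hadoop 2.x defaults and may differ on a given cluster:

  # yarn.resourcemanager.address            : client <-> RM RPC, default port 8032
  # yarn.resourcemanager.scheduler.address  : AM <-> scheduler RPC, default port 8030
  grep -A 3 'yarn.resourcemanager.address' "$HADOOP_CONF_DIR/yarn-site.xml"
  grep -A 3 'yarn.resourcemanager.scheduler.address' "$HADOOP_CONF_DIR/yarn-site.xml"
)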
>
>
> On Tue, Dec 3, 2013 at 4:17 PM, Roshan Punnoose <[EMAIL PROTECTED]> wrote:
>
>> Yeah, it seems to be honoring the kinit cache properly and retrieving the
>> correct kerberos ticket for validation.
>>
>>
>> On Tue, Dec 3, 2013 at 4:02 PM, Billie Rinaldi <[EMAIL PROTECTED]> wrote:
>>
>>> I haven't tried that out yet.  Were you following the instructions at
>>> https://github.com/hortonworks/hoya/blob/master/src/site/markdown/security.md ?
>>>
>>>
>>> On Tue, Dec 3, 2013 at 12:46 PM, Roshan Punnoose <[EMAIL PROTECTED]> wrote:
>>>
>>> > I am trying to run Hoya on a Kerberos Secure cluster. I believe I have
>>> > all the keytabs in place, and have been able to run mapreduce jobs with
>>> > my user, etc. However, when I run the "hoya create" command I get this
>>> > exception:
>>> >
>>> > org.apache.hadoop.security.AccessControlException: Client cannot
>>> > authenticate via:[TOKEN]
>>> > at
>>> > org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:170)
>>> >
>>> > I thought that Hoya should be using Kerberos instead of the TOKEN.
>>> >
>>> > Also noticed that the SASL NEGOTIATE is responding with "TOKEN" as well:
>>> >
>>> > 2013-12-03 20:45:04,530 [main] DEBUG security.SaslRpcClient - Received
>>> > SASL message state: NEGOTIATE
>>> > auths {
>>> >   method: "TOKEN"
>>> >   mechanism: "DIGEST-MD5"
>>> >   protocol: ""
>>> >   serverId: "default"
>>> > }
>>> >
>>> > That doesn't seem right either. Is there something I might be missing?
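(A quick client-side sanity check for the TOKEN-instead-of-Kerberos symptom,
sketched on the assumption that the client configuration is worth ruling out
first; both commands are standard Kerberos/Hadoop tooling:

  # Confirm a valid TGT is sitting in the cache the hoya client will use.
  klist

  # Confirm the client configuration actually enables Kerberos; this should
  # print "kerberos", not "simple".
  hdfs getconf -confKey hadoop.security.authentication
)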
>>> >
>>> >
>>> > On Fri, Oct 18, 2013 at 12:28 PM, Roshan Punnoose <[EMAIL PROTECTED]> wrote:
>>> >
>>> > > Yeah I noticed the git-flow style branching. Pretty cool.
>>> > >
>>> > >
>>> > > On Fri, Oct 18, 2013 at 12:22 PM, Ted Yu <[EMAIL PROTECTED]> wrote:
>>> > >
>>> > >> Roshan:
>>> > >> FYI
>>> > >> The develop branch of Hoya repo should be more up-to-date.
>>> > >>
>>> > >> Cheers
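(In case it helps anyone following along, a minimal sketch of switching to that
branch; the repository URL is the same hortonworks/hoya repo linked earlier in
the thread:

  git clone https://github.com/hortonworks/hoya.git
  cd hoya
  git checkout develop    # the more up-to-date branch, per Ted's note
)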
>>> > >>
>>> > >>
>>> > >> On Fri, Oct 18, 2013 at 8:33 AM, Billie Rinaldi <[EMAIL PROTECTED]> wrote:
>>> > >>
>>> > >> > Adding --debug to the command may print out more things as well.
>>> > >> > Also, the start-up is not instantaneous.  In the Yarn logs, you
>>> > >> > should see at first one container under the application (e.g.
>>> > >> > logs/userlogs/application_1381800165150_0014/container_1381800165150_0014_01_000001)
>>> > >> > and its out.txt will contain information about the initialization
>>> > >> > process.  If that goes well, it will start up containers for the
>>> > >> > other processes.
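(A concrete version of that check, using the example application ID above; the
"logs" prefix is taken from the example path and may differ depending on
yarn.nodemanager.log-dirs on the NodeManager host:

  APP=application_1381800165150_0014
  CONTAINER=container_1381800165150_0014_01_000001
  # out.txt from the first (AM) container describes the initialization process.
  cat logs/userlogs/$APP/$CONTAINER/out.txt
)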
>>> > >> >
>>> > >> >
>>> > >> > On Fri, Oct 18, 2013 at 8:20 AM, Roshan Punnoose <[EMAIL PROTECTED]> wrote:
>>> > >> >
>>> > >> > > Ah ok, will check the logs. When the create command did not seem
>>> > >> > > to do anything, I assumed it was just initializing the
>>> > >> > > cluster.json descriptor in hdfs.
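(If you want to check whether that descriptor actually landed, something like
the following; the .hoya location and <clustername> placeholder are assumptions
on my part, not paths confirmed in this thread:

  # Guessing that Hoya persists cluster specs under the user's HDFS home
  # directory; adjust the path to wherever your build writes it.
  hdfs dfs -ls /user/$USER/.hoya
  hdfs dfs -cat /user/$USER/.hoya/cluster/<clustername>/cluster.json
)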
>>> > >> > >
>>> > >> > >
>>> > >> > > On Fri, Oct 18, 2013 at 11:15 AM, Billie Rinaldi <[EMAIL PROTECTED]> wrote:
>>> > >> > >
>>> > >> > > > Sounds like we should plan a meetup.  The examples page [1] has
>>> > >> > > > an example create command to use for Accumulo (it requires a few more