Accumulo >> mail # dev >> Current Work on Accumulo in Hoya


Re: Current Work on Accumulo in Hoya
The stdout and stderr should go to files under
$YARN_LOG_DIR/userlogs/<appid>/<containerid>.  That is, unless you set
yarn.log-aggregation-enable to true in your yarn-site.xml.  In that case,
logs will be aggregated into HDFS, and you can access them with the "yarn
logs" command.
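A minimal yarn-site.xml sketch of the aggregation switch mentioned above (just this one property; retention and remote-dir settings are left at their defaults):

```xml
<!-- yarn-site.xml: aggregate finished containers' logs into HDFS -->
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>
```

With this enabled, "yarn logs -applicationId <appid>" prints the aggregated logs for a finished application.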
On Tue, Dec 3, 2013 at 3:57 PM, Roshan Punnoose <[EMAIL PROTECTED]> wrote:

> I am now getting an exception when Hoya tries to initialize the accumulo
> cluster:
>
> Service accumulo failed in state STARTED; cause:
> org.apache.hadoop.yarn.service.launcher.ServiceLaunchException: accumulo
> failed with code 1
> org.apache.hadoop.yarn.service.launcher.ServiceLaunchException: accumulo
> failed with code 1
> at org.apache.hadoop.hoya.yarn.service.ForkedProcessService.reportFailure(ForkedProcessService.java:162)
>
> Any ideas as to where logs of a Forked process may go in Yarn?
>
>
> On Tue, Dec 3, 2013 at 4:24 PM, Roshan Punnoose <[EMAIL PROTECTED]> wrote:
>
> > Ah, never mind. Got further. Basically, I had specified
> > the yarn.resourcemanager.address to use the resourcemanager scheduler port
> > by mistake. Using the proper port got me further. Thanks!
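The two addresses mixed up above are separate endpoints in yarn-site.xml; a sketch of the distinction, assuming the stock Hadoop 2 default ports (8032 for the client RPC, 8030 for the scheduler) and a hypothetical host name rmhost:

```xml
<!-- Client-facing RM RPC: the address clients such as Hoya should use -->
<property>
  <name>yarn.resourcemanager.address</name>
  <value>rmhost:8032</value>
</property>
<!-- Scheduler endpoint: used by ApplicationMasters, not by clients -->
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>rmhost:8030</value>
</property>
```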
> >
> >
> > On Tue, Dec 3, 2013 at 4:17 PM, Roshan Punnoose <[EMAIL PROTECTED]> wrote:
> >
> >> Yeah, it seems to be honoring the kinit cache properly and retrieving the
> >> correct Kerberos ticket for validation.
> >>
> >>
> >> On Tue, Dec 3, 2013 at 4:02 PM, Billie Rinaldi <[EMAIL PROTECTED]> wrote:
> >>
> >>> I haven't tried that out yet.  Were you following the instructions at
> >>> https://github.com/hortonworks/hoya/blob/master/src/site/markdown/security.md ?
> >>>
> >>>
> >>> On Tue, Dec 3, 2013 at 12:46 PM, Roshan Punnoose <[EMAIL PROTECTED]> wrote:
> >>>
> >>> > I am trying to run Hoya on a Kerberos-secured cluster. I believe I have
> >>> > all the keytabs in place, and have been able to run MapReduce jobs with
> >>> > my user, etc. However, when I run the "hoya create" command I get this
> >>> > exception:
> >>> >
> >>> > org.apache.hadoop.security.AccessControlException: Client cannot
> >>> > authenticate via:[TOKEN]
> >>> > at org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:170)
> >>> > I thought that Hoya should be using Kerberos instead of the TOKEN.
> >>> >
> >>> > Also noticed that the SASL NEGOTIATE is responding with "TOKEN" as well:
> >>> >
> >>> > 2013-12-03 20:45:04,530 [main] DEBUG security.SaslRpcClient - Received SASL
> >>> > message state: NEGOTIATE
> >>> > auths {
> >>> >   method: "TOKEN"
> >>> >   mechanism: "DIGEST-MD5"
> >>> >   protocol: ""
> >>> >   serverId: "default"
> >>> > }
> >>> >
> >>> > That doesn't seem right either. Is there something I might be missing?
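A server that offers only TOKEN/DIGEST-MD5 in NEGOTIATE is what you see when the RPC layer is not configured for Kerberos. One thing worth checking (a guess, not a confirmed diagnosis for this thread) is whether the client is resolving a core-site.xml with security enabled; the relevant properties look like:

```xml
<!-- core-site.xml: switch Hadoop RPC from simple auth to Kerberos -->
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>
```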
> >>> >
> >>> >
> >>> > On Fri, Oct 18, 2013 at 12:28 PM, Roshan Punnoose <[EMAIL PROTECTED]> wrote:
> >>> >
> >>> > > Yeah I noticed the git-flow style branching. Pretty cool.
> >>> > >
> >>> > >
> >>> > > On Fri, Oct 18, 2013 at 12:22 PM, Ted Yu <[EMAIL PROTECTED]> wrote:
> >>> > >
> >>> > >> Roshan:
> >>> > >> FYI
> >>> > >> The develop branch of Hoya repo should be more up-to-date.
> >>> > >>
> >>> > >> Cheers
> >>> > >>
> >>> > >>
> >>> > >> On Fri, Oct 18, 2013 at 8:33 AM, Billie Rinaldi <[EMAIL PROTECTED]> wrote:
> >>> > >>
> >>> > >> > Adding --debug to the command may print out more things as well.  Also,
> >>> > >> > the start-up is not instantaneous.  In the Yarn logs, you should see at
> >>> > >> > first one container under the application (e.g.
> >>> > >> > logs/userlogs/application_1381800165150_0014/container_1381800165150_0014_01_000001)
> >>> > >> > and its out.txt will contain information about the initialization process.
> >>> > >> > If that goes well, it will start up containers for the other processes.
> >>> >
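The container log layout described in the thread can be sketched as follows. The application and container ids below are the example ids quoted above; substitute your own (e.g. from "yarn application -list"), and note the /var/log/hadoop-yarn fallback is an assumption, not a universal default:

```shell
# Construct the per-container log path where stdout/stderr/out.txt land
# when log aggregation is NOT enabled.
YARN_LOG_DIR="${YARN_LOG_DIR:-/var/log/hadoop-yarn}"
APP_ID="application_1381800165150_0014"
CONTAINER_ID="container_1381800165150_0014_01_000001"
echo "${YARN_LOG_DIR}/userlogs/${APP_ID}/${CONTAINER_ID}/out.txt"
```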