Re: Is it OK to run with no secondary namenode?
If you want to run the secondary namenode (2NN) on a different node than the
NN, you need to set dfs.http.address in the 2NN's configuration to point to
the namenode's HTTP server address. See
http://www.cloudera.com/blog/2009/02/10/multi-host-secondarynamenode-configuration/
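
For example, hdfs-site.xml on the 2NN host would carry an entry along these
lines (the hostname here is just a placeholder; 50070 is the NN's default
HTTP port):

    <property>
      <name>dfs.http.address</name>
      <value>namenode.example.com:50070</value>
    </property>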

- Aaron

On Mon, Sep 28, 2009 at 2:17 PM, Todd Lipcon <[EMAIL PROTECTED]> wrote:

> On Mon, Sep 28, 2009 at 11:10 AM, Mayuran Yogarajah <
> [EMAIL PROTECTED]> wrote:
>
> > Hey Todd,
> >
> >> I don't personally like to use the slaves/masters files for managing
> >> which daemons run on which nodes. But, if you'd like to, it looks like
> >> you should put it in the "masters" file, not the slaves file. Look at
> >> how start-dfs.sh works to understand how those files are used.
> >>
> >> -Todd
> >>
> >>
> >
> > DOH, I meant to say masters, not slaves =(
> > If I may ask, how are you managing the various daemons?
> >
> >
> Using Cloudera's distribution of Hadoop, you can simply use Linux init
> scripts to manage which daemons run on which nodes. For a large cluster,
> you'll want to use something like kickstart, cfengine, puppet, etc., to
> manage your configuration, and that includes which init scripts are
> enabled.
>
> -Todd
>
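
On Todd's pointer to start-dfs.sh: in the 0.20-era scripts the secondary
namenode is started on every host listed in conf/masters, roughly like this
(the hostname below is just a placeholder):

    # conf/masters -- one hostname per line; these hosts run the 2NN
    snn.example.com

    # from bin/start-dfs.sh (approximate excerpt)
    "$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR --hosts masters start secondarynamenode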