Subject: SNN


On 09/04/2012 06:33 PM, Michael Segel wrote:
> The other thing you have to look at is the underlying start and stop scripts, to see what is being passed on to them.
>
> I thought there was a parameter that would override the defaults where you specify the slaves and masters files, but I could be wrong.
>
> Since this is raw Apache, I don't think that it sets up scripts in each machine's /etc/init.d directory, or does it?
>
> If it does, then you may just want to roll your own start and stop script and then make sure that the admins have sudo privileges to run those scripts.
>
>
> On Sep 4, 2012, at 11:05 AM, Terry Healy <[EMAIL PROTECTED]> wrote:
>
>> Can you please show contents of masters and slaves config files?
>>
>>
>>
OK, thank you Michael for the hint (and Terry for your answer).

The problem arose from changing the default setting for HADOOP_SLAVES
in hadoop-env.sh: I chose a location different from the default.
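For reference, the kind of hadoop-env.sh change that triggers this
looks roughly like the following (the path is made up for
illustration):

# in pathtoconf/hadoop-env.sh: point HADOOP_SLAVES somewhere non-default
export HADOOP_SLAVES=/some/other/path/slaves

Since the export runs unconditionally every time hadoop-env.sh is
sourced, it clobbers any HADOOP_SLAVES value set earlier in the same
shell.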

Two changes make the scripts succeed:

1) hadoop-config.sh sets HADOOP_SLAVES to "pathtoconf/masters" for the
secondary namenode, but right after that it sources hadoop-env.sh,
which reverts the value to "pathtoconf/slaves".

So the solution is to move the three lines that source hadoop-env.sh,
which currently sit after the block setting HADOOP_SLAVES, to before
that block.

The changes to hadoop-config.sh in pathtohadoop/libexec/, shown with
diff (note: the diff was apparently taken with the modified file first,
so the "<" lines show the new position of the block and the ">" lines
the old one):
56,59d55
< if [ -f "${HADOOP_CONF_DIR}/hadoop-env.sh" ]; then
<   . "${HADOOP_CONF_DIR}/hadoop-env.sh"
< fi
<
72a69,71
> if [ -f "${HADOOP_CONF_DIR}/hadoop-env.sh" ]; then
>   . "${HADOOP_CONF_DIR}/hadoop-env.sh"
> fi
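For clarity, after the move the relevant part of hadoop-config.sh
reads roughly as follows (a sketch based on the stock 1.x script, with
the surrounding lines elided):

# source the user's environment overrides first ...
if [ -f "${HADOOP_CONF_DIR}/hadoop-env.sh" ]; then
  . "${HADOOP_CONF_DIR}/hadoop-env.sh"
fi

# ... so that an explicit --hosts argument (e.g. "masters" for the
# secondary namenode) wins over any HADOOP_SLAVES exported in
# hadoop-env.sh
if [ $# -gt 1 ]
then
  if [ "--hosts" = "$1" ]
  then
    shift
    slavesfile=$1
    shift
    export HADOOP_SLAVES="${HADOOP_CONF_DIR}/$slavesfile"
  fi
fi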

2) slaves.sh assigns the value to HOSTLIST (a variable meant to
preserve the HADOOP_SLAVES value passed in by the caller) only after it
has sourced hadoop-config.sh without arguments, by which time
hadoop-env.sh has already reverted the value.

The solution is again to move some lines: the HOSTLIST assignment has
to come before hadoop-config.sh is sourced (same diff direction as
above):

41,45d40
< # If the slaves file is specified in the command line,
< # then it takes precedence over the definition in
< # hadoop-env.sh. Save it here.
< HOSTLIST=$HADOOP_SLAVES
<
51a47,51
> # If the slaves file is specified in the command line,
> # then it takes precedence over the definition in
> # hadoop-env.sh. Save it here.
> HOSTLIST=$HADOOP_SLAVES
>
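With that move applied, the top of slaves.sh saves the inherited value
before it can be clobbered, roughly:

# Save the HADOOP_SLAVES value inherited from the caller (e.g.
# hadoop-daemons.sh, which parsed --hosts) before hadoop-config.sh is
# sourced again and hadoop-env.sh gets a chance to revert it.
HOSTLIST=$HADOOP_SLAVES

if [ -e "$bin/../libexec/hadoop-config.sh" ]; then
  . "$bin"/../libexec/hadoop-config.sh
else
  . "$bin/hadoop-config.sh"
fi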

Should I open a JIRA?
Thank you
giovanni aka surfer