Hadoop >> mail # dev >> run hadoop directly out of trunk checkout?
Re: run hadoop directly out of trunk checkout?
Eric,

Yesterday I was trying the same thing. I used the script from HADOOP-6846
(after doing a s/mapred/mapreduce/g).

Then I had to add the hadoop-* JARs to the classpath.

Then, when trying to start, the scripts started complaining about things not
found in /usr/share.

Then I gave up.
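For readers following along, the first two steps amount to roughly the
following; the helper-script filename and the JAR locations are assumptions
for illustration, not details taken from this thread:

```shell
# Rough sketch of the steps above. The script name and the build/ JAR
# paths are assumptions; adjust them to your own checkout layout.

# 1. The s/mapred/mapreduce/g rename on the HADOOP-6846 script:
sed -i 's/mapred/mapreduce/g' hadoop-setup-single-node.sh  # name assumed

# 2. Adding the built hadoop-* JARs to the classpath:
for jar in "$HADOOP_COMMON_HOME"/build/hadoop-*.jar \
           "$HADOOP_HDFS_HOME"/build/hadoop-*.jar \
           "$HADOOP_MAPREDUCE_HOME"/build/hadoop-*.jar; do
  CLASSPATH="$CLASSPATH:$jar"
done
export CLASSPATH
```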

Thanks.

Alejandro

On Tue, Jun 21, 2011 at 2:41 PM, Eric Caspole <[EMAIL PROTECTED]> wrote:

> Is it still possible to run hadoop directly out of a svn checkout and build
> of trunk? A few weeks ago I was using the three variables
> HADOOP_HDFS_HOME/HADOOP_COMMON_HOME/HADOOP_MAPREDUCE_HOME and it all
> worked fine. It seems there have been a lot of changes in the scripts, and I
> can't get it to work or figure out what else to set either in the shell env
> or at the top of hadoop-env.sh. I have checked out trunk with a dir
> structure like this:
>
> [trunk]$ pwd
> /home/ecaspole/views/hadoop/trunk
> [trunk]$ ll
> total 12
> drwxrwxr-x. 12 ecaspole ecaspole 4096 Jun 21 15:55 common
> drwxrwxr-x. 10 ecaspole ecaspole 4096 Jun 21 13:20 hdfs
> drwxrwxr-x. 11 ecaspole ecaspole 4096 Jun 21 16:19 mapreduce
>
> [ecaspole@wsp133572wss hdfs]$ env | grep HADOOP
> HADOOP_HDFS_HOME=/home/ecaspole/views/hadoop/trunk/hdfs/
> HADOOP_COMMON_HOME=/home/ecaspole/views/hadoop/trunk/common
> HADOOP_MAPREDUCE_HOME=/home/ecaspole/views/hadoop/trunk/mapreduce/
>
> [hdfs]$ ./bin/start-dfs.sh
> ./bin/start-dfs.sh: line 54: /home/ecaspole/views/hadoop/trunk/common/bin/../bin/hdfs:
> No such file or directory
> Starting namenodes on []
> localhost: starting namenode, logging to /home/ecaspole/views/hadoop/trunk/common/logs/ecaspole/hadoop-ecaspole-namenode-wsp133572wss.amd.com.out
> localhost: Hadoop common not found.
> localhost: starting datanode, logging to /home/ecaspole/views/hadoop/trunk/common/logs/ecaspole/hadoop-ecaspole-datanode-wsp133572wss.amd.com.out
> localhost: Hadoop common not found.
> Secondary namenodes are not configured.  Cannot start secondary namenodes.
>
> Does anyone else actually run it this way? If so could you show what
> variables you set and where so the components can find each other?
>
> Otherwise, what is the recommended way to run a build of trunk?
> Thanks,
> Eric
>
>
>
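For reference, the setup Eric describes boils down to exporting the three
component homes shown in his mail. The symlink workaround below is an
assumption suggested by the start-dfs.sh error (which looks for the hdfs
launcher under common/bin/../bin), not something confirmed in the thread:

```shell
# The three component homes from the mail above; adjust to your checkout.
export HADOOP_COMMON_HOME=/home/ecaspole/views/hadoop/trunk/common
export HADOOP_HDFS_HOME=/home/ecaspole/views/hadoop/trunk/hdfs
export HADOOP_MAPREDUCE_HOME=/home/ecaspole/views/hadoop/trunk/mapreduce

# The "No such file or directory" error shows start-dfs.sh resolving the
# hdfs script relative to HADOOP_COMMON_HOME/bin. Symlinking it there is
# one possible workaround (an assumption, untested against this trunk):
ln -s "$HADOOP_HDFS_HOME/bin/hdfs" "$HADOOP_COMMON_HOME/bin/hdfs"

"$HADOOP_HDFS_HOME/bin/start-dfs.sh"
```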