Re: webapps/ CLASSPATH err

On Feb 19, 2013, at 11:43, Harsh J wrote:

> Hi Keith,
>
> The webapps/hdfs bundle is present at
> $HADOOP_PREFIX/share/hadoop/hdfs/ directory of the Hadoop 2.x release
> tarball. This should get on the classpath automatically as well.

Hadoop 2.0 Yarn does indeed have a share/ dir, but Hadoop 2.0 MR1 doesn't have a share/ dir at all.  Is MR1 not usable?  I was hoping to use it as a stepping stone between older versions of Hadoop (for which I have found some EC2 support, not least an actual ec2/ dir and associated scripts in src/contrib/ec2) and Yarn, for which I have found no such support, provided scripts, or online walkthroughs yet.  However, I am discovering that H2 MR1 is sufficiently different from older versions of Hadoop that I can't easily extrapolate from those previous successes (the bin/ directory is quite different, for one thing).  At the same time, H2 MR1 is also sufficiently different from Yarn that I can't easily apply Yarn advice to it (as noted, I don't even see a share/ directory in H2 MR1, so I'm not sure how to apply the response above).
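(As an aside, if the bundle really does live under share/hadoop/hdfs in the Yarn tarball, my understanding is that it could also be forced onto the classpath by hand, since bin/hadoop appends HADOOP_CLASSPATH to the JVM classpath.  A rough sketch; the install path here is a guess on my part, not something from my actual setup:)

```shell
# Hypothetical install location -- adjust HADOOP_PREFIX to your own layout
HADOOP_PREFIX="$HOME/hadoop-2.0.0-cdh4.1.3"

# bin/hadoop appends HADOOP_CLASSPATH to the JVM classpath, so the
# share/hadoop/hdfs dir (which holds webapps/hdfs) can be added manually,
# e.g. from conf/hadoop-env.sh
export HADOOP_CLASSPATH="$HADOOP_PREFIX/share/hadoop/hdfs"
```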

> What "bin/hadoop-daemon.sh" script are you using, the one from the MR1
> "aside" tarball or the chief hadoop-2 one?

I figured, as long as I'm trying to use MR1, I would use it exclusively and not touch the Yarn installation at all, so I'm relying entirely on the conf/ and bin/ dirs under MR1 (note that MR1's sbin/ dir only contains a nonexecutable "task-controller", not all the other stuff that Yarn's sbin/ dir contains)...so I'm using MR1's bin/hadoop and bin/hadoop-daemon.sh, nothing else.

> On my tarball setups, I 'start-dfs.sh' via the regular tarball, and it
> works fine.

MR1's bin/ dir has no such executable, nor does it have the conventional start-all.sh I'm used to.  I recognize those script names from older versions of Hadoop, but H2 MR1 doesn't provide them.  I'm using hadoop-2.0.0-mr1-cdh4.1.3.
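For anyone following along, my working assumption is that the daemons can still be started one at a time with the scripts MR1 does ship, the way the old start-all.sh wrapper effectively did under the covers.  A sketch (the install path matches my setup, but I haven't verified this end to end):

```shell
# start-dfs.sh / start-all.sh aren't in MR1's bin/, but each HDFS daemon
# can be launched individually with hadoop-daemon.sh, mirroring what the
# Hadoop 1.x start-all.sh wrapper used to do
cd "$HOME/hadoop-2.0.0-mr1-cdh4.1.3"
for daemon in namenode datanode; do
  bin/hadoop-daemon.sh start "$daemon"
done
bin/start-mapred.sh   # jobtracker + tasktrackers
```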

> Another simple check you could do is to try to start with
> "$HADOOP_PREFIX/bin/hdfs namenode" to see if it at least starts well
> this way and brings up the NN as a foreground process.

H2 MR1's bin/ dir doesn't have an hdfs executable in it.  Admittedly, H2 Yarn's bin/ dir does.  The following are my H2 MR1 bin/ options:
~/hadoop-2.0.0-mr1-cdh4.1.3/ $ ls -las bin/
total 60
 4 drwxr-xr-x  2 ec2-user ec2-user  4096 Feb 18 23:45 ./
 4 drwxr-xr-x 17 ec2-user ec2-user  4096 Feb 19 00:08 ../
20 -rwxr-xr-x  1 ec2-user ec2-user 17405 Jan 27 01:07 hadoop*
 8 -rwxr-xr-x  1 ec2-user ec2-user  4356 Jan 27 01:07 hadoop-config.sh*
 4 -rwxr-xr-x  1 ec2-user ec2-user  3988 Jan 27 01:07 hadoop-daemon.sh*
 4 -rwxr-xr-x  1 ec2-user ec2-user  1227 Jan 27 01:07 hadoop-daemons.sh*
 4 -rwxr-xr-x  1 ec2-user ec2-user  2710 Jan 27 01:07 rcc*
 4 -rwxr-xr-x  1 ec2-user ec2-user  2043 Jan 27 01:07 slaves.sh*
 4 -rwxr-xr-x  1 ec2-user ec2-user  1159 Jan 27 01:07 start-mapred.sh*
 4 -rwxr-xr-x  1 ec2-user ec2-user  1068 Jan 27 01:07 stop-mapred.sh*
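My best guess at the equivalent foreground check in this 1.x-style layout (untested) is to go through bin/hadoop itself, which dispatches on a command word:

```shell
# In the 1.x-style layout there is no bin/hdfs; bin/hadoop takes the daemon
# name as its first argument, so this should be the analogue of
# "bin/hdfs namenode" and run the NN as a foreground process
cd "$HOME/hadoop-2.0.0-mr1-cdh4.1.3"   # my install path
bin/hadoop namenode
```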

________________________________________________________________________________
Keith Wiley     [EMAIL PROTECTED]     keithwiley.com    music.keithwiley.com

"You can scratch an itch, but you can't itch a scratch. Furthermore, an itch can
itch but a scratch can't scratch. Finally, a scratch can itch, but an itch can't
scratch. All together this implies: He scratched the itch from the scratch that
itched but would never itch the scratch from the itch that scratched."
                                           --  Keith Wiley
________________________________________________________________________________