Hello, Harsh!

I am Yoonmin from South Korea.

I have a question for you about the Hadoop classpath.
My Hadoop home folder is the same on all nodes in my cluster:
/home/hadoop/hadoop-1.2.1

Then I compiled my own modified version of Hadoop, which produced a file named
"hadoop-core-1.2.2.SNAPSHOT.jar".

Using scp, I distributed that file to all of my nodes, into both
/home/hadoop/hadoop-1.2.1 and /home/hadoop/hadoop-1.2.1/build.
Then I restarted DFS and MapReduce, but my changes did not take effect when I
submitted a new job to the cluster.
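
For reference, here is roughly what I ran (the node names and the loop are
only a sketch of my actual steps):

# copy the rebuilt jar to both locations on every node (node names are placeholders)
for node in node01 node02 node03; do
  scp hadoop-core-1.2.2.SNAPSHOT.jar hadoop@$node:/home/hadoop/hadoop-1.2.1/
  scp hadoop-core-1.2.2.SNAPSHOT.jar hadoop@$node:/home/hadoop/hadoop-1.2.1/build/
done
# restart the daemons with the standard Hadoop 1.x scripts
/home/hadoop/hadoop-1.2.1/bin/stop-mapred.sh
/home/hadoop/hadoop-1.2.1/bin/stop-dfs.sh
/home/hadoop/hadoop-1.2.1/bin/start-dfs.sh
/home/hadoop/hadoop-1.2.1/bin/start-mapred.sh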

This seemed strange to me, so I deleted hadoop-core-1.2.2.SNAPSHOT.jar from
the two folders mentioned above.
However, DFS and MapReduce still started successfully!
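
Concretely, the deletion and restart looked like this (again only a sketch;
stop-all.sh and start-all.sh are the stock Hadoop 1.x scripts):

# remove the snapshot jar from both folders on each node
rm /home/hadoop/hadoop-1.2.1/hadoop-core-1.2.2.SNAPSHOT.jar
rm /home/hadoop/hadoop-1.2.1/build/hadoop-core-1.2.2.SNAPSHOT.jar
# restart all daemons; they still come up without the snapshot jar
/home/hadoop/hadoop-1.2.1/bin/stop-all.sh
/home/hadoop/hadoop-1.2.1/bin/start-all.sh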

What is going on with my Hadoop cluster?

I think this is strongly related to the classpath, so I am asking you: what
am I doing wrong?

Thank you!

Best Regards
Yoonmin

-----Original Message-----
From: Harsh J [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 28, 2014 7:20 AM
To: [EMAIL PROTECTED]
Subject: Re: how to set org.apache.hadoop classpath?

The imports referenced in your error come from the hadoop-common jar, and
won't be present in the hadoop-mapreduce-client-core jar.
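
(If in doubt about which jar provides a class, you can list a jar's contents;
the install path below is just an example based on your earlier mail:)

jar tf /home/software/hadoop-2.2.0/share/hadoop/common/hadoop-common-2.2.0.jar | grep Configuration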

The ideal way to set your compile classpath is to rely on the
"hadoop classpath" command. Provided $HADOOP_PREFIX/bin/ is on your $PATH,
run the following:

export CLASSPATH=$(hadoop classpath)
javac WordCount.java
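
Once it compiles, a typical way to package and run it would be (the jar name
and HDFS paths here are illustrative):

jar cf wordcount.jar WordCount*.class
hadoop jar wordcount.jar WordCount /user/hadoop/input /user/hadoop/output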

On Mon, Jan 27, 2014 at 1:43 PM, EdwardKing <[EMAIL PROTECTED]> wrote:
.:/home/software/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.2.0.jar:
still raises the above error? Where is it wrong?

--
Harsh J