As the title says, I am curious about the exact role of the hadoop-core-*.jar file.
At first I thought it contained all of the compiled Hadoop classes, and so was necessary for starting every Hadoop component, such as the DataNode and the NameNode.
However, even after I deleted every hadoop-core-*.jar file in the Hadoop home folder, the "start-all.sh" script still ran successfully.
I suspect this is strongly related to the classpath, but I am not sure.
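To check this, I put together a small sketch (assuming Hadoop 1.x class names; the NameNode class below may live in a different package on other versions) that prints which jar a core class is actually loaded from when run on the same classpath as the daemons:

```java
// WhereIsHadoopCore.java -- run with the daemons' classpath, e.g.
//   java -cp "$(hadoop classpath)" WhereIsHadoopCore
public class WhereIsHadoopCore {
    public static void main(String[] args) throws Exception {
        // Load the class by name so this compiles without Hadoop on the
        // compile-time classpath; adjust the name for your Hadoop version.
        Class<?> c = Class.forName("org.apache.hadoop.hdfs.server.namenode.NameNode");
        // getCodeSource() reveals the jar (or directory) the class came from.
        System.out.println(c.getProtectionDomain().getCodeSource().getLocation());
    }
}
```

If this prints a path other than my rebuilt jar, that would explain why deleting the jar in the home folder changes nothing.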
In addition, when I distributed a newly packaged hadoop-core-*.jar file containing modified source code for some experiments, the changes did not take effect.
This means the original, unmodified compiled classes are still the ones running in the cluster. As a result, I cannot see even a single log line from the logic I added to improve HDFS.
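As another check I plan to try, Hadoop's org.apache.hadoop.util.VersionInfo class is compiled into the core jar, so printing its build information should reveal which build the JVM is actually picking up (a minimal sketch, assuming a Hadoop 1.x-style layout where VersionInfo ships in hadoop-core):

```java
import org.apache.hadoop.util.VersionInfo;

// PrintBuild.java -- confirms which Hadoop build the classpath resolves to.
public class PrintBuild {
    public static void main(String[] args) {
        // Reports version, revision, and build user/date of whichever
        // core jar was loaded; if this does not match my rebuilt jar,
        // the old jar is still winning on the classpath.
        System.out.println(VersionInfo.getBuildVersion());
    }
}
```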
Why does this happen?
Could anyone explain this issue to me?