Hello, everyone!


As you can see in the title, I am curious about the exact role of the
hadoop-core-*.jar file.


At first I thought it contained all of the compiled Hadoop classes, so that it
would be necessary for starting every Hadoop component, such as the DataNode
and the NameNode.


However, even after I deleted every hadoop-core-*.jar file in the Hadoop home
directory, the "start-all.sh" script still ran successfully.


I suspect this is strongly related to how the classpath is built, but I am not
sure about that.
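One way to check this is to print the JVM's effective classpath: the start scripts assemble it from several directories, so a jar deleted from the home folder may still be picked up from somewhere else. Here is a minimal sketch in plain Java (no Hadoop dependency assumed) that you could run under the same environment your daemons use:

```java
public class ShowClasspath {
    public static void main(String[] args) {
        // The JVM's effective classpath, printed one entry per line.
        // Grep the output for "hadoop-core" to see which copy of the
        // jar (if any) is actually on the path.
        String cp = System.getProperty("java.class.path");
        for (String entry : cp.split(java.io.File.pathSeparator)) {
            System.out.println(entry);
        }
    }
}
```

If a hadoop-core jar still shows up here after you deleted the one in the home folder, another copy elsewhere on the path is supplying the classes.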


In addition, when I distributed a newly packaged hadoop-core-*.jar file that
contains modified source code for some experiments, my changes did not take
effect.

This means the original, unmodified compiled classes are still the ones
running in the cluster.

So I cannot see any of my changes take effect, from a single added log line to
the logic meant to improve HDFS.


Why does this happen?

Can anyone shed some light on this issue?


Thank you!