HDFS, mail # user - How to run Fault injection in HDFS
Re: How to run Fault injection in HDFS
Konstantin Boudnik 2009-11-20, 19:08
Hi Thanh.

Hmm, it sounds like you have an issue with the compilation of your code.

addDeprecation() has been added to Configuration in 0.21, I believe. And it is
there no matter how you compile your code (with FI or without).
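
As a quick sanity check (the jar name below is just an example), you can
verify whether the Configuration class in a given jar actually has that
method:

  $ javap -classpath hadoop-0.22.0-dev-core.jar \
      org.apache.hadoop.conf.Configuration | grep addDeprecation

If grep prints nothing, that jar predates the API, and the static
initialization of HdfsConfiguration will fail with exactly the
NoSuchMethodError shown below.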

Cos

On 11/19/09 10:12 , Thanh Do wrote:
> Sorry to dig up this thread again!
>
> I am waiting for the release of 0.21 so that I don't have to manually play
> around with AspectJ FI any more.
>
> I still have a problem running HDFS with instrumented code (with aspects).
>
> Here is what I did:
>
> In the root directory of HDFS:
> $ ant injectfaults
>
> $ ant jar-fault-inject
> At this point, I have a jar file containing the HDFS classes, namely
> hadoop-hdfs-0.22.0-dev-fi.jar, located in the build-fi/ folder.
>
> Now I go to the HADOOP folder (which contains the run scripts in the bin
> directory), and do the following:
> $ ant compile-core-classes
> (now I need the additional HDFS classes to be able to run start-dfs.sh,
> right?)
> What I did was copy $HDFS/build-fi/hadoop-hdfs-0.22.0-dev-fi.jar to
> $HADOOP/hadoop-hdfs-fi-core.jar (I needed to add the suffix "core" since the
> script includes all hadoop-*-core.jar files in the classpath).
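>
> (Concretely, the copy was something like this; the version string is the
> same as above:)
>
> $ cp $HDFS/build-fi/hadoop-hdfs-0.22.0-dev-fi.jar \
>      $HADOOP/hadoop-hdfs-fi-core.jar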
>
> $ bin/start-dfs.sh
> and got this error message:
>
> 2009-11-19 11:52:57,479 ERROR
> org.apache.hadoop.hdfs.server.namenode.NameNode:
> java.lang.NoSuchMethodError:
> org.apache.hadoop.conf.Configuration.addDeprecation(Ljava/lang/String;[Ljava/lang/String;)V
>          at
> org.apache.hadoop.hdfs.HdfsConfiguration.deprecate(HdfsConfiguration.java:44)
>          at
> org.apache.hadoop.hdfs.HdfsConfiguration.addDeprecatedKeys(HdfsConfiguration.java:48)
>          at
> org.apache.hadoop.hdfs.HdfsConfiguration.<clinit>(HdfsConfiguration.java:28)
>          at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1169)
>          at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1199)
>
> 2009-11-19 11:52:57,480 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
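>
> (One way to check which of the hadoop-*-core.jar files actually ships
> Configuration.class; paths here are illustrative:)
>
> $ for j in $HADOOP/hadoop-*-core.jar; do
>     unzip -l "$j" | grep -q 'org/apache/hadoop/conf/Configuration.class' \
>       && echo "$j"
>   done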
>
> Could anyone tell me how to solve this problem?
>
> Thank you so much.
>
>
> On Thu, Oct 8, 2009 at 10:41 AM, Konstantin Boudnik <[EMAIL PROTECTED]> wrote:
>
>     Thanks for looking into fault injection - it's a very interesting and
>     useful technique based on AspectJ.
>
>     Currently, it is fully integrated into HDFS only. There's a JIRA
>     (HADOOP-6204) which tracks the same effort for Common; once that is
>     done, all of Hadoop's components will have injection (as well as
>     fault injection) in place. This JIRA should be committed in a matter
>     of a couple of weeks.
>
>     For your immediate purpose you don't need to patch anything or do any
>     tweaking of the code: the fault injection framework is already in
>     place and ready to use.
>
>     For your current needs: to be able to run HDFS with instrumented
>     code you need to run a special build. To do so:
>       - % ant injectfaults - similar to a 'normal' build, but instruments
>     the code with aspects located under src/test/aop/**
>       - % ant jar-fault-inject - similar to a 'normal' jar creation but
>     instrumented
>       - % ant jar-test-fault-inject - similar to a 'normal' jar-test
>     creation but instrumented
>
>     Now, if you have the rest of the sub-projects built, you need to move
>     the instrumented jar files on top of the 'normal' files in your
>     installation directory. Please note that some renaming has to be
>     done: injected jar files have a '-fi' suffix in their names and
>     normal jar files don't. Thus, for now, you'll have to rename the
>     injected jars so they look like the normal ones picked up by the
>     configured classpath.
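>
>     For example (the version string is illustrative):
>
>       % mv hadoop-hdfs-0.22.0-dev-fi.jar hadoop-hdfs-0.22.0-dev.jar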
>
>     At this point you are all set: you have a production-quality Hadoop
>     with injected HDFS. As soon as the aforementioned JIRA is ready and
>     committed, we'll be able to provide a Hadoop-injected version by the
>     build's own means, without any renaming or manual intervention.