HDFS >> mail # user >> HDFS Corruption: How to Troubleshoot or Determine Root Cause?


Time Less 2011-05-18, 00:13
Jean-Daniel Cryans 2011-05-18, 00:16
Time Less 2011-05-18, 01:22
Thanh Do 2011-05-18, 01:31
Will Maier 2011-05-18, 10:36
Time Less 2011-05-18, 23:41
Re: HDFS Corruption: How to Troubleshoot or Determine Root Cause?
Hey Tim,

Hope everything is good with you.  Looks like you're having some fun with
hadoop.

> Can anyone enlighten me? Why is defaulting dfs.*.dir to /tmp a good idea?
It's not a good idea, it's just how it defaults.  You'll find hundreds or
probably thousands of these quirks as you work with the Apache/Cloudera Hadoop
distributions.  Never trust the defaults.
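
For anyone who lands on this thread later: the fix is to pin those directories explicitly in hdfs-site.xml rather than trusting the defaults. A minimal sketch, using the pre-0.21 property names (dfs.name.dir, dfs.data.dir) from this era; the /var/db/hadoop paths are just examples, adjust for your own layout:

```xml
<!-- hdfs-site.xml: override the /tmp-based defaults with persistent paths -->
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <!-- NameNode metadata; losing this directory loses the filesystem -->
    <value>/var/db/hadoop/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <!-- DataNode block storage; comma-separate multiple disks -->
    <value>/var/db/hadoop/data</value>
  </property>
</configuration>
```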

> submitted a JIRA
That's the way to do it.

> which appears to have been resolved ... but it does feel somewhat
> dissatisfying, since by the time you see the WARNING your cluster is already
> useless/dead.
And that's why, if it's relevant to you, your best bet is to resolve the
JIRA yourself.  Most of the contributors are big-picture types who would
look at "small" usability issues like this and scoff about "newbies".  Of
course, by the time you're familiar enough with Hadoop and comfortable
enough to fix your own JIRAs, you might also join the ranks of jaded
contributors who scoff at usability issues logged by newbies.

Case in point: I noted a while ago that when you run the namenode -format
command, it only accepts a capital Y (or lower case, I can't remember which),
and it fails silently if you give the wrong case.  I didn't particularly care
enough to fix it, having already learned my lesson.  You'll find lots of
these rough edges throughout Hadoop; it is not a user-friendly, out-of-the-box,
enterprise-ready product.

On Wed, May 18, 2011 at 4:41 PM, Time Less <[EMAIL PROTECTED]> wrote:

> Can anyone enlighten me? Why is defaulting dfs.*.dir to /tmp a good idea? I'd
> rather, in order of preference, have the following behaviours if dfs.*.dir
> are undefined:
>
>    1. Daemons log errors and fail to start at all,
>    2. Daemons start but default to /var/db/hadoop (or any persistent
>    location), meanwhile logging in huge screaming all-caps letters that it's
>    picked a default which may not be optimal,
>    3. Daemons start a botnet and DDoS random government websites, wait 36
>    hours, then phone the FBI and blame the administrator for it*,
>    4. Daemons write "persistent" data into /tmp without any great fanfare,
>    allowing a sense of complacency in its victims, only to report at a random
>    time in the future that everything is corrupted beyond repair, i.e. the
>    current behaviour.
>
> I submitted a JIRA (which appears to have been resolved, yay!) to at least
> add verbiage to the WARNING letting you know why you've irreversibly
> corrupted your cluster, but it does feel somewhat dissatisfying, since by
> the time you see the WARNING your cluster is already useless/dead.
>
>> It's not quite what you're asking for, but your NameNode's web interface
>> should provide a merged dump of all the relevant config settings,
>> including comments indicating the name of the config file where the
>> setting was defined, at the /conf path.
>
>
> Cool, though it looks like that's just the NameNode's config, right? Not
> the DataNode's config, which is the component corrupting data due to this
> default?
>
> --
> Tim Ellis
> Riot Games
> * Hello, FBI, #3 was a joke. I wish #4 was a joke, too.
>
>
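[A note for readers of the archive: the /conf servlet mentioned above serves the daemon's merged configuration as XML, so you can check what a running daemon actually loaded. A small sketch of that check follows; the SAMPLE string stands in for a real response, and the URL, port, and helper name are illustrative assumptions, not part of the thread. On a live cluster you would fetch the XML yourself, e.g. from http://namenode:50070/conf.]

```python
# Sketch: flag any dfs.*.dir property in a daemon's live /conf dump that
# still points at /tmp. SAMPLE mimics the servlet's XML output; replace it
# with the bytes fetched from your own NameNode or DataNode web port.
import xml.etree.ElementTree as ET

SAMPLE = """<?xml version="1.0"?>
<configuration>
  <property><name>dfs.name.dir</name><value>/var/db/hadoop/name</value></property>
  <property><name>dfs.data.dir</name><value>/tmp/hadoop-hdfs/dfs/data</value></property>
</configuration>"""

def tmp_backed_dirs(conf_xml):
    """Return {name: value} for dfs.*.dir properties living under /tmp."""
    bad = {}
    for prop in ET.fromstring(conf_xml).iter("property"):
        name = prop.findtext("name")
        value = prop.findtext("value") or ""
        if name and name.startswith("dfs.") and name.endswith(".dir"):
            # dfs.data.dir may hold a comma-separated list of directories
            if any(p.strip().startswith("/tmp") for p in value.split(",")):
                bad[name] = value
    return bad

print(tmp_backed_dirs(SAMPLE))  # flags dfs.data.dir in the sample
```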
Aaron Eng 2011-05-18, 23:55
Todd Lipcon 2011-05-19, 00:08
Time Less 2011-05-19, 01:30
Jonathan Disher 2011-05-19, 02:46
Todd Lipcon 2011-05-19, 02:51
Todd Lipcon 2011-05-19, 07:36
Time Less 2011-05-18, 01:45