Thread: RE: a question on NameNode (MapReduce >> mail # user)

Messages in this thread:
  Kartashov, Andy   2012-11-19, 14:43
  Kai Voigt         2012-11-19, 15:01
  Kartashov, Andy   2012-11-19, 15:14
  Ted Dunning       2012-11-19, 16:37
  Mohammad Tariq    2012-11-19, 15:20
  Kai Voigt         2012-11-19, 15:19

RE: a question on NameNode
Thank you Kai and Tariq.

From: Mohammad Tariq [mailto:[EMAIL PROTECTED]]
Sent: Monday, November 19, 2012 10:20 AM
To: [EMAIL PROTECTED]
Subject: Re: a question on NameNode

Hello Andy,

    If you have not disabled speculative execution, then your second assumption is correct.

Regards,
    Mohammad Tariq
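
For reference, speculative execution can be toggled per job. A minimal sketch, assuming the MRv2 property names mapreduce.map.speculative and mapreduce.reduce.speculative (older MRv1 releases use mapred.map.tasks.speculative.execution and mapred.reduce.tasks.speculative.execution); the same properties can also be set in mapred-site.xml for a cluster-wide default:

import org.apache.hadoop.conf.Configuration;

public class SpeculationConfig {
    public static void main(String[] args) {
        Configuration conf = new Configuration();

        // Turn off backup (speculative) attempts for map and reduce tasks
        // before a job is submitted with this configuration.
        conf.setBoolean("mapreduce.map.speculative", false);
        conf.setBoolean("mapreduce.reduce.speculative", false);

        // Defaults to true when the property is not set anywhere.
        System.out.println("map speculation enabled: "
                + conf.getBoolean("mapreduce.map.speculative", true));
    }
}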
On Mon, Nov 19, 2012 at 8:44 PM, Kartashov, Andy <[EMAIL PROTECTED]> wrote:
Thank you, Kai. One more question, please.

Does MapReduce run tasks on redundant blocks?

Say you have only 1 block of data replicated 3 times, one copy on each of three DataNodes: block 1 - DN1 / block 1 (replica #1) - DN2 / block 1 (replica #2) - DN3

Will MR attempt:
a.  to start 3 Map tasks (one per replicated block) and execute them all

b.  to start 3 Map tasks (one per replicated block) and drop the other two as soon as one of the three executes successfully

c.  to start only 1 Map task (for just one block, ignoring the replica copies) and attempt to start another one (on one of the replica blocks) if and only if the initially running task (say, on DN1) fails

Thanks,
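
To see the relationship the question above is getting at, here is a rough sketch against the FileSystem API (the path /user/andy/A.txt is made up for illustration). Each block reports the hosts of all of its replicas, and with the default FileInputFormat and block-sized splits one input split, hence one map task, is created per block; the replica host list is used as a locality hint and for re-running a failed or speculative attempt:

import java.util.Arrays;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockReplicaInspector {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Made-up path; point this at any file in your cluster.
        Path file = new Path("/user/andy/A.txt");
        FileStatus status = fs.getFileStatus(file);

        // One BlockLocation per block; getHosts() lists the DataNodes
        // holding every replica of that block.
        BlockLocation[] blocks =
                fs.getFileBlockLocations(status, 0, status.getLen());
        for (BlockLocation block : blocks) {
            System.out.println("offset=" + block.getOffset()
                    + " length=" + block.getLength()
                    + " replicas on " + Arrays.toString(block.getHosts()));
        }
    }
}

For the single-block file in the example this prints one line listing three hosts, i.e. one map task with three candidate locations.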

From: Kai Voigt [mailto:[EMAIL PROTECTED]]
Sent: Monday, November 19, 2012 10:01 AM

To: [EMAIL PROTECTED]
Subject: Re: a question on NameNode
On 19.11.2012, at 15:43, "Kartashov, Andy" <[EMAIL PROTECTED]> wrote:

So, what if DN2 is down, i.e. it is not sending any block report? Then the NN (I guess) will figure out that it has 2 blocks (3, 4) that have no home and that (without replication) it has no way of reconstructing the file A.txt. It must report an error then.

One major feature of HDFS is its redundancy. Blocks are stored more than once (three times by default), so chances are good that another DataNode will have that block and report it during the safe mode phase. So the file will be accessible.

Kai

--
Kai Voigt
[EMAIL PROTECTED]
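
As a small illustration of that redundancy, a sketch (the path is again made up) that reads a file's replication factor through the FileSystem API and asks for an extra copy; the NameNode satisfies the request asynchronously as DataNodes send their block reports:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Made-up path; dfs.replication defaults to 3 for new files.
        Path file = new Path("/user/andy/A.txt");
        FileStatus status = fs.getFileStatus(file);
        System.out.println(file + " replication factor: "
                + status.getReplication());

        // Request a fourth copy; the extra replica is scheduled in the
        // background once live DataNodes have reported the block.
        boolean accepted = fs.setReplication(file, (short) 4);
        System.out.println("setReplication accepted: " + accepted);
    }
}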
