Re: hadoop namenode recovery
Hello,
I have another idea regarding solving the single point of failure in
Hadoop.
What if I set up multiple Name Nodes running behind a load balancer in the
cluster? That way all of the Name Nodes sit behind the single IP address of
the load balancer, which addresses the failure problem: if one Name Node
goes down, the others are still working.

Please suggest; this is just a vague idea!

Thanks
On Mon, Jan 14, 2013 at 7:31 PM, Panshul Whisper <[EMAIL PROTECTED]> wrote:

> Hello Bejoy,
>
> Thank you for the information.
> Regarding the Hadoop HA 2.x releases: they are still in the alpha phase, so I
> cannot use them in production. For my requirements the cluster is supposed to
> be extremely available; availability is the highest concern. I have looked into
> different distributions as well, such as Hortonworks, but they have the same
> single point of failure and are also waiting for Apache to release Hadoop 2.x.
>
> I was wondering if I can somehow configure two Name Nodes on the same
> network with the same IP address, where clients are redirected to the second
> Name Node only after the failure of the primary; that might help resolve this
> problem automatically. All the slaves connect to the Name Node through a
> network alias in their /etc/hosts file (roughly as sketched below the link).
> I am trying to implement something like this in the cluster:
> http://networksandservers.blogspot.de/2011/04/failover-clustering-i.html
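> To make the idea concrete, this is roughly what I have in mind; the hostnames,
> addresses and port below are hypothetical, not my actual setup:
>
>     # /etc/hosts on every slave (hypothetical addresses)
>     10.0.0.10   namenode-alias    # points at the current primary NN;
>                                   # repoint the alias to the standby on failure
>
>     <!-- core-site.xml on all nodes (1.x property name) -->
>     <property>
>       <name>fs.default.name</name>
>       <value>hdfs://namenode-alias:9000</value>
>     </property>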
>
> Please suggest if this is possible.
>
> Thanks for your time.
> Regards,
> Panshul.
>
>
> On Mon, Jan 14, 2013 at 7:11 PM, <[EMAIL PROTECTED]> wrote:
>
>> Hi Panshul
>>
>> The SecondaryNameNode is better described as a checkpoint node. At periodic
>> intervals it merges the edit log from the NN with the fsimage to prevent the
>> edit log from growing too large. That is its main functionality.
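>>
>> The checkpointing is driven by properties along these lines (Hadoop 1.x
>> names; the values below are only examples, tune them for your cluster):
>>
>>     <!-- core-site.xml -->
>>     <property>
>>       <name>fs.checkpoint.period</name>
>>       <value>3600</value>   <!-- seconds between checkpoints -->
>>     </property>
>>     <property>
>>       <name>fs.checkpoint.dir</name>
>>       <value>/data/hadoop/namesecondary</value>   <!-- where the SNN keeps the merged image -->
>>     </property>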
>>
>> At any point the SNN would have the latest checkpointed fsimage but not the
>> up-to-date edit log. If the NN goes down and you don't have an updated copy of
>> the edit log, you can use the fsimage from the SNN for restoring; in that case
>> you lose the transactions that were only in the edit log.
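>>
>> Roughly, the restore goes like this (1.x; the exact steps and paths depend on
>> your setup):
>>
>>     # on the machine that will act as the replacement NN
>>     # 1. point fs.checkpoint.dir at the SNN's checkpoint data (copy it over if needed)
>>     # 2. start with an empty dfs.name.dir and import the checkpoint
>>     hadoop namenode -importCheckpoint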
>>
>> The SNN is not a backup NN; it is just a checkpoint node.
>>
>> Two or more NNs are not possible in the 1.x releases, but federation makes
>> it possible in the 2.x releases. Federation serves a different purpose, though;
>> you should currently be looking at Hadoop HA in the 2.x releases.
>> Regards
>> Bejoy KS
>>
>> Sent from remote device, Please excuse typos
>> ------------------------------
>> From: Panshul Whisper <[EMAIL PROTECTED]>
>> Date: Mon, 14 Jan 2013 19:04:24 -0800
>> To: <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
>> Subject: Re: hadoop namenode recovery
>>
>> Thank you for the reply.
>>
>> Is there a way to configure my cluster to switch to the Secondary Name Node
>> automatically in case the Primary Name Node fails? When I run my current
>> Hadoop, I see both the primary and the secondary Name Nodes running. I was
>> wondering what that Secondary Name Node is for, and where it is configured.
>> I was also wondering whether it is possible to have two or more Name Nodes
>> running in the same cluster.
>>
>> Thanks,
>> Regards,
>> Panshul.
>>
>>
>> On Mon, Jan 14, 2013 at 6:50 PM, <[EMAIL PROTECTED]> wrote:
>>
>>> Hi Panshul,
>>>
>>> Usually, for reliability, multiple dfs.name.dir locations are configured, one
>>> of which is a remote location such as an NFS mount. That way, even if the NN
>>> machine crashes entirely, you still have the fsimage and edit log on the NFS
>>> mount, and they can be used to reconstruct the NN.
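>>>
>>> For example, something along these lines (the paths are only examples,
>>> adjust them for your environment):
>>>
>>>     <!-- hdfs-site.xml (1.x property name) -->
>>>     <property>
>>>       <name>dfs.name.dir</name>
>>>       <!-- comma-separated list: the NN writes fsimage and edits to every
>>>            directory, so the NFS copy survives a crash of the NN machine -->
>>>       <value>/data/1/dfs/nn,/mnt/nfs/dfs/nn</value>
>>>     </property>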
>>>
>>>
>>> Regards
>>> Bejoy KS
>>>
>>> Sent from remote device, Please excuse typos
>>> ------------------------------
>>> From: Panshul Whisper <[EMAIL PROTECTED]>
>>> Date: Mon, 14 Jan 2013 17:25:08 -0800
>>> To: <[EMAIL PROTECTED]>
>>> Reply-To: [EMAIL PROTECTED]
>>> Subject: hadoop namenode recovery
>>>
>>> Hello,
>>>
>>> Is there a standard way to prevent failure due to a Namenode crash in a
>>> Hadoop cluster?

Regards,
Ouch Whisper
010101010101