Re: Recovering the namenode from failure
I think he mentioned that the new NN has the same IP and hostname as the old
one, and that he used an actual checkpoint. All he has to do is start the DNs
back up again and they should report in fine.
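A minimal sketch of that step, assuming CDH4 package-based service names
(they may differ on other installs):

    # On each datanode host:
    sudo service hadoop-hdfs-datanode start

    # Back on the namenode, confirm the DNs have registered and the block
    # count is climbing toward the expected 32178:
    sudo -u hdfs hdfs dfsadmin -report
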
On Tue, May 21, 2013 at 10:03 PM, Michael Segel <[EMAIL PROTECTED]> wrote:

> I think what he's missing is to change the configurations to point to the
> new name node.
>
> It sounds like the new NN has a different IP address from the old NN so
> the DNs don't know who to report to...
>
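If that were the case, the fix would be on the datanode side: each DN
resolves the namenode address from fs.defaultFS in core-site.xml. A minimal
sketch, with a placeholder hostname and the default NN RPC port:

    <!-- core-site.xml on every datanode -->
    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://namenode-host:8020</value>
    </property>
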
> On May 21, 2013, at 11:23 AM, Todd Lipcon <[EMAIL PROTECTED]> wrote:
>
> Hi David,
>
> You shouldn't need to do anything to get your DNs to report in -- as best
> they can tell, it's the same NN. Do you see any error messages in the DN
> logs?
>
> -Todd
>
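A quick way to check, assuming the CDH4 default log location (the path is a
guess; adjust for your install):

    # On a datanode host, look for registration or connection errors:
    grep -iE 'error|exception|registr' \
        /var/log/hadoop-hdfs/hadoop-hdfs-datanode-*.log | tail -n 50
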
> On Tue, May 21, 2013 at 12:30 AM, David Parks <[EMAIL PROTECTED]> wrote:
>
>> I’m on CDH4, and trying to recover both the namenode and Cloudera Manager
>> VMs from HDFS after losing the namenode.
>>
>> All of our backup VMs are on HDFS, so for the moment I just want to hack
>> something together, copy the backup VMs off HDFS, and get on with properly
>> reconfiguring via CDH Manager.
>>
>> So I’ve installed a plain ol’ namenode on one of my cluster nodes and
>> started it with -importCheckpoint (with the data from the secondary NN).
>> This seems to have worked; I have a namenode web UI up which expects to
>> find 32178 blocks.
>>
>> But my plain namenode (on the same hostname and IP as the old namenode)
>> says that there are no datanodes in the cluster.
>>
>> What do I need in order to configure the datanodes to report their blocks
>> into this new namenode (same IP & hostname)?
>>
>> Thanks,
>>
>> David
>>
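As a rough sketch of the recovery sequence David describes (assuming
dfs.namenode.checkpoint.dir already points at a copy of the secondary NN's
checkpoint data; the backup path below is a made-up example):

    # Start the fresh NN once, importing the secondary's checkpoint:
    sudo -u hdfs hdfs namenode -importCheckpoint

    # Once the DNs report in and the NN leaves safemode, copy the backup
    # VMs off HDFS:
    sudo -u hdfs hdfs dfs -copyToLocal /backups/vms /tmp/vm-recovery
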
>
> --
> Todd Lipcon
> Software Engineer, Cloudera
>
--
Harsh J