Jean-Marc Spaggiari 2012-11-22, 16:27
Jean-Marc Spaggiari 2012-11-22, 16:48
Harsh J 2012-11-22, 17:25
Jean-Marc Spaggiari 2012-11-22, 17:33
Harsh J 2012-11-22, 17:45
Again, thanks a lot for all those details.
I read the previous link and I totally understand the HA NameNode. I
already have a zookeeper quorum (3 servers) that I will be able to
re-use. However, I'm running HBase 0.94.2 which is not yet compatible
(I think) with Hadoop 2.0.x. So I will have to go with a non-HA
NameNode until I can migrate to a stable 0.96 HBase version.
Can I "simply" add one directory to dfs.name.dir and restart
my namenode? Will it populate this new directory with all the required
information, or do I need to copy the data from the existing one into
the new one before I restart? Also, does it need a fast transfer rate?
Or will an external hard drive (quick to move to another server if
required) be enough?
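For reference, dfs.name.dir takes a comma-separated list of directories, and the NameNode mirrors its image and edits into every listed directory. A sketch of a redundant setup in hdfs-site.xml (the paths below are purely illustrative):

```xml
<!-- hdfs-site.xml: the NameNode writes fsimage/edits to every directory
     in this list, so each entry is a full redundant copy.
     Directory names here are examples only. -->
<property>
  <name>dfs.name.dir</name>
  <value>/data/1/dfs/nn,/data/2/dfs/nn,/mnt/nfs/dfs/nn</value>
</property>
```

Note that a newly listed empty directory is generally not auto-populated on a 1.x restart; copying the contents of an existing name directory into the new one before restarting is the safe approach.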
2012/11/22, Harsh J <[EMAIL PROTECTED]>:
> Please follow the tips provided at
> In short, if you use a non-HA NameNode setup:
> - Yes, the NN is a vital persistence point in running HDFS, and its
> data should be redundantly stored for safety.
> - You should, in production, configure your NameNode's image and edits
> disk (dfs.name.dir in 1.x+, or dfs.namenode.name.dir in 0.23+/2.x+) to
> be a dedicated one with adequate free space for gradual growth, and
> should configure multiple disks (with one off-machine NFS point highly
> recommended for easy recovery) for adequate redundancy.
> If you instead use an HA NameNode setup (I'd highly recommend doing
> this since it is now available), the presence of more than one
> NameNode and the shared journal mount or quorum setup automatically
> act as safeguards for the FS metadata.
> On Thu, Nov 22, 2012 at 11:03 PM, Jean-Marc Spaggiari
> <[EMAIL PROTECTED]> wrote:
>> Hi Harsh,
>> Thanks for pointing me to this link. I will take a close look at it.
>> So with 1.x and 0.23.x, what's the impact on the data if the namenode
>> server's hard drive dies? Is there any critical data stored locally? Or do I
>> simply need to build a new namenode, start it, and restart all my
>> datanodes to get my data back?
>> I can deal with my application not being available, but losing data
>> is a bigger issue.
>> 2012/11/22, Harsh J <[EMAIL PROTECTED]>:
>>> Hey Jean,
>>> Neither the 1.x nor the 0.23.x release line has NameNode HA features.
>>> The current 2.x releases carry HA-NN abilities, and this is documented
>>> On Thu, Nov 22, 2012 at 10:18 PM, Jean-Marc Spaggiari
>>> <[EMAIL PROTECTED]> wrote:
>>>> Replying to myself ;)
>>>> By digging a bit more I figured out that the 1.0 version is older than
>>>> the 0.23.4 version, and that backup nodes are in 0.23.4. Secondary
>>>> namenodes in 1.0 are now deprecated.
>>>> I'm still a bit mixed up on how to achieve HA for the namenode
>>>> (1.0 or 0.23.4), but I will continue to dig around the internet.
>>>> 2012/11/22, Jean-Marc Spaggiari <[EMAIL PROTECTED]>:
>>>>> I'm reading a bit about hadoop and I'm trying to increase the HA of my
>>>>> current cluster.
>>>>> Today I have 8 datanodes and one namenode.
>>>>> By reading here: http://www.aosabook.org/en/hdfs.html I can see that a
>>>>> Checkpoint node might be a good idea.
>>>>> So I'm trying to start a checkpoint node. I looked at the Hadoop
>>>>> online doc. There is a link to describe the command usage ("For
>>>>> command usage, see namenode.") but this link is not working. Also, if I
>>>>> try hadoop-daemon.sh start namenode -checkpoint as described in the
>>>>> documentation, it's not starting.
>>>>> So I'm wondering, is there anywhere I can find up-to-date
>>>>> documentation about the checkpoint node? I will most probably try the
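For what it's worth, per the HDFS user guide for the 0.23.x/2.x line, the checkpoint node is started through the hdfs script rather than via hadoop-daemon.sh with a -checkpoint flag; a sketch (assuming a configured cluster and the hdfs/hadoop scripts on PATH):

```shell
# 0.23.x/2.x: run a checkpoint node (per the HDFS user guide)
hdfs namenode -checkpoint

# 1.x has no checkpoint node; its periodic-checkpointing role is played
# by the secondary namenode, started as a daemon:
hadoop-daemon.sh start secondarynamenode
```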