Jean-Marc Spaggiari 2012-12-01, 02:11
Sorry about that. My fault.
I had put this in the core-site.xml file, but it should be in hdfs-site.xml...
I moved it and it's now working fine.
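For reference, a minimal sketch of where the property belongs (the paths below are placeholders, not the actual ones from this thread):

```xml
<!-- hdfs-site.xml: dfs.name.dir belongs here, not in core-site.xml,
     since it is read by the HDFS daemons rather than the common layer. -->
<property>
  <name>dfs.name.dir</name>
  <!-- comma-separated list; /mnt/usb is a placeholder for the USB mount -->
  <value>/var/hadoop/dfs/name,/mnt/usb/dfs/name</value>
</property>
```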
2012/11/30, Jean-Marc Spaggiari <[EMAIL PROTECTED]>:
> Is there a way to ask Hadoop to display its parameters?
> I have updated the property as follows:
> But even if I stop/start Hadoop, nothing gets written to the USB
> drive. So I'm wondering if there is a command line like bin/hadoop
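One way to check which parameters a running NameNode actually loaded (rather than what the files on disk say) is its /conf servlet, served on the NN web UI port (50070 by default in 1.x). The sketch below inlines a sample of that XML so the example is self-contained; in practice you would pipe `curl -s http://<nn-host>:50070/conf` into the same grep:

```shell
# Grep a /conf-style XML dump for the effective dfs.name.dir value.
# The XML here is a stand-in for what the NameNode's /conf servlet returns.
cat <<'EOF' | grep -A1 '<name>dfs.name.dir</name>'
<configuration>
<property><name>fs.default.name</name><value>hdfs://nn:9000</value></property>
<property><name>dfs.name.dir</name>
<value>/var/hadoop/dfs/name,/mnt/usb/dfs/name</value></property>
</configuration>
EOF
```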
> 2012/11/22, Jean-Marc Spaggiari <[EMAIL PROTECTED]>:
>> Perfect. Thanks again for your time!
>> I will first add another drive on the Namenode because this will take
>> 5 minutes. Then I will read about the migration from 1.0.3 to 2.0.x
>> and most probably will use the zookeeper solution.
>> This will take more time, so it will be done over the weekend.
>> I lost 2 hard drives this week (2 datanodes), so I'm now a bit
>> concerned about the NameNode data. Just want to secure that a bit
>> 2012/11/22, Harsh J <[EMAIL PROTECTED]>:
>>> Jean-Marc (Sorry if I've been spelling your name wrong),
>>> 0.94 does support Hadoop-2 already, and works pretty well with it, if
>>> that is your only concern. You only need to use the right download (or
>>> if you compile, use the -Dhadoop.profile=23 maven option).
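The compile option mentioned above would be passed to Maven roughly like this (a sketch, run from an HBase 0.94 source checkout; flags besides the profile switch are illustrative):

```shell
# Build HBase 0.94 against the Hadoop 2 line via the hadoop.profile switch.
# -DskipTests only shortens the build; drop it for a fully verified build.
mvn clean install -DskipTests -Dhadoop.profile=23
```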
>>> You will need to restart the NameNode for changes to the
>>> dfs.name.dir property to take effect. A reasonably fast disk helps
>>> keep edit log writes quick (each write is only a few bytes), but a
>>> large or SSD-class disk is not a requirement. An external disk
>>> would work fine too (instead of an NFS mount), as long as it is reliable.
>>> You do not need to copy data manually - just ensure that your NameNode
>>> process user owns the directory and it will auto-populate the empty
>>> directory on startup.
>>> Operationally speaking, if one of the two disks fails, the NN Web UI
>>> (and the metrics as well) will indicate this (see the bottom of the NN
>>> UI page for an example of what I am talking about), and the NN will
>>> continue to run with the lone remaining disk. But it's not a good idea
>>> to let it run for too long without fixing/replacing the disk, for you
>>> will be losing out on
>>> On Thu, Nov 22, 2012 at 11:59 PM, Jean-Marc Spaggiari
>>> <[EMAIL PROTECTED]> wrote:
>>>> Hi Harsh,
>>>> Again, thanks a lot for all those details.
>>>> I read the previous link and I totally understand the HA NameNode. I
>>>> already have a zookeeper quorum (3 servers) that I will be able to
>>>> re-use. However, I'm running HBase 0.94.2 which is not yet compatible
>>>> (I think) with Hadoop 2.0.x. So I will have to go with a non-HA
>>>> NameNode until I can migrate to a stable 0.96 HBase version.
>>>> Can I "simply" add one directory to dfs.name.dir and restart
>>>> my namenode? Is it going to feed all the required information in this
>>>> directory? Or do I need to copy the data of the existing one in the
>>>> new one before I restart it? Also, does it need a fast transfer rate?
>>>> Or will an external hard drive (quick to move to another server if
>>>> required) be enough?
>>>> 2012/11/22, Harsh J <[EMAIL PROTECTED]>:
>>>>> Please follow the tips provided at
>>>>> In short, if you use a non-HA NameNode setup:
>>>>> - Yes, the NN is a vital persistence point for a running HDFS, and
>>>>> its data should be redundantly stored for safety.
>>>>> - You should, in production, configure your NameNode's image and edits
>>>>> disk (dfs.name.dir in 1.x+, or dfs.namenode.name.dir in 0.23+/2.x+) to
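The redundancy being described typically means listing more than one directory in that property, so the NameNode writes its metadata to each of them; a sketch with placeholder paths:

```xml
<!-- hdfs-site.xml sketch: two comma-separated name directories. The NN
     mirrors the fsimage and edit log to both, so losing one disk does
     not lose the metadata. Paths are placeholders. -->
<property>
  <name>dfs.name.dir</name>
  <value>/data/1/dfs/name,/mnt/external/dfs/name</value>
</property>
```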