Hadoop, mail # user - Re: NameNode low on available disk space


Re: NameNode low on available disk space
Harsh J 2013-01-23, 16:42
Random switching behavior can only be explained by fluctuating disk
space, I'd think. Are you running MR operations on the same disk (i.e. is it
part of mapred.local.dir as well)?
On Wed, Jan 23, 2013 at 9:24 PM, Mohit Vadhera <[EMAIL PROTECTED]
> wrote:

> The NN switches into safemode randomly, and then I run a command to leave
> safemode manually. I never got alerts for low disk space at the machine
> level, and I didn't see the space fluctuate from GBs to MBs.
>
>
>
>
>
> On Wed, Jan 23, 2013 at 9:10 PM, Harsh J <[EMAIL PROTECTED]> wrote:
>
>> Mohit,
>>
>> When do you specifically get the error at the NN? Does your NN
>> consistently not start with that error?
>>
>> Your local disk space availability can certainly fluctuate if you use the
>> same disk for MR and other activity which creates temporary files.
>>
>>
>> On Wed, Jan 23, 2013 at 9:01 PM, Mohit Vadhera <
>> [EMAIL PROTECTED]> wrote:
>>
>>> Can somebody answer me on this, please?
>>>
>>>
>>> On Wed, Jan 23, 2013 at 11:44 AM, Mohit Vadhera <
>>> [EMAIL PROTECTED]> wrote:
>>>
>>>> Thanks Guys. As you said, the level is already pretty low, i.e. 100 MB,
>>>> but in my case the root fs / has 14 G available. What can be the root
>>>> cause then?
>>>>
>>>> /dev/mapper/vg_operamast1-lv_root
>>>>                        50G   33G   14G  71% /
>>>>
>>>> As per logs.
>>>>     2013-01-21 01:22:52,217 WARN
>>>> org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker: Space
>>>> available on volume '/dev/mapper/vg_operamast1-lv_root' is 10653696, which
>>>> is below the configured reserved amount 104857600
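One thing worth noting about the log line above: the checker measured only about 10 MB free at the instant it ran, far below the 100 MB reserve, even though df showed 14G at another point in time, which supports the theory that something is transiently filling the disk. A quick sketch of the comparison the NameNodeResourceChecker performs, using the byte values taken from the log:

```python
# Values copied from the WARN line above (both in bytes).
available = 10_653_696    # free space on the volume when the checker ran
reserved = 104_857_600    # dfs.namenode.resource.du.reserved default: 1024 * 1024 * 100

# Convert to MB to make the gap obvious.
print(f"available: {available / 1024 / 1024:.1f} MB")   # ~10.2 MB
print(f"reserved:  {reserved / 1024 / 1024:.1f} MB")    # 100.0 MB

# The NN enters safemode when available < reserved.
print("enter safemode:", available < reserved)          # True
```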
>>>>
>>>>
>>>> On Wed, Jan 23, 2013 at 11:13 AM, Harsh J <[EMAIL PROTECTED]> wrote:
>>>>
>>>>> Hi again,
>>>>>
>>>>> Yes, you need to add it to hdfs-site.xml and restart the NN.
>>>>>
>>>>> > Thanks Harsh, do I need to add the parameters in hdfs-site.xml and
>>>>> > restart the namenode service?
>>>>> > +  public static final String  DFS_NAMENODE_DU_RESERVED_KEY = "dfs.namenode.resource.du.reserved";
>>>>> > +  public static final long    DFS_NAMENODE_DU_RESERVED_DEFAULT = 1024 * 1024 * 100; // 100 MB
>>>>>
>>>>>
>>>>> On Wed, Jan 23, 2013 at 10:12 AM, Harsh J <[EMAIL PROTECTED]> wrote:
>>>>>
>>>>>> Edit your hdfs-site.xml (or whatever place of config your NN uses) to
>>>>>> lower the value of property "dfs.namenode.resource.du.reserved". Create a
>>>>>> new property if one does not exist, and set the value of space to a
>>>>>> suitable level. The default itself is pretty low - 100 MB in bytes.
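For anyone finding this thread later, a minimal hdfs-site.xml fragment for this property might look like the following; the 50 MB value is only an illustrative example, not a recommendation:

```xml
<!-- hdfs-site.xml: lower the NameNode's reserved-space threshold.
     Value is in bytes; 52428800 = 50 * 1024 * 1024 (example only). -->
<property>
  <name>dfs.namenode.resource.du.reserved</name>
  <value>52428800</value>
</property>
```

Restart the NameNode after editing for the change to take effect.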
>>>>>>
>>>>>>
>>>>>> On Wed, Jan 23, 2013 at 9:13 AM, Mohit Vadhera <
>>>>>> [EMAIL PROTECTED]> wrote:
>>>>>>
>>>>>>> Ok Steve, I am forwarding my issue again to the list you mentioned.
>>>>>>> The version is Hadoop 2.0.0-cdh4.1.2.
>>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> The Namenode switches into safemode when it has low disk space on the
>>>>>>> root fs /, and I have to manually run a command to leave it. Below are
>>>>>>> the log messages for low space on the root / fs. Is there any parameter
>>>>>>> with which I can reduce the reserved amount?
>>>>>>>
>>>>>>> 2013-01-21 01:22:52,217 WARN
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker: Space
>>>>>>> available on volume '/dev/mapper/vg_lv_root' is 10653696, which is below
>>>>>>> the configured reserved amount 104857600
>>>>>>> 2013-01-21 01:22:52,218 WARN
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: NameNode low on
>>>>>>> available disk space. Entering safe mode.
>>>>>>> 2013-01-21 01:22:52,218 INFO org.apache.hadoop.hdfs.StateChange:
>>>>>>> STATE* Safe mode is ON.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Wed, Jan 23, 2013 at 2:50 AM, Steve Loughran <
>>>>>>> [EMAIL PROTECTED]> wrote:
>>>>>>>
>>>>>>>> [EMAIL PROTECTED] list
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Harsh J
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Harsh J
>>>>>
>>>>
>>>>
>>>
>>
>>
>> --
>> Harsh J
>>
>
>
--
Harsh J