Hadoop, mail # user - Re: NameNode low on available disk space


Harsh J 2013-01-23, 16:58
The logs display it in plain bytes. If the issue begins to occur only when you
start using Hadoop, then it's almost certainly MR using up the disk space
temporarily.
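
As a quick illustration of the bytes-vs-MB confusion, the two values from the
NameNodeResourceChecker log line quoted later in this thread convert as
follows (plain arithmetic, no Hadoop specifics assumed):

```python
# Byte values taken from the NameNodeResourceChecker log line in this thread.
available = 10_653_696   # space reported available on the volume, in bytes
reserved = 104_857_600   # configured reserved amount (the 100 MB default)

MIB = 1024 * 1024
print(f"available: {available / MIB:.1f} MiB")   # ~10.2 MiB
print(f"reserved:  {reserved / MIB:.0f} MiB")    # 100 MiB
print("below threshold:", available < reserved)  # True, so the NN enters safemode
```

So even though `df` shows gigabytes free on `/`, the checker compared roughly
10 MiB against the 100 MB reservation at the moment the log line was written.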

You could lower the threshold, or you could use a bigger disk (or more nodes)
for your trials.
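
If you do lower the threshold, the property discussed below goes into
hdfs-site.xml with a value in bytes. A hypothetical entry (50 MB chosen purely
as an illustrative value) might look like:

```xml
<property>
  <name>dfs.namenode.resource.du.reserved</name>
  <!-- Reserved space in bytes; 50 MB here as an example value -->
  <value>52428800</value>
</property>
```

The NameNode must be restarted for the change to take effect.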
On Wed, Jan 23, 2013 at 10:25 PM, Mohit Vadhera <
[EMAIL PROTECTED]> wrote:

> MR operations are running on the same machine. I checked for the parameter
> "mapred.local.dir" in my configuration directory /etc/hadoop/ but didn't find
> it. One question: is the disk space reserved size displayed in the logs in KB
> or MB? I am a layman on Hadoop. The link I followed to install is given below
>
>
> https://ccp.cloudera.com/display/CDH4DOC/Installing+CDH4+on+a+Single+Linux+Node+in+Pseudo-distributed+Mode
>
> Thanks,
>
>
>
>
> On Wed, Jan 23, 2013 at 10:12 PM, Harsh J <[EMAIL PROTECTED]> wrote:
>
>> A random switching behavior can only be explained by fluctuating disk
>> space, I'd think. Are you running MR operations on the same disk (i.e. is it
>> part of mapred.local.dir as well)?
>>
>>
>> On Wed, Jan 23, 2013 at 9:24 PM, Mohit Vadhera <
>> [EMAIL PROTECTED]> wrote:
>>
>>> The NN switches randomly into safemode, and then I run a command to leave
>>> safemode manually. I never got alerts for low disk space at the machine
>>> level, and I didn't see the space fluctuate from GBs to MBs.
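
The manual command referred to here is presumably the standard dfsadmin one:

```shell
# Check whether the NameNode is currently in safemode
hdfs dfsadmin -safemode get

# Force the NameNode to leave safemode
hdfs dfsadmin -safemode leave
```

Note that if the resource checker still sees the volume below the reserved
threshold, the NameNode will simply re-enter safemode on its next check.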
>>>
>>>
>>>
>>>
>>>
>>> On Wed, Jan 23, 2013 at 9:10 PM, Harsh J <[EMAIL PROTECTED]> wrote:
>>>
>>>> Mohit,
>>>>
>>>> When do you specifically get the error at the NN? Does your NN
>>>> consistently not start with that error?
>>>>
>>>> Your local disk space availability can certainly fluctuate if you use
>>>> the same disk for MR and other activity which creates temporary files.
>>>>
>>>>
>>>> On Wed, Jan 23, 2013 at 9:01 PM, Mohit Vadhera <
>>>> [EMAIL PROTECTED]> wrote:
>>>>
>>>>> Can somebody please answer me on this?
>>>>>
>>>>>
>>>>> On Wed, Jan 23, 2013 at 11:44 AM, Mohit Vadhera <
>>>>> [EMAIL PROTECTED]> wrote:
>>>>>
>>>>>> Thanks guys. As you said, the level is already pretty low, i.e. 100 MB,
>>>>>> but in my case the root fs / has 14 G available. What can be the root
>>>>>> cause then?
>>>>>>
>>>>>> /dev/mapper/vg_operamast1-lv_root
>>>>>>                        50G   33G   14G  71% /
>>>>>>
>>>>>> As per logs.
>>>>>>     2013-01-21 01:22:52,217 WARN
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker: Space
>>>>>> available on volume '/dev/mapper/vg_operamast1-lv_root' is 10653696, which
>>>>>> is below the configured reserved amount 104857600
>>>>>>
>>>>>>
>>>>>> On Wed, Jan 23, 2013 at 11:13 AM, Harsh J <[EMAIL PROTECTED]> wrote:
>>>>>>
>>>>>>> Hi again,
>>>>>>>
>>>>>>> Yes, you need to add it to hdfs-site.xml and restart the NN.
>>>>>>>
>>>>>>> > Thanks Harsh, do I need to add the parameters in hdfs-site.xml and
>>>>>>> restart the namenode service?
>>>>>>> > +  public static final String  DFS_NAMENODE_DU_RESERVED_KEY = "dfs.namenode.resource.du.reserved";
>>>>>>> > +  public static final long    DFS_NAMENODE_DU_RESERVED_DEFAULT = 1024 * 1024 * 100; // 100 MB
>>>>>>>
>>>>>>>
>>>>>>> On Wed, Jan 23, 2013 at 10:12 AM, Harsh J <[EMAIL PROTECTED]> wrote:
>>>>>>>
>>>>>>>> Edit your hdfs-site.xml (or whatever place of config your NN uses)
>>>>>>>> to lower the value of property "dfs.namenode.resource.du.reserved". Create
>>>>>>>> a new property if one does not exist, and set the value of space to a
>>>>>>>> suitable level. The default itself is pretty low - 100 MB in bytes.
>>>>>>>>
>>>>>>>>
>>>>>>>> On Wed, Jan 23, 2013 at 9:13 AM, Mohit Vadhera <
>>>>>>>> [EMAIL PROTECTED]> wrote:
>>>>>>>>
>>>>>>>>> Ok Steve. I am forwarding my issue again to the list that you
>>>>>>>>> suggested. The version is
>>>>>>>>>
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> The Namenode switches into safemode when it has low disk space on
>>>>>>>>> the root fs /, and I have to manually run a command to leave it.
>>>>>>>>> Below are the log messages for low space on the root / fs. Is there
>>>>>>>>> any parameter so that i can