
MapReduce >> mail # user >> Application errors with one disk on datanode getting filled up to 100%


Re: Application errors with one disk on datanode getting filled up to 100%
Sandeep/Mayank,

If you take a look at the volume selection parts of the code, you can
see it is simply round-robin. There is no way we would continuously select
the same disk, unless the other disks are deselected for errors (beyond the
tolerated count) or for space (due to reservation or lack of it). It's
better to monitor for a pattern and look for a misconfiguration, rather
than suspect a bug and simply accept the behavior.
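The round-robin behavior described above can be sketched roughly as follows. This is a simplified illustration, not the actual HDFS RoundRobinVolumeChoosingPolicy source; the class name and the free-space list are stand-ins:

```java
import java.util.Arrays;
import java.util.List;

// Simplified sketch of round-robin volume selection: each call picks the
// next volume in order, skipping a volume only when it lacks the
// requested space.
public class RoundRobinSketch {
    private int curVolume = 0;

    // Each entry is the free space (bytes) remaining on one data directory.
    public int chooseVolume(List<Long> freeSpace, long blockSize) {
        int start = curVolume;
        while (true) {
            int candidate = curVolume;
            curVolume = (curVolume + 1) % freeSpace.size();
            if (freeSpace.get(candidate) >= blockSize) {
                return candidate; // a disk is only skipped when it is full
            }
            if (curVolume == start) {
                throw new IllegalStateException("out of space on all volumes");
            }
        }
    }

    public static void main(String[] args) {
        RoundRobinSketch rr = new RoundRobinSketch();
        List<Long> disks = Arrays.asList(100L, 100L, 100L);
        // Successive writes rotate across all three disks.
        System.out.println(rr.chooseVolume(disks, 10)); // 0
        System.out.println(rr.chooseVolume(disks, 10)); // 1
        System.out.println(rr.chooseVolume(disks, 10)); // 2
        System.out.println(rr.chooseVolume(disks, 10)); // 0
    }
}
```

As the sketch shows, one disk can only be hit repeatedly if every other disk is being rejected for space or errors, which is why a misconfiguration is the first thing to rule out.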

Rahul,

Current HDFS versions have received better inter-disk balancing code,
which I've already seen in use. See
https://issues.apache.org/jira/browse/HDFS-1804 for more info.
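For reference, HDFS-1804 added an available-space-aware volume choosing policy. On versions that ship it, it can be enabled in hdfs-site.xml roughly like this (a sketch; verify the property name and class against your version's hdfs-default.xml before relying on it):

```xml
<!-- hdfs-site.xml: prefer volumes with more free space when disks are
     imbalanced, instead of pure round-robin selection. -->
<property>
  <name>dfs.datanode.fsdataset.volume.choosing.policy</name>
  <value>org.apache.hadoop.hdfs.server.datanode.fsdataset.AvailableSpaceVolumeChoosingPolicy</value>
</property>
```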
On Fri, Jun 14, 2013 at 4:45 PM, Sandeep L <[EMAIL PROTECTED]> wrote:

> Rahul,
>
> In general this issue happens sometimes in Hadoop; there is no single
> known reason for it.
> To mitigate it, you need to run the balancer at regular intervals.
>
> Thanks,
> Sandeep.
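Running the balancer at regular intervals, as suggested above, is usually done from cron. A sketch of such an entry follows; the paths, schedule, and threshold are illustrative assumptions, and `-threshold` is the allowed deviation (in percent) of each node's utilization from the cluster average:

```shell
# Illustrative crontab entry: run the HDFS balancer nightly at 2 AM,
# allowing each datanode to deviate up to 10% from the cluster-average
# utilization. Adjust the Hadoop path and log location for your install.
0 2 * * * /usr/lib/hadoop/bin/hadoop balancer -threshold 10 >> /var/log/hadoop/balancer.log 2>&1
```

Note that the balancer evens out block counts between datanodes; it does not rebalance blocks between the disks of a single datanode.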
>
> ------------------------------
> Date: Fri, 14 Jun 2013 16:39:02 +0530
> Subject: Re: Application errors with one disk on datanode getting filled
> up to 100%
> From: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
>
>
> No, as of this moment we've no ideas about the reasons for that behavior.
>
>
> On Fri, Jun 14, 2013 at 4:04 PM, Rahul Bhattacharjee <
> [EMAIL PROTECTED]> wrote:
>
> Thanks Mayank. Any clue on why only one disk was getting all the writes?
>
> Rahul
>
>
> On Thu, Jun 13, 2013 at 11:47 AM, Mayank <[EMAIL PROTECTED]> wrote:
>
> So we did a manual rebalance (followed instructions at:
> http://wiki.apache.org/hadoop/FAQ#On_an_individual_data_node.2C_how_do_you_balance_the_blocks_on_the_disk.3F)
> and also reserved 30 GB of space for non-DFS usage via
> dfs.datanode.du.reserved, and restarted our apps.
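The reservation mentioned above is set per datanode in hdfs-site.xml. A sketch with the 30 GB figure from this thread (the property takes a value in bytes):

```xml
<!-- hdfs-site.xml: keep 30 GB per volume out of DFS's reach,
     leaving headroom for non-DFS usage such as logs and tmp files. -->
<property>
  <name>dfs.datanode.du.reserved</name>
  <value>32212254720</value> <!-- 30 * 1024^3 bytes -->
</property>
```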
>
> Things have been going fine till now.
>
> Keeping fingers crossed :)
>
>
> On Wed, Jun 12, 2013 at 12:58 PM, Rahul Bhattacharjee <
> [EMAIL PROTECTED]> wrote:
>
> I have a few points to make; these may not be very helpful for the said
> problem.
>
> + The "All datanodes are bad" exception does not really point to a
> disk-space-full problem.
> + hadoop.tmp.dir acts as the base location for other Hadoop-related
> properties; not sure if any particular directory is created specifically.
> + Only one disk getting filled looks strange. The other disks were part
> of the configuration when the NN was formatted.
>
> Would be interesting to know the reason for this.
> Please keep posted.
>
> Thanks,
> Rahul
>
>
> On Mon, Jun 10, 2013 at 3:39 PM, Nitin Pawar <[EMAIL PROTECTED]> wrote:
>
> From the snapshot, you have around 3 TB left for writing data.
>
> Can you check each individual datanode's storage health?
> As you said, you have 80 servers writing to HDFS in parallel; I am not
> sure whether that could be an issue.
> As suggested in past threads, you can do a rebalance of the blocks, but
> that will take some time to finish and will not solve your issue right
> away.
>
> You can wait for others to reply. I am sure there will be far better
> solutions from experts for this.
>
>
> On Mon, Jun 10, 2013 at 3:18 PM, Mayank <[EMAIL PROTECTED]> wrote:
>
> No, it's not a map-reduce job. We have a Java app running on around 80
> machines which writes to HDFS. The error that I'd mentioned is being
> thrown by the application, and yes, we have the replication factor set
> to 3. Following is the status of HDFS:
>
> Configured Capacity : 16.15 TB
> DFS Used : 11.84 TB
> Non DFS Used : 872.66 GB
> DFS Remaining : 3.46 TB
> DFS Used% : 73.3 %
> DFS Remaining% : 21.42 %
> Live Nodes : 10
> Dead Nodes : 0
> Decommissioning Nodes : 0
> Number of Under-Replicated Blocks : 0
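As a quick sanity check, the summary numbers quoted above are internally consistent; the arithmetic below just recomputes the totals and percentages from the figures in the thread:

```java
// Arithmetic check of the NameNode summary quoted above.
public class CapacityCheck {
    public static void main(String[] args) {
        double configuredTb = 16.15;
        double dfsUsedTb = 11.84;
        double nonDfsTb = 872.66 / 1024.0; // ~0.85 TB
        double remainingTb = 3.46;

        // Used + non-DFS + remaining should add up to configured capacity.
        System.out.printf("total = %.2f TB%n",
                dfsUsedTb + nonDfsTb + remainingTb);       // ~16.15
        // Percentages as reported by the NameNode web UI.
        System.out.printf("used%% = %.1f%n",
                100 * dfsUsedTb / configuredTb);           // 73.3
        System.out.printf("remaining%% = %.2f%n",
                100 * remainingTb / configuredTb);         // 21.42
    }
}
```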
>
>
> On Mon, Jun 10, 2013 at 3:11 PM, Nitin Pawar <[EMAIL PROTECTED]> wrote:
>
> When you say the application errors out, does that mean your MapReduce
> job is erroring? In that case, apart from HDFS space, you will need to
> look at the mapred tmp directory space as well.

Harsh J