Travis Crawford 2010-07-21, 19:45
Allen Wittenauer 2010-07-21, 21:01
Alex Loddengaard 2010-07-21, 21:09
Allen Wittenauer 2010-07-21, 21:15
On Wed, Jul 21, 2010 at 2:01 PM, Allen Wittenauer
<[EMAIL PROTECTED]> wrote:
> On Jul 21, 2010, at 12:45 PM, Travis Crawford wrote:
> > Does anyone else run into machines with overfull disks?
> It was a common problem when I was at Yahoo!. As the drives get more full, the NN starts getting slower and slower, since it is going to have problems with block placement.
> > Any tips on how to avoid getting into this situation?
> What we started to do was two-fold:
> a) During every maintenance, we'd blow away the mapred temp dirs. The TaskTracker does a very bad job of cleaning up after jobs and there is usually a lot of cruft. If you have a 'flat' disk/fs structure such that MR temp and HDFS are shared, this is a huge problem.
> b) Blowing away /tmp on a regular basis. Here at LI, I've got a perl script that I wrote that reads the output of ls /tmp, finds files/dirs older than 3 days, and removes them. Since pig is a little piggy and leaves a ton of useless data in /tmp, I often see 15TB or more disappear just by doing this.
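The same age-based cleanup can be done without parsing `ls` output, using find(1). A minimal sketch, assuming the 3-day cutoff described above; `TMP_ROOT` and the cutoff are illustrative. It is shown as a dry run -- swap `-print` for `-delete` (or pipe to `rm -rf` for directories) only once you trust the match:

```shell
#!/bin/sh
# Dry run: list top-level entries under TMP_ROOT last modified more
# than 3 days ago. TMP_ROOT defaults to /tmp, matching the thread.
TMP_ROOT="${TMP_ROOT:-/tmp}"
# -mindepth/-maxdepth 1: only direct children, not their contents
# -mtime +3: modification time more than 3 days in the past
find "$TMP_ROOT" -mindepth 1 -maxdepth 1 -mtime +3 -print
```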
> > /dev/cciss/c0d0 275G 217G 45G 83% /data/disk000
> The bigger problem is that Hadoop just really doesn't work well with such small filesystems. You might want to check your fs reserved size. You might be able to squeak out a bit more space that way too.
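For context on the reserved-size point: ext2/ext3 filesystems reserve 5% of blocks for root by default, which on filesystems this size is a nontrivial amount per disk. The arithmetic below is a back-of-the-envelope sketch using the 275G figure from the df output; the 5%-to-1% change is one common choice, applied per filesystem with `tune2fs -m`:

```shell
#!/bin/sh
# How much space a lower root reserve frees on one 275G filesystem.
disk_gb=275
default_pct=5    # ext2/ext3 default reserved-blocks percentage
lowered_pct=1    # a common lower setting for data-only disks
per_disk_gb=$(( disk_gb * (default_pct - lowered_pct) / 100 ))
echo "~${per_disk_gb} GB reclaimed per 275G filesystem"
# The actual change (run per filesystem, as root), e.g.:
#   tune2fs -m 1 /dev/cciss/c0d0
```

Multiplied across a few dozen disks per node, that adds up quickly.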
> > /dev/cciss/c0d14 275G 248G 14G 95% /data/disk014
> I'd probably shut down this datanode and manually move blocks off of this drive onto ...
> > /dev/cciss/c1d1p1 275G 184G 78G 71% /data/disk025
> > /dev/cciss/c1d2p1 275G 176G 86G 68% /data/disk026
> > /dev/cciss/c1d3p1 275G 178G 84G 68% /data/disk027
> > /dev/cciss/c1d4p1 275G 177G 85G 68% /data/disk028
> > /dev/cciss/c1d5p1 275G 179G 83G 69% /data/disk029
> > /dev/cciss/c1d6p1 275G 181G 81G 70% /data/disk030
> ... one of these.
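A sketch of what that manual move looks like, simulated with temp dirs rather than real volume paths (the real source/destination would be each volume's block directory, whose layout differs across Hadoop versions -- treat this as a sketch, not a procedure). Two things matter: the datanode must be stopped first, and a block's data file (`blk_N`) must travel together with its metadata file (`blk_N_G.meta`):

```shell
#!/bin/sh
# Simulated manual block move between two datanode volumes.
# SRC stands in for the full volume's block dir, DST for the
# emptier one; paths and block IDs here are made up.
SRC=$(mktemp -d)
DST=$(mktemp -d)
touch "$SRC/blk_1234" "$SRC/blk_1234_1001.meta"

for blk in "$SRC"/blk_*; do
  case "$blk" in *.meta) continue ;; esac   # handle data files; meta follows
  mv "$blk" "$DST"/
  for meta in "${blk}"_*.meta; do           # keep the .meta file with its block
    [ -e "$meta" ] && mv "$meta" "$DST"/
  done
done
```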
Thanks for the tips! Interestingly, there is some cruft that's built
up, but it's actually not that much total space.
We've had to shut datanodes down once before to manually move blocks
around, sounds like that's going to happen again this time too.
I'll file a jira about this, since it's come up twice now. Last time,
disks were added to existing cluster nodes. The balancer did not move
data around, as the nodes were all "balanced" -- even though 25 disks
eventually reached 100% usage while 5 disks were pretty much empty.
What would this feature look like? Datanodes already know how much
space is available per disk -- would it be appropriate to weight
less-full disks more heavily for writes? Of course, an empty disk
shouldn't get hammered with writes, so this would need to be a
preference for less-used disks rather than a hard rule.
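One way to get a "preference, not a hard rule" is a free-space-weighted random choice: each volume is picked with probability proportional to its free space, so emptier disks are favored without receiving every write. The datanode itself would do this in Java; the awk sketch below just illustrates the selection logic, with made-up free-space figures loosely echoing the df listing above:

```shell
#!/bin/sh
# choose_volume: read "<mount> <free-KB>" lines on stdin and print one
# mount, chosen with probability proportional to its free space.
# The seed argument makes the choice reproducible for a given input.
choose_volume() {
  awk -v seed="$1" '
    BEGIN { srand(seed) }
    { mount[NR] = $1; free[NR] = $2; total += $2 }
    END {
      r = rand() * total
      for (i = 1; i <= NR; i++) {
        r -= free[i]
        if (r <= 0) { print mount[i]; exit }
      }
      print mount[NR]   # guard against floating-point rounding
    }'
}

printf '%s\n' \
  '/data/disk014 14000000' \
  '/data/disk025 78000000' \
  '/data/disk026 86000000' | choose_volume 7
```

With these weights, disk014 still gets some writes, but roughly six times fewer than either of the emptier disks.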
Allen Wittenauer 2010-07-21, 22:06
Travis Crawford 2010-07-21, 23:47