@Nick: In my initial production release, I don't think we should have
compaction pain since we won't be doing heavy real-time writes. Now I'm
thinking of using 1-2 spindles out of 10 for the temp dirs of Hadoop and
HBase. My cluster will only have 2-4 Map slots per node. Thanks for your help.
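To make that concrete, here is a rough sketch of the two properties involved, assuming the dedicated spindle is mounted at /data1 (the mount point and subdirectories are illustrative assumptions, not my actual paths):

```xml
<!-- core-site.xml: point Hadoop's temp dir at the dedicated spindle -->
<!-- /data1 is an assumed mount point, for illustration only -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/data1/hadoop/tmp</value>
</property>

<!-- hbase-site.xml: likewise for HBase's temp dir -->
<property>
  <name>hbase.tmp.dir</name>
  <value>/data1/hbase/tmp</value>
</property>
```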
PS: corrected the typo of "Roll" to "Role"
On Mon, Dec 17, 2012 at 5:46 PM, Nick Dimiduk <[EMAIL PROTECTED]> wrote:
> On Mon, Dec 17, 2012 at 5:20 PM, anil gupta <[EMAIL PROTECTED]> wrote:
> > @Nick: I am using HBase 0.92.1, CompactionTool.java is part of HBase 0.96
> > as per https://issues.apache.org/jira/browse/HBASE-7253.
> Fair enough; I grepped against trunk.
> > I have 10 disks on my slave nodes that will primarily be used for serving
> > HBase queries (very little MR). So, I was trying to distribute my disk I/O
> > load evenly among the disks. Will it be fine if I just dedicate 1 disk for
> > hadoop.tmp.dir, or is even 1 disk overkill for hbase.tmp.dir?
> The dedicated IO could help to alleviate compaction pain -- the question
> is, will you experience compaction pain? Does your workload include
> frequent mutations (Puts, Deletes)? If the answer is 'no' (as your
> description above implies), you'll likely not benefit very much from the
> dedicated platter; better use of the spindle will likely be for the
> DataNode. You can probably co-locate tmp.dir with a low-intensity resource.
> Then again, if you have 10 drives, why not?
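For reference, Nick's suggestion of giving the spindles to the DataNode maps to dfs.data.dir (the Hadoop 1.x property name contemporary with 0.92; the /dataN mount points below are assumptions for illustration):

```xml
<!-- hdfs-site.xml: spread DataNode block storage across the remaining spindles -->
<!-- /data2, /data3, /data4 are assumed mount points; list one entry per disk -->
<property>
  <name>dfs.data.dir</name>
  <value>/data2/dfs/data,/data3/dfs/data,/data4/dfs/data</value>
</property>
```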
Thanks & Regards,