Re: What should I do with a 48-node cluster
Ted Dunning 2012-12-23, 02:48
The InfiniBand connection to the disk unit might help with the I/O issue.
The memory is a bit tight, but it should be possible to make it work.
On Sat, Dec 22, 2012 at 5:11 PM, Edward Capriolo <[EMAIL PROTECTED]> wrote:

> You do not absolutely need more RAM. You do not know your workload yet. A
> standard Hadoop machine has 8 disks, 16 GB RAM, and 8 cores.
>
> In the old days, you would dedicate map slots and reduce slots: 3 map and 1
> reduce in your case. Give each of them 256 MB of RAM via the child JVM opts.
> You would need more RAM in that case if you had 8 cores, but you do not.
>
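A concrete sketch of that slot layout, assuming Hadoop 1.x-era property names
in mapred-site.xml; the values mirror the 3 map / 1 reduce, 256 MB suggestion
above and are illustrative, not tuned:

  <!-- mapred-site.xml: 3 map slots, 1 reduce slot, 256 MB per child JVM -->
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>3</value>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>1</value>
  </property>
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx256m</value>
  </property>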
> In the end, blades are not the ideal Hadoop machine, because users usually
> want many disks for lots of I/O, but it is OK for kicking around.
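The disk point comes down to HDFS parallelism: the DataNode spreads blocks
across every directory listed in dfs.data.dir, so each additional spindle adds
independent I/O. A sketch, again assuming Hadoop 1.x property names and
hypothetical mount points:

  <!-- hdfs-site.xml: one data directory per physical disk; paths hypothetical -->
  <property>
    <name>dfs.data.dir</name>
    <value>/disk1/dfs/data,/disk2/dfs/data,/disk3/dfs/data</value>
  </property>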
>
>
> On Sat, Dec 22, 2012 at 7:53 PM, Mark Kerzner <[EMAIL PROTECTED]> wrote:
>
>> Edward, thank you for the practical recommendations. I am going to visit
>> the cluster in its current home in a few days, and I will keep this in
>> mind. Meanwhile, my specs are below
>>
>> 48 HP 1U blades, each with two 2.44 GHz dual-core AMD Opterons, Cisco
>> InfiniBand NICs, and 4 GB RAM
>>
>> 1 HP cluster controller with SCSI controller
>>
>> 1 HP RSA20 storage array with approx. 1 TB of storage
>>
>> Cisco InfiniBand 20 Gbit optical network router
>>
>> In Compaq racks with four 30-amp, 220-volt circuits
>>
>> All wiring and cabling.
>>
>> I am worried that 4 GB of RAM on the data nodes will not be enough.
>> Upgrading the master nodes is bearable, but any memory upgrade across the
>> complete cluster gets expensive when multiplied by 50.
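A rough budget for that worry, assuming the stock ~1 GB default heaps for the
DataNode and TaskTracker daemons plus the 3 map + 1 reduce slots at 256 MB
suggested above:

  1 GB (DataNode) + 1 GB (TaskTracker) + 4 x 256 MB (task JVMs) = 3 GB

which leaves roughly 1 GB of a 4 GB node for the OS and page cache: tight, but
consistent with the "possible to make it work" reading at the top of the thread.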
>>
>> Thank you. Sincerely,
>> Mark
>>
>> On Fri, Dec 21, 2012 at 5:50 PM, Edward Capriolo <[EMAIL PROTECTED]> wrote:
>>
>>> A three-year-old blade center is OK. A three-year-old blade is probably a
>>> 64-bit machine: 2 to 4 GB RAM, 2 SCSI disks, maybe two sockets with two
>>> cores each. Two blade centers are about 8U, or a quarter cabinet, and you
>>> can find a hosting provider in your price range.
>>>
>>> Especially if you can get the hardware at a low initial cost, you crush
>>> the cloud providers. Buying your own gear takes about a year to recoup
>>> costs compared with Amazon's pay-per-use model.
>>>
>>> Blade centers usually draw 20 to 30 amps fully loaded, though, so if
>>> you're crushing word count at home, your power bill is gonna get expensive.
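Rough numbers behind that warning: a blade center pulling a full 30 A at 220 V
draws about 6.6 kW continuously, or roughly 4,750 kWh per month; at an assumed
residential rate of $0.12/kWh, that is about $570 per month per blade center.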
>>>
>>>
>>>
>>> On Friday, December 21, 2012, Mark Kerzner <[EMAIL PROTECTED]>
>>> wrote:
>>> > True!
>>> >
>>> > I am thinking of either my (small) office, or actually hosting for
>>> under $500/month.
>>> >
>>> > On Fri, Dec 21, 2012 at 1:37 PM, Lance Norskog <[EMAIL PROTECTED]>
>>> wrote:
>>> >>
>>> >> You will also be raided by the DEA: too much power for a residence.
>>> >>
>>> >> On 12/20/2012 07:56 AM, Ted Dunning wrote:
>>> >>
>>> >>
>>> >>
>>> >> On Thu, Dec 20, 2012 at 7:38 AM, Michael Segel <
>>> [EMAIL PROTECTED]> wrote:
>>> >>>
>>> >>> While Ted ignores that the world is going to end before X-Mas, he
>>> does hit the crux of the matter head on.
>>> >>> If you don't have a place to put it, the cost of setting it up would
>>> kill you, not to mention that you can get newer, better-suited hardware
>>> for less.
>>> >>> Having said that... if you live in the frozen tundra, like Montana,
>>> or some place like... er, Canada or Siberia... it may make more sense
>>> to use it to heat your home.
>>> >>> Just think of the side benefits from all that potential additional
>>> compute power....  :-P
>>> >>
>>> >> I can say from experience that the sound of a bunch of servers in a
>>> home setting is a novel one that is probably unlike anything you have known
>>> before.
>>> >> If you haven't experienced that, then taking on these servers could
>>> be classified as novelty-seeking behavior.