RE: One petabyte of data loading into HDFS within 10 min.
Siddharth Tiwari 2012-09-10, 19:22
Well, can't you load only the incremental data? The goal as stated seems quite unrealistic. The big guns have already spoken :P
Have a refreshing day !!!
"Every duty is holy, and devotion to duty is the highest form of worship of God.”
"Maybe other people will try to limit me but I don't limit myself"
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: RE: One petabyte of data loading into HDFS within 10 min.
Date: Mon, 10 Sep 2012 16:17:20 +0000
Well said Mike. Lots of “funny questions” around here lately…
From: Michael Segel [mailto:[EMAIL PROTECTED]]
Sent: Monday, September 10, 2012 4:50 AM
To: [EMAIL PROTECTED]
Cc: Michael Segel
Subject: Re: One petabyte of data loading into HDFS within 10 min.
On Sep 10, 2012, at 2:40 AM, prabhu K <[EMAIL PROTECTED]> wrote:
Thanks for the response.
We have loaded 100 GB of data into HDFS; it took 1 hour with the configuration below.
Each node (1 master machine, 2 slave machines):
500 GB hard disk.
3 quad-core CPUs.
Speed: 1333 MHz.
Now we are planning to load 1 petabyte of data (a single file) into Hadoop HDFS and a Hive table within 10-20 minutes. For this we need clarification on the points below.
Some say that I am sometimes too harsh in my criticisms so take what I say with a grain of salt...
You loaded 100GB in an hour using woefully underperforming hardware and are now saying you want to load 1PB in 10 mins.
I would strongly suggest that you first learn more about Hadoop. No really. Looking at your first machine, it's obvious that you don't really grok Hadoop and what it requires to achieve optimum performance. You couldn't even extrapolate any meaningful data from your current environment.
Secondly, I think you need to actually think about the problem. Did you mean PB or TB? Because your math seems to be off by a couple orders of magnitude.
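For reference, the gap Mike is pointing at can be checked with quick back-of-envelope arithmetic (a sketch, assuming decimal units and the figures quoted in the thread: 100 GB in 1 hour observed, 1 PB in 10 minutes requested):

```python
# Observed: 100 GB loaded in 1 hour. Requested: 1 PB in 10 minutes.
# Decimal units throughout (1 GB = 1e9 bytes, 1 PB = 1e15 bytes).
observed_Bps = 100e9 / 3600      # bytes/s actually achieved
required_Bps = 1e15 / 600        # bytes/s the stated goal implies

print(f"observed: {observed_Bps / 1e6:.0f} MB/s")    # ~28 MB/s
print(f"required: {required_Bps / 1e9:.0f} GB/s")    # ~1667 GB/s
print(f"gap: {required_Bps / observed_Bps:,.0f}x")   # 60,000x
```

A 60,000x shortfall is roughly four to five orders of magnitude, which is why the thread questions whether PB was really meant.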
A single file measured in PBs? That is currently impossible with today's (2012) technology. In fact, a single file measured in PBs won't exist within the next 5 years, and most likely not within the next decade. [Moore's law is all about CPU power, not disk density.]
Also take a look at networking.
ToR switch designs differ, but with current technology the fabric tends to max out around 40 Gb/s. What's the widest fabric on a backplane?
That's your first bottleneck because even if you had a 1PB of data, you couldn't feed it to the cluster fast enough.
Forget disk; look at PCIe-based memory. (Money no object, right?)
You still couldn't populate it fast enough.
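To put numbers on that network bottleneck (a sketch assuming decimal units and the 40 Gb/s fabric figure mentioned above, with zero protocol overhead):

```python
# Aggregate bandwidth needed to move 1 PB in 10 minutes, and how many
# fully saturated 40 Gb/s links that would take (ideal case, no overhead).
required_bps = 1e15 * 8 / 600            # bits/s needed in aggregate
links_40g = required_bps / 40e9          # count of saturated 40 Gb/s links

print(f"aggregate: {required_bps / 1e12:.1f} Tb/s")  # 13.3 Tb/s
print(f"40 Gb/s links needed: {links_40g:.0f}")      # 333
```

Hundreds of fully saturated top-of-rack uplinks, before counting HDFS replication traffic, is the sense in which the data couldn't be fed in fast enough.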
I guess Steve hit this nail on the head when he talked about this being a homework assignment.
High school maybe?
1. What system configuration is required for each of the 3 machines?
2. Hard disk size?
3. RAM size?
4. Motherboard?
5. Network cabling?
6. How many Gbps of InfiniBand are required?
Do we need a cloud computing environment for the same setup too?
Please suggest and help me on this.
On Fri, Sep 7, 2012 at 7:30 PM, Michael Segel <[EMAIL PROTECTED]> wrote:
Sorry, but you didn't account for the network saturation.
And why 1 GbE and not 10 GbE? Also, which version of Hadoop?
MapR works well bonding two 10 GbE ports, and with the right switch you could do OK.
Also 2 ToR switches... per rack. etc...
How many machines? 150? 300? more?
Then you don't talk about how much memory, CPUs, what type of storage...
Lots of factors.
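As a rough sizing sketch for the bonded 2 x 10 GbE setup Mike describes (assuming ideal line rate with no protocol or replication overhead, so a real cluster would need considerably more):

```python
# Node count needed just to absorb 1 PB in 10 minutes, if each node
# ingests over bonded 2 x 10 GbE at full line rate (ideal case).
per_node_Bps = 2 * 10e9 / 8          # 2.5 GB/s per node
per_node_10min = per_node_Bps * 600  # 1.5 TB per node in 10 minutes

nodes = 1e15 / per_node_10min
print(f"~{nodes:.0f} nodes just to absorb the bytes")  # ~667
```

That is why the "150? 300? more?" question matters: even under ideal assumptions the machine count lands in the high hundreds.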
I'm sorry to interrupt this mental masturbation about how to load 1PB in 10min.
There are a lot more questions that should be asked that weren't.
Hey, but look. It's a Friday, so I suggest some pizza, beer, and then take it to a whiteboard.
But what do I know? In a different thread, I'm talking about how to tame HR and Accounting so they let me play with my team Ninja!
On Sep 5, 2012, at 9:56 AM, zGreenfelder <[EMAIL PROTECTED]> wrote: