Re: is hadoop suitable for us?
You are going to have to put HDFS on top of your SAN.

The issue is that you introduce overhead and latencies by having attached storage rather than having the drives physically on the bus within the case.

Also, I'm going to assume that your SAN is using RAID.
One of the side effects of using a SAN is that you could reduce your replication factor from 3 to 2
(the SAN already protects you from disk failures if you're using RAID).
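As a rough illustration of that last point (my sketch, not part of the original message; the file path is made up), lowering the replication factor with the standard HDFS Java API looks like this:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LowerReplication {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Cluster-wide default for newly written files;
        // normally set via dfs.replication in hdfs-site.xml.
        conf.set("dfs.replication", "2");

        FileSystem fs = FileSystem.get(conf);
        // Lower the replication factor of an existing file (hypothetical path).
        fs.setReplication(new Path("/data/docs/part-00000"), (short) 2);
    }
}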
On May 17, 2012, at 11:10 PM, Pierre Antoine DuBoDeNa wrote:

> Did you use HDFS too, or store everything directly on the SAN?
>
> I don't have a figure in GB/TB (it might be about 2TB, so not really that
> "huge"), but there are more than 100 million documents to be processed. On a
> single machine we can currently process about 200,000 docs/day (several
> parsing, indexing, and metadata-extraction steps have to be done). So in the
> worst case we want to use the 50 VMs to distribute the processing.
>
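(A back-of-the-envelope check, using my own arithmetic and assuming linear scaling with no shared-storage bottleneck: 100,000,000 docs / 200,000 docs per day is about 500 machine-days, so 50 VMs working in parallel would need on the order of 500 / 50 = 10 days.)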
> 2012/5/17 Sagar Shukla <[EMAIL PROTECTED]>
>
>> Hi PA,
>>    In my environment, we had SAN storage and I/O was pretty good. So if
>> you have a similar environment, I don't see any performance issues.
>>
>> Just out of curiosity - what amount of data are you looking to
>> process?
>>
>> Regards,
>> Sagar
>>
>> -----Original Message-----
>> From: Pierre Antoine Du Bois De Naurois [mailto:[EMAIL PROTECTED]]
>> Sent: Thursday, May 17, 2012 8:29 PM
>> To: [EMAIL PROTECTED]
>> Subject: Re: is hadoop suitable for us?
>>
>> Thanks Sagar, Mathias and Michael for your replies.
>>
>> It seems we will have to go with Hadoop even if I/O will be slow due to
>> our configuration.
>>
>> I will try to post an update on how it works out in our case.
>>
>> Best,
>> PA
>>
>>
>>
>> 2012/5/17 Michael Segel <[EMAIL PROTECTED]>
>>
>>> The short answer is yes.
>>> The longer answer is that you will have to account for the latencies.
>>>
>>> There is more, but you get the idea.
>>>
>>> Sent from my iPhone
>>>
>>> On May 17, 2012, at 5:33 PM, "Pierre Antoine Du Bois De Naurois" <
>>> [EMAIL PROTECTED]> wrote:
>>>
>>>> We have a large amount of text files that we want to process and index
>>>> (plus applying other algorithms).
>>>>
>>>> The problem is that our configuration is shared-everything, while Hadoop
>>>> has a shared-nothing architecture.
>>>>
>>>> We have 50 VMs rather than actual servers, and these share a huge
>>>> central storage. So using HDFS might not be really useful: replication
>>>> will not help, and distributing files has no meaning as all files will
>>>> end up on the same storage anyway. I am afraid that I/O will be very
>>>> slow with or without HDFS. So I am wondering if it will really help us
>>>> to use Hadoop/HBase/Pig etc. to distribute and run several parallel
>>>> tasks, or whether it is "better" to install something different (which
>>>> I am not sure what). We heard myHadoop is better for this kind of
>>>> configuration; do you have any clue about it?
>>>>
>>>> For example, we now have a central MySQL database to check whether we
>>>> have already processed a document, and we keep several metadata fields
>>>> there. Soon we will have to distribute it as there is not enough space
>>>> on one VM, but will Hadoop/HBase be useful for that? We don't want to
>>>> do any complex join/sort of the data; we just want to query whether a
>>>> document has already been processed and, if not, add it along with
>>>> several of its metadata fields.
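For reference (my sketch, not from the thread; the table name, column family, and document id are hypothetical, and the code uses the classic HTable client API), that "have we seen this document?" check could look roughly like this in HBase:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class DocRegistry {
    public static void main(String[] args) throws Exception {
        // Hypothetical table keyed by document id, with one "meta" column family.
        HTable table = new HTable(HBaseConfiguration.create(), "documents");

        String docId = "doc-00042";               // hypothetical id
        Get probe = new Get(Bytes.toBytes(docId));

        if (!table.exists(probe)) {
            // Not processed yet: record it together with some metadata.
            Put put = new Put(Bytes.toBytes(docId));
            put.add(Bytes.toBytes("meta"), Bytes.toBytes("source"), Bytes.toBytes("/path/to/doc"));
            put.add(Bytes.toBytes("meta"), Bytes.toBytes("status"), Bytes.toBytes("processed"));
            table.put(put);
        }
        table.close();
    }
}

If two workers can race on the same document id, HTable.checkAndPut can make the test-and-insert atomic.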
>>>>
>>>> We heard Sun Grid, for example, is another way to go, but it's
>>>> commercial. We are somewhat lost, so any help/ideas/suggestions are
>>>> appreciated.
>>>>
>>>> Best,
>>>> PA
>>>>
>>>>
>>>>
>>>> 2012/5/17 Abhishek Pratap Singh <[EMAIL PROTECTED]>
>>>>
>>>>> Hi,
>>>>>
>>>>> For your question whether Hadoop can be used without HDFS, the answer
>>>>> is yes. Hadoop can be used with any kind of distributed file system.
>>>>> But I'm not able to understand the problem statement clearly enough to
>>>>> offer my point of view.
>>>>> Are you processing text files and saving them in a distributed
>>>>> database?
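As a minimal sketch of that last point (my example, not from the thread; the mount path is hypothetical and assumes every VM sees the SAN at the same path), pointing the Hadoop client at a shared POSIX filesystem instead of HDFS looks roughly like this:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class SharedStorageFs {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical SAN mount shared by all VMs.
        // The property is fs.default.name in Hadoop 1.x, fs.defaultFS in 2.x.
        conf.set("fs.default.name", "file:///mnt/san/hadoop-data");

        FileSystem fs = FileSystem.get(conf);
        System.out.println("Working against: " + fs.getUri());
    }
}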