HDFS dev mailing list: Problem reading HDFS block size > 1GB


Re: Problem reading HDFS block size > 1GB
It breaks at 2^31 bytes = 2 GiB. Any block size smaller than that should work.

Brian
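
The 2 GiB boundary matches the range of a signed 32-bit integer (Java's int): any block size or intra-block offset held in an int wraps negative at 2^31. A minimal standalone sketch of that wraparound, illustrative only and not the actual HDFS code path:

    public class BlockSizeLimit {
        public static void main(String[] args) {
            long limit = 1L << 31;                  // 2^31 bytes = 2 GiB
            System.out.println((int) (limit - 1));  // 2147483647: largest value an int can hold
            System.out.println((int) limit);        // -2147483648: wraps negative, hence the breakage
        }
    }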

On Sep 29, 2009, at 4:14 PM, Vinay Setty wrote:

> Hi Owen,
> Thank you for the quick reply. Can you please tell me the exact limit
> with which HDFS is known to work?
>
> Vinay
>
> On Tue, Sep 29, 2009 at 8:20 PM, Owen O'Malley <[EMAIL PROTECTED]> wrote:
>
>> On Sep 29, 2009, at 10:59 AM, Vinay Setty wrote:
>>
>>> We are running the Yahoo distribution of Hadoop, based on Hadoop
>>> 0.20.0-2787265, on a 10-node cluster with the OpenSUSE Linux operating
>>> system. We have HDFS configured with a block size of 5 GB (this is for
>>> our experiments).
>>
>> There is a known limitation in HDFS restricting blocks to less than
>> 2^31 bytes. Fixing it would be tedious, and no one has signed up to
>> take a pass at it.
>>
>> -- Owen
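
For experiments like the 5 GB configuration above, the practical workaround is to keep the per-file block size just under the 2^31-byte limit. A minimal sketch against the 0.20-era FileSystem API, assuming an HDFS client configuration on the classpath; the path, buffer size, and replication factor are illustrative:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CreateWithMaxBlockSize {
        public static void main(String[] args) throws Exception {
            // Largest block size safely under the 2^31-byte limit, and still a
            // multiple of the 512-byte checksum chunk:
            long blockSize = (1L << 31) - 512;

            FileSystem fs = FileSystem.get(new Configuration());
            // Per-file block size via the explicit create() overload
            // (path, overwrite, bufferSize, replication, blockSize);
            // the path here is hypothetical:
            FSDataOutputStream out = fs.create(
                new Path("/experiments/bigblocks.dat"),
                true,          // overwrite if present
                64 * 1024,     // write buffer size in bytes
                (short) 3,     // replication factor
                blockSize);
            out.close();
        }
    }

The cluster-wide default can be set the same way through dfs.block.size (in bytes) in hdfs-site.xml, again keeping the value below 2^31.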
