HDFS >> mail # user >> changing the block size


Re: changing the block size
Can you tell us how you are verifying that it isn't working?
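One way to check what block size a file actually got (assuming a 0.20-era Hadoop CLI; the path here is hypothetical):

```shell
# Print the block size (in bytes) recorded for an existing HDFS file
hadoop fs -stat %o /user/rita/somefile

# Or list the file's blocks directly; a 2 GB file written with 64 MB
# blocks will show about 32 of them
hadoop fsck /user/rita/somefile -files -blocks
```

Note that existing files keep the block size they were written with; dfs.block.size only affects newly created files, so re-copying a file into HDFS is the usual way to test the new setting.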


Set dfs.block.size in conf/hdfs-site.xml, and restart the cluster.
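For example, a minimal sketch of the property (268435456 bytes = 256 MB, the size Rita asked about; dfs.block.size is the property name in Hadoop of this era, later renamed dfs.blocksize):

```xml
<!-- conf/hdfs-site.xml: default block size for newly written files -->
<property>
  <name>dfs.block.size</name>
  <!-- 256 MB; applies only to files created after the change -->
  <value>268435456</value>
</property>
```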

-Bharath
From: Rita <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Sent: Sunday, February 6, 2011 8:50 AM
Subject: Re: changing the block size
Neither one was working.

Is there anything I can do? I always have problems like this in hdfs. It seems even experts are guessing at the answers :-/

On Thu, Feb 3, 2011 at 11:45 AM, Ayon Sinha <[EMAIL PROTECTED]> wrote:

>Change the block size in conf/hdfs-site.xml.
>
>Then restart dfs. I believe it should be sufficient to restart the namenode only, but others can confirm.
>
>-Ayon
>
>
>From: Rita <[EMAIL PROTECTED]>
>To: [EMAIL PROTECTED]
>Sent: Thu, February 3, 2011 4:35:09 AM
>Subject: changing the block size
>
>
>Currently I am using the default block size of 64 MB. I would like to change it for my cluster to 256 MB, since I deal with large files (over 2 GB). What is the best way to do this?
>
>What file do I have to make the change in? Does it have to be applied on the namenode or on each individual datanode? What has to be restarted: the namenode, the datanodes, or both?
>
>
>
>--
>--- Get your facts first, then you can distort them as you please.--
>
>
--
--- Get your facts first, then you can distort them as you please.--
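As a back-of-the-envelope illustration of why larger blocks suit the large files in the question: the namenode tracks every block, so fewer, larger blocks mean less namenode metadata (and fewer map tasks per file). A quick sketch using the sizes from the thread (the helper function is ours, not part of Hadoop):

```python
def blocks_needed(file_size_bytes, block_size_bytes):
    """Number of HDFS blocks a file of the given size occupies."""
    # Ceiling division: a partial final block still counts as one block.
    return -(-file_size_bytes // block_size_bytes)

GB = 1024 ** 3
MB = 1024 ** 2

# A 2 GB file with the default 64 MB block size vs. the proposed 256 MB:
print(blocks_needed(2 * GB, 64 * MB))   # -> 32 blocks
print(blocks_needed(2 * GB, 256 * MB))  # -> 8 blocks
```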
      