HDFS >> mail # user >> changing the block size


Re: changing the block size
The answer depends on what you are trying to achieve. Assuming you are trying to store a file in HDFS using put or copyFromLocal:
You do not need to restart the entire cluster; restarting just the NameNode is sufficient.
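For reference, the property being discussed is typically set in conf/hdfs-site.xml. A minimal sketch of the fragment (the value is in bytes; 268435456 = 256 MB):

```xml
<!-- hdfs-site.xml: hypothetical fragment setting the default block size -->
<property>
  <name>dfs.block.size</name>
  <value>268435456</value>
</property>
```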

hadoop-daemon.sh stop namenode
hadoop-daemon.sh start namenode
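Since the goal here is a put/copyFromLocal, it may be worth noting that the block size can also be passed per command via the generic -D option, without touching the cluster config at all. A sketch (the destination path is hypothetical, and the cluster command is shown commented out since it needs a running HDFS):

```shell
# dfs.block.size is specified in bytes: 256 MB = 268435456
BLOCK=$((256 * 1024 * 1024))
echo "$BLOCK"   # prints 268435456

# Hypothetical per-file override (requires a running cluster):
#   hadoop fs -D dfs.block.size=$BLOCK -put bigfile.dat /data/bigfile.dat
```

Files written this way get the larger block size regardless of the cluster-wide default.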

-Bharath
    
From: Rita <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]; Bharath Mundlapudi <[EMAIL PROTECTED]>
Cc:
Sent: Sunday, February 6, 2011 2:24 PM
Subject: Re: changing the block size
Bharath,
So I have to restart the entire cluster? Or do I just need to stop the namenode and then run start-dfs.sh?

Ayon,
So, what I did was decommission a node, remove all of its data (rm -rf on the data.dir), and stop the HDFS process on it. Then I made the change to conf/hdfs-site.xml on that data node and restarted the datanode. I then ran the balancer for the change to take effect, but I am still getting 64MB blocks instead of 128MB. :-/
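One gotcha worth flagging here: dfs.block.size only applies to files written after the change. HDFS does not rewrite the blocks of existing files, and the balancer only moves blocks between nodes, so files stored before the change keep their 64MB blocks. A sketch of how one might check (paths are hypothetical; the cluster commands need a running HDFS, so they are shown commented out):

```shell
# Inspect the actual block layout of a file (hypothetical path):
#   hadoop fsck /data/somefile -files -blocks
# Re-copying a file rewrites it with the currently configured block size:
#   hadoop distcp /data/old /data/new

# Sanity check of the expected count: with 128 MB blocks,
# a 2 GB file should split into 16 blocks.
BS=$((128 * 1024 * 1024))
FILE=$((2 * 1024 * 1024 * 1024))
echo $(( (FILE + BS - 1) / BS ))   # prints 16
```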
On Sun, Feb 6, 2011 at 2:25 PM, Bharath Mundlapudi <[EMAIL PROTECTED]> wrote:

>Can you tell us how you are verifying that it's not working?
>
>Edit dfs.block.size in conf/hdfs-site.xml, and restart the cluster.
>
>-Bharath
>
>
>
>
>From: Rita <[EMAIL PROTECTED]>
>To: [EMAIL PROTECTED]
>Cc:
>Sent: Sunday, February 6, 2011 8:50 AM
>
>Subject: Re: changing the block size
>
>
>
>Neither one worked.
>
>Is there anything else I can do? I always have problems like this with HDFS. It seems even the experts are guessing at the answers :-/
>
>
>
>On Thu, Feb 3, 2011 at 11:45 AM, Ayon Sinha <[EMAIL PROTECTED]> wrote:
>
>>Edit conf/hdfs-site.xml, then restart DFS. I believe it should be sufficient to restart the namenode only, but others can confirm.
>>
>>-Ayon
>>
>>
>>From: Rita <[EMAIL PROTECTED]>
>>To: [EMAIL PROTECTED]
>>Sent: Thu, February 3, 2011 4:35:09 AM
>>Subject: changing the block size
>>
>>
>>Currently I am using the default block size of 64MB. I would like to change it for my cluster to 256MB, since I deal with large files (over 2GB). What is the best way to do this?
>>
>>Which file do I have to make the change in? Does it have to be applied on the namenode or on each individual data node? And what has to be restarted: the namenode, the datanodes, or both?
>>
>>
>>
>>--
>>--- Get your facts first, then you can distort them as you please.--
      