HDFS >> mail # user >> changing the block size


Rita 2011-02-03, 12:35
Ayon Sinha 2011-02-03, 16:45
Allen Wittenauer 2011-02-03, 22:40
Rita 2011-02-06, 16:50
Ayon Sinha 2011-02-06, 17:14
Bharath Mundlapudi 2011-02-06, 19:25
Rita 2011-02-06, 22:24
Ayon Sinha 2011-02-06, 22:31
Rita 2011-02-06, 22:35
Ayon Sinha 2011-02-06, 22:34
Re: changing the block size

On Feb 6, 2011, at 2:24 PM, Rita wrote:
> So, what I did was decommission a node, remove all of its data (rm -rf
> data.dir) and stopped the hdfs process on it. Then I made the change to
> conf/hdfs-site.xml on the data node and then I restarted the datanode. I
> then ran a balancer to take effect and I am still getting 64MB files instead
> of 128MB. :-/
Right.

As previously mentioned, changing the block size does not change the blocks of previously written files. In other words, changing the block size does not act as a merging function at the datanode level. To convert pre-existing files, you'll need to copy them to a new location (so they are rewritten with the new block size), delete the old ones, then mv the copies back.
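A rough sketch of that copy-delete-move cycle, assuming a hypothetical directory /data/logs and a Hadoop client of that era, where the per-file block size can be overridden on the command line with -D dfs.block.size (later releases rename it dfs.blocksize):

```shell
# Example paths; substitute your own. 134217728 bytes = 128 MB.
SRC=/data/logs           # existing files written with 64MB blocks
TMP=/data/logs.rewrite   # scratch location for the rewritten copy

# Copy: each destination file is created with the block size given here,
# since the block size is fixed per file at write time.
hadoop fs -D dfs.block.size=134217728 -cp "$SRC" "$TMP"

# Sanity check: confirm the copies report 128MB blocks before deleting anything.
hadoop fsck "$TMP" -files -blocks | head

# Swap the rewritten copy into place (-rmr is the old-style recursive delete).
hadoop fs -rmr "$SRC"
hadoop fs -mv "$TMP" "$SRC"
```

For large directory trees, distcp with the same -D override is the usual alternative to fs -cp, since it parallelizes the copy across the cluster.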
Bharath Mundlapudi 2011-02-07, 00:45
Bharath Mundlapudi 2011-02-07, 00:53