HDFS >> mail # user >> Changing dfs.block.size


J. Ryan Earl 2011-06-06, 19:09
Jeff Bean 2011-06-06, 19:29
Marcos Ortiz 2011-06-06, 19:53
Marcos Ortiz 2011-06-06, 19:56
J. Ryan Earl 2011-06-06, 21:12
Ayon Sinha 2011-06-06, 20:08
J. Ryan Earl 2011-06-06, 21:14
Re: Changing dfs.block.size

On Jun 6, 2011, at 12:09 PM, J. Ryan Earl wrote:

> Hello,
>
> So I have a question about changing dfs.block.size in
> $HADOOP_HOME/conf/hdfs-site.xml.  I understand that when files are created,
> block sizes can be set to something other than the default.  What happens if
> you change the block size on an existing HDFS installation?  Do newly created
> files get the new default block size while old files remain the same?

Yes.
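As the question notes, the block size can also be overridden per file at write time, without touching the cluster-wide default in hdfs-site.xml. A minimal sketch of that (the paths and the 128 MB figure are placeholders, and the `command -v` guard simply skips the call on a machine without the Hadoop CLI):

```shell
# dfs.block.size is given in bytes; 128 MB here is only an example value
BLOCK_SIZE=$((128 * 1024 * 1024))

if command -v hadoop >/dev/null; then
  # Upload one file with a non-default block size; the default configured
  # in hdfs-site.xml is left untouched for all other writers.
  hadoop fs -D dfs.block.size="$BLOCK_SIZE" -put localfile.dat /user/example/localfile.dat
fi
```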
>  Is there a way to change the
> block size of existing files?  I'm assuming you could write a MapReduce job to
> do it, but are there any built-in facilities?

You can use distcp to copy the files back onto the same filesystem in a new location; the copies will be written with the new block size. You can then move the new files to where the old files used to live.
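The distcp approach above might look roughly like this (the directory names and the 256 MB target are assumptions for illustration, and you would want to verify the copy before removing anything):

```shell
# Target block size for the rewritten copies, in bytes (256 MB as an example)
NEW_BLOCK_SIZE=$((256 * 1024 * 1024))

if command -v hadoop >/dev/null; then
  # Copy within the same filesystem; the copies land with the new block size
  hadoop distcp -D dfs.block.size="$NEW_BLOCK_SIZE" /data/old /data/new

  # After verifying the copy (e.g. with 'hadoop fsck /data/new -files -blocks'),
  # swap the rewritten files into the original location
  hadoop fs -rmr /data/old
  hadoop fs -mv /data/new /data/old
fi
```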