HDFS user mailing list: Changing dfs.block.size


J. Ryan Earl 2011-06-06, 19:09
Re: Changing dfs.block.size
hadoop fs -setrep

Sent from my iPhone
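
(For reference: -setrep adjusts a file's replication factor rather than its block size. A typical invocation, with an illustrative path, looks like the following.)

    # Set replication to 3 for one file and wait for the change to complete (path is illustrative)
    hadoop fs -setrep -w 3 /user/jre/data.txt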

On Jun 6, 2011, at 12:09 PM, "J. Ryan Earl" <[EMAIL PROTECTED]> wrote:

> Hello,
>
> So I have a question about changing dfs.block.size in $HADOOP_HOME/conf/hdfs-site.xml.  I understand that when files are created, their block size can be set to something other than the default.  What happens if you change the block size on an existing HDFS installation?  Do newly created files get the new default block size while old files remain the same?  Is there a way to change the block size of existing files?  I'm assuming you could write a MapReduce job to do it, but are there any built-in facilities?
>
> Thanks,
> -JR
>
>
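
For context on the questions above: HDFS records the block size as a per-file attribute fixed at write time, so changing dfs.block.size in hdfs-site.xml only affects files created afterwards; existing files keep their original block size until they are rewritten. A minimal sketch of the usual options, assuming 0.20-era property names (dfs.block.size; later releases call it dfs.blocksize) and illustrative paths and values:

    <!-- hdfs-site.xml: default block size for newly created files (128 MB) -->
    <property>
      <name>dfs.block.size</name>
      <value>134217728</value>
    </property>

    # Write a single file with a non-default block size (256 MB), overriding the cluster default
    hadoop fs -D dfs.block.size=268435456 -put localfile /user/jre/localfile

    # Rewrite existing data so it picks up the new block size, e.g. by copying it
    hadoop distcp -D dfs.block.size=268435456 /user/jre/old /user/jre/old-rewritten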
Follow-up replies in this thread:

Jeff Bean 2011-06-06, 19:29
Marcos Ortiz 2011-06-06, 19:53
Marcos Ortiz 2011-06-06, 19:56
J. Ryan Earl 2011-06-06, 21:12
Ayon Sinha 2011-06-06, 20:08
J. Ryan Earl 2011-06-06, 21:14
Allen Wittenauer 2011-06-06, 22:05