Re: change hdfs block size for file existing on HDFS
Hi Anurag,

The easiest option would be to set dfs.block.size to 128 MB in your MapReduce job and rewrite the files with it.
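For reference, here is a minimal sketch of such a job against the Hadoop 2.x mapreduce API, assuming plain-text input; the class name RewriteBlocks and the paths below are hypothetical. Note that dfs.block.size takes a value in bytes, so 128 MB is 134217728 (on Hadoop 1.x the same property applies, though job construction differs slightly):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class RewriteBlocks {

        // Pass each line through unchanged; NullWritable keys keep
        // TextOutputFormat from prepending byte offsets to the output.
        public static class PassThroughMapper
                extends Mapper<LongWritable, Text, NullWritable, Text> {
            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws java.io.IOException, InterruptedException {
                context.write(NullWritable.get(), value);
            }
        }

        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // dfs.block.size is specified in bytes: 128 MB = 134217728
            conf.setLong("dfs.block.size", 128L * 1024 * 1024);

            Job job = Job.getInstance(conf, "rewrite with 128 MB blocks");
            job.setJarByClass(RewriteBlocks.class);
            job.setMapperClass(PassThroughMapper.class);
            job.setNumReduceTasks(0); // map-only: one output file per input split
            job.setOutputKeyClass(NullWritable.class);
            job.setOutputValueClass(Text.class);

            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

You could run it as, say, hadoop jar rewriteblocks.jar RewriteBlocks /user/anurag/in /user/anurag/out (hypothetical paths) and swap the directories once you have verified the output. For jobs that use ToolRunner, the same setting can also be passed on the command line as -Ddfs.block.size=134217728.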

------Original Message------
From: Anurag Tangri
To: [EMAIL PROTECTED]
ReplyTo: [EMAIL PROTECTED]
Subject: change hdfs block size for file existing on HDFS
Sent: Jun 26, 2012 11:07

Hi,
We have a situation where all of our files were written with a 64 MB block size.
I want to change these files (mainly the output of map jobs) to 128 MB blocks.

What would be a good way to migrate these files from 64 MB to 128 MB blocks?

Thanks,
Anurag Tangri

Regards
Bejoy KS

Sent from handheld, please excuse typos.