Re: Block size
XG,

The newer default is 128 MB [HDFS-4053]. The minimum, however, can be
as low as io.bytes.per.checksum (default: 512 bytes) if the user so
wishes. To set an administrative lower limit that prevents such low
values from being used, see the config introduced via HDFS-4305.
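For reference, a minimal hdfs-site.xml sketch of that limit. The property
name is the one introduced by HDFS-4305; the 1 MB value below is just an
illustration, not a recommendation:

    <!-- Reject attempts to create files with a block size below 1 MB -->
    <property>
      <name>dfs.namenode.fs-limits.min-block-size</name>
      <value>1048576</value>
    </property>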

On Sat, Jan 4, 2014 at 11:38 AM, Zhao, Xiaoguang
<[EMAIL PROTECTED]> wrote:
> As I am new to HDFS, I was told that the minimum block size is 64 MB. Is that
> correct?
>
> XG
>
> On Jan 4, 2014, at 3:12, "German Florez-Larrahondo" <[EMAIL PROTECTED]> wrote:
>
> Also note that the block size in recent releases is actually called
> "dfs.blocksize" as opposed to "dfs.block.size", and that you can set it per
> job as well. In that scenario, just pass it as an argument to your job (e.g.
> hadoop bla -D dfs.blocksize=134217728).
>
>
>
> Regards
>
>
>
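A minimal sketch of that per-job override, assuming a ToolRunner-based job so
the generic -D option is actually parsed (the jar, class, and path names below
are hypothetical placeholders):

    # Write this job's output with a 128 MB block size.
    # Works only if the job uses ToolRunner/GenericOptionsParser.
    hadoop jar myjob.jar MyJob -D dfs.blocksize=134217728 /input /output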
> From: David Sinclair [mailto:[EMAIL PROTECTED]]
> Sent: Friday, January 03, 2014 10:47 AM
> To: [EMAIL PROTECTED]
> Subject: Re: Block size
>
>
>
> If you want all new files to have a different block size, change
> dfs.block.size in hdfs-site.xml to the value you would like.
>
>
>
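For illustration, a minimal hdfs-site.xml sketch of that cluster-wide default
(134217728 bytes = 128 MB is just an example value; files already in HDFS keep
the block size they were written with):

    <!-- Default block size for newly written files -->
    <property>
      <name>dfs.blocksize</name> <!-- dfs.block.size in older releases -->
      <value>134217728</value>
    </property>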
> On Fri, Jan 3, 2014 at 11:37 AM, Kurt Moesky <[EMAIL PROTECTED]> wrote:
>
> I see the default block size for HDFS is 64 MB. Is this a value that can be
> changed easily?
>
>

--
Harsh J