Re: can the parameters dfs.block.size and dfs.replication be different from one file to the other
Hello Shahab,

Thanks for the reply. Typically, to invoke the HDFS client, I use
"bin/hadoop dfs ...", but the command you used is "hadoop fs ...", which
makes me wonder whether that is the Hadoop 2.* client syntax. Could you
clarify whether "-D fs.local.block.size" is supported in Hadoop 1.1 or not?
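For reference, the per-file write-time settings discussed in this thread can be sketched as below. This is untested and makes assumptions: a Hadoop 1.x client on the PATH, `local_name`/`remote_location` as placeholder paths from the thread, and `dfs.block.size` as the property key (whether `dfs.block.size` or `fs.local.block.size` is the effective key can differ by version; Hadoop 2.x renames it `dfs.blocksize`):

```shell
# Sketch only: assumes a Hadoop 1.x client; paths are placeholders.

# Pass block size (in bytes) and replication per file at write time:
hadoop fs -D dfs.block.size=134217728 -D dfs.replication=2 \
    -put local_name remote_location

# Replication (unlike block size) can also be changed after the file exists:
hadoop fs -setrep -w 2 remote_location

# Check what was recorded (%o = block size, %r = replication):
hadoop fs -stat "%o %r" remote_location
```

Note that 134217728 is 128 * 1024 * 1024, i.e. a 128 MB block.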

Thank you!

Jun

On Tue, Sep 10, 2013 at 11:38 AM, Shahab Yunus <[EMAIL PROTECTED]> wrote:

> "can be set at the time I load the file to the HDFS (that is, it is the
> client side setting)? "
> I don't think you can do this while reading. These are done at the time of
> writing.
>
> You can do it like this (the example is for CLI as evident):
>
> hadoop fs -D fs.local.block.size=134217728 -put local_name remote_location
>
> The same is applicable to the replication property.
>
> So given that, I think you would have to modify FileOutputFormat (and
> the other 'writing' classes) to allow these to be configurable at the
> time files are being generated by M/R.
>
> Regards,
> Shahab
>
>
> On Tue, Sep 10, 2013 at 2:08 PM, Jun Li <[EMAIL PROTECTED]> wrote:
>
>> Hi,
>>
>> I am trying to evaluate MapReduce with different settings. I wonder
>> whether the following two HDFS parameters:
>>
>> * dfs.block.size
>> * dfs.replication
>>
>> can be set at the time I load the file into HDFS (that is, as a
>> client-side setting), or whether they are system-wide settings that
>> cannot be changed per HDFS client invocation.
>>
>>
>> I am using Hadoop 1.1.2 (the recent stable release) rather than the new
>> Hadoop 2.x. From reading the Cloudera documentation, I wonder whether,
>> even if such parameters can be set per HDFS client, that is only
>> supported after a certain Hadoop version.
>>
>> Thank you!
>>
>> Jun
>>
>>
>