Replication, block size, etc. are all per-file, purely client-supplied
properties. They either take their defaults from the client's
configuration, or are set directly via an API argument override.
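To illustrate, the client-side defaults live in the client's own hdfs-site.xml; a minimal sketch (the values here are hypothetical, chosen to match the scenario discussed below):

```xml
<configuration>
  <!-- Client-side default replication factor (illustrative value) -->
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <!-- Client-side default block size; named dfs.blocksize in newer releases -->
  <property>
    <name>dfs.block.size</name>
    <value>67108864</value> <!-- 64 MB -->
  </property>
</configuration>
```

Either default can also be overridden per file through the API, e.g. the FileSystem.create(Path, overwrite, bufferSize, replication, blockSize) overload, or on the command line with a -D flag such as `hadoop fs -D dfs.replication=2 -put localfile /path`.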
On Sun, Jul 14, 2013 at 4:14 PM, varun kumar <[EMAIL PROTECTED]> wrote:
> What Shumin said is correct: the Hadoop configuration was overridden by
> the client application.
> We faced a similar issue, where the default replication factor was set
> to 2 in the Hadoop configuration, but whenever the client application
> wrote a file, it ended up with 3 copies in the Hadoop cluster. On
> checking the client application, its default replication factor was 3.
> On Sun, Jul 14, 2013 at 4:51 AM, Shumin Guo <[EMAIL PROTECTED]> wrote:
>> I think the client-side configuration will take effect.
>> On Jul 12, 2013 11:50 AM, "Shalish VJ" <[EMAIL PROTECTED]> wrote:
>>> Suppose the block size set in the configuration file on the client
>>> side is 64MB, the block size on the namenode side is 128MB, and the
>>> block size on the datanode side is something else.
>>> Please advise: if the client is writing a file to HDFS, which
>>> property would take effect?
> Varun Kumar.P