The short answer is no. If you want to decommission a datanode, the
safest way is to put the hostnames of the datanodes you want to shut
down into a file on the namenode. Next, set the dfs.hosts.exclude
parameter to point to that file. Finally, run hadoop dfsadmin
-refreshNodes so the namenode picks up the change and begins
decommissioning.
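A rough sketch of the sequence, assuming your config lives in
/etc/hadoop/conf (the paths and the excludes filename here are just
illustrative):

    # on the namenode: list the hosts to decommission, one per line
    $ cat /etc/hadoop/conf/excludes
    datanode03.example.com

    # hdfs-site.xml: point dfs.hosts.exclude at that file
    <property>
      <name>dfs.hosts.exclude</name>
      <value>/etc/hadoop/conf/excludes</value>
    </property>

    # tell the namenode to re-read the include/exclude lists
    $ hadoop dfsadmin -refreshNodes

The namenode then re-replicates that node's blocks onto the remaining
datanodes. Watch the web UI (or hadoop dfsadmin -report) until the
node shows up as "Decommissioned" before you actually shut it down.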
As an FYI, I think you misunderstood the dfs.replication parameter.
It specifies the total number of copies of each block, not the number
of additional copies. If you want a single copy of every block, it
should be set to 1, not 0.
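For example, in hdfs-site.xml (a sketch; note that 1 means one copy
total, so you have no redundancy if a disk or node dies):

    <property>
      <name>dfs.replication</name>
      <value>1</value>
    </property>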
Hope that helps.
On Mon, May 30, 2011 at 6:44 PM, Rodrigo Vera <[EMAIL PROTECTED]> wrote:
> On my current setup I have the replication factor set to 0 and I need to take
> down a machine on my cluster.
> Is it safe to manually copy the blk files (along with their meta) from
> dfs/dn/current to another node?