Re: Is it safe to manually copy BLK files?
The short answer is no. If you want to decommission a datanode, the
safest way is to put the hostnames of the datanodes you want to shut
down into a file on the namenode. Next, set the dfs.hosts.exclude
parameter to point to that file. Finally, run hadoop dfsadmin
-refreshNodes.
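
Concretely, the sequence looks something like this (the exclude file
path and hostname are just examples; adjust them for your layout):

  # 1. On the namenode, list the hosts to decommission, one per line:
  echo "dn1.example.com" >> /etc/hadoop/conf/dfs.exclude

  # 2. In hdfs-site.xml, point dfs.hosts.exclude at that file (if the
  #    property wasn't set when the namenode started, a restart is
  #    needed for it to be picked up):
  <property>
    <name>dfs.hosts.exclude</name>
    <value>/etc/hadoop/conf/dfs.exclude</value>
  </property>

  # 3. Tell the namenode to re-read the exclude list:
  hadoop dfsadmin -refreshNodes

The node should show up as "Decommission In Progress" in the namenode
web UI and switch to "Decommissioned" once its blocks have been copied
elsewhere; only then is it safe to take it down.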

As an FYI, I think you misunderstood the dfs.replication parameter.
The setting specifies the total number of copies of each block that
you want, not the number of additional copies. If you want only a
single copy of every block, it should be set to 1, not 0.
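
For example, in hdfs-site.xml:

  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>

Note that dfs.replication is a client-side default that applies to
newly written files; to change the replication of files that already
exist, you'd use hadoop fs -setrep.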

Hope that helps.

-Joey

On Mon, May 30, 2011 at 6:44 PM, Rodrigo Vera <[EMAIL PROTECTED]> wrote:
>
> In my current setup I have the replication factor set to 0, and I need
> to take down a machine in my cluster.
>
> Is it safe to manually copy the blk files (along with their meta files)
> from dfs/dn/current to another node?
>
> Greets

--
Joseph Echeverria
Cloudera, Inc.
443.305.9434