Ha -- I just decommissioned some nodes today.
Add the nodes you'd like to decommission to the excludes file (search
hdfs-site.xml for its name; the file is usually called dfs.exclude).
Log in to your NN and issue: hadoop dfsadmin -refreshNodes
Watch the NN web interface until the decommissioning nodes are done.
Then you can remove them from dfs.exclude and refresh nodes again,
just to keep things cleaned up.
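The steps above can be sketched roughly as follows (the excludes-file
path and hostname are placeholders -- check the dfs.hosts.exclude
property in your own hdfs-site.xml, and run this against a live
cluster):

```shell
# 1. Add the host to the excludes file named in hdfs-site.xml
#    (path below is an assumption, not a standard location):
echo "dn-host-01.example.com" >> /etc/hadoop/conf/dfs.exclude

# 2. Tell the NameNode to re-read the includes/excludes files:
hadoop dfsadmin -refreshNodes

# 3. Watch progress; decommissioning nodes show up as
#    "Decommission in progress" until all blocks are re-replicated:
hadoop dfsadmin -report
```

Once the node reports "Decommissioned", it is safe to shut it down and
remove its entry from the excludes file.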
You should also do the same for the JobTracker; its commands are
similar: hadoop mradmin -refreshNodes
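The JobTracker side looks much the same -- a minimal sketch, assuming
mapred.hosts.exclude in mapred-site.xml points at the file below (path
and hostname are placeholders):

```shell
# Exclude the host from MapReduce task scheduling as well
# (file path is an assumption; check mapred.hosts.exclude):
echo "dn-host-01.example.com" >> /etc/hadoop/conf/mapred.exclude

# Tell the JobTracker to re-read its excludes file:
hadoop mradmin -refreshNodes
```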
Found in the big Hadoop book from O'Reilly Press (sp?).
On 6/13/13, Dhanasekaran Anbalagan <[EMAIL PROTECTED]> wrote:
> Hi Guys,
> Where can I find the decommission documentation? I need to understand
> how decommissioning really works, and where I can find the
> decommission logs.
> How to understand the decommission process:
> I am planning to remove one of the DataNodes in my cluster. The data
> is already replicated on two other nodes, so I want to remove the
> node without data loss.
> % How is the actual data moved or transferred to the other nodes?
> % When I start decommissioning, does the decommissioning node send
> its blocks to the other machines?
> % All the block info is available on the NameNode -- will the NN
> drive the re-replication?
> Also I am facing an issue: decommissioning is happening very slowly.
> For example, for 2 TB of data it takes more than 50 hrs, which seems
> quite abnormal. I believe I configured
> dfs.datanode.balance.bandwidthPerSec=1073741824 [1 GB per second]
> How do I debug this? Please guide me, guys.
> Did I learn something today? If not, I wasted it.