MapReduce, mail # user - Stopping a single Datanode


Terry Healy 2012-08-16, 19:11
Nitin Pawar 2012-08-16, 19:21
Harsh J 2012-08-16, 19:49
Terry Healy 2012-08-16, 20:56

Re: Stopping a single Datanode
Mohammad Tariq 2012-08-16, 21:07
Hello Terry,

    You can run the command over ssh on the node where you want to stop the DN.
Something like this:
cluster@ubuntu:~/hadoop-1.0.3$ bin/hadoop-daemon.sh --config /home/cluster/hadoop-1.0.3/conf/ stop datanode
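
To run it from another machine, a minimal remote invocation could look like the following, assuming passwordless ssh and the same install path on the target node (datanode1 is a placeholder hostname):

cluster@ubuntu:~$ ssh cluster@datanode1 '/home/cluster/hadoop-1.0.3/bin/hadoop-daemon.sh --config /home/cluster/hadoop-1.0.3/conf/ stop datanode'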

Regards,
    Mohammad Tariq

On Fri, Aug 17, 2012 at 2:26 AM, Terry Healy <[EMAIL PROTECTED]> wrote:

> Thanks guys. I will need the decommission in a few weeks, but for now
> just a simple system move. I found out the hard way not to have
> masters and slaves files in the conf directory of a slave: when I tried
> bin/stop-all.sh, it stopped processes everywhere.
>
> Gave me an idea to list its own name as the only one in slaves, which
> might work as expected then... but if I can just kill the process, that
> is even easier.
>
>
> On 08/16/2012 03:49 PM, Harsh J wrote:
> > Perhaps what you're looking for is the Decommission feature of HDFS,
> > which lets you safely remove a DN without incurring replica loss? It
> > is detailed in Hadoop: The Definitive Guide (2nd Edition), page 315 |
> > Chapter 10: Administering Hadoop / Maintenance section - Title
> > "Decommissioning old nodes", or at
> > http://developer.yahoo.com/hadoop/tutorial/module2.html#decommission?
> >
> > On Fri, Aug 17, 2012 at 12:41 AM, Terry Healy <[EMAIL PROTECTED]> wrote:
> >> Sorry - this seems pretty basic, but I could not find a reference on
> >> line or in my books. Is there a graceful way to stop a single datanode,
> >> (for example to move the system to a new rack where it will be put back
> >> on-line) or do you just whack the process ID and let HDFS clean up the
> >> mess?
> >>
> >> Thanks
> >>
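
If you do end up whacking the process by hand, a minimal sketch of what that looks like; hadoop-daemon.sh stop does essentially the same thing via its pid file (the pids and hostname below are made up):

cluster@datanode1:~$ jps
4721 DataNode
5012 Jps
cluster@datanode1:~$ kill 4721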
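
For the decommission route Harsh points to above, a minimal sketch against Hadoop 1.x; the exclude-file path and the hostname datanode1 are placeholders:

# On the NameNode host: list the node to retire in an exclude file
echo "datanode1" >> /home/cluster/hadoop-1.0.3/conf/excludes

# Reference that file from conf/hdfs-site.xml:
#   <property>
#     <name>dfs.hosts.exclude</name>
#     <value>/home/cluster/hadoop-1.0.3/conf/excludes</value>
#   </property>

# Ask the NameNode to re-read its host lists
bin/hadoop dfsadmin -refreshNodes

# Wait until the node reports "Decommissioned", then stop its daemon
bin/hadoop dfsadmin -report

Once the status flips to Decommissioned, every block on the node has been re-replicated elsewhere, so the DataNode can be stopped and the machine moved without replica loss.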