Re: how to close hadoop when tmp files were cleared
Harsh J 2013-06-17, 09:09
The -9 (SIGKILL) is unnecessary and isn't recommended unless the process is
unresponsive. SIGTERM has the additional benefit of running any necessary
shutdown handlers, whereas SIGKILL is instant death.
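The difference can be sketched with a throwaway bash process standing in for a Hadoop daemon (the `trap` plays the role of the daemon's shutdown hook — this is an illustration, not Hadoop's actual shutdown code):

```shell
# A trapped SIGTERM runs the cleanup handler; SIGKILL can never be trapped.
# The backgrounded bash below is a stand-in for a Hadoop daemon.
bash -c 'trap "echo cleanup ran; exit 0" TERM; sleep 60 & wait' &
pid=$!
sleep 1
kill -TERM "$pid"   # regular kill: the TERM trap fires and prints "cleanup ran"
wait "$pid"
# kill -9 "$pid" would instead end the process with no chance to clean up
```

Note the `sleep 60 & wait` idiom: bash only runs a trap once the foreground command returns, so the sleep is backgrounded and `wait` (which signals interrupt immediately) is used instead.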
On Mon, Jun 17, 2013 at 2:34 PM, Azuryy Yu <[EMAIL PROTECTED]> wrote:
> ps aux | grep java — you can find the PID that way, then just 'kill -9' it to stop the Hadoop daemons.
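The lookup step can be sketched as below. Using `pgrep -f` here is my own suggestion (a shorter equivalent of the ps-and-grep pipe), and the `sleep 297` process is just a throwaway stand-in for a daemon:

```shell
# Print PIDs whose full command line matches a pattern, e.g. a daemon's
# main class:
#   pgrep -f 'java.*NameNode'
# Roughly equivalent to:
#   ps aux | grep '[j]ava' | grep NameNode | awk '{print $2}'

# Demo on a throwaway process standing in for a daemon:
sleep 297 &
demo_pid=$!
pgrep -f 'sleep 297'   # prints the PID found above
kill "$demo_pid"       # plain SIGTERM, not -9
```

The `[j]ava` bracket trick keeps the grep process itself out of the results, which a bare `grep java` would not.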
> On Mon, Jun 17, 2013 at 4:34 PM, Harsh J <[EMAIL PROTECTED]> wrote:
>> Just send the processes a SIGTERM signal (a regular kill). It's what the
>> script does anyway. Be sure to change the PID directory before the next
>> restart, though.
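Concretely: the PID directory defaults to /tmp, which is why the stop scripts found no .pid files to read once the tmp-cleaner removed them. A minimal sketch of the change, assuming the stock hadoop-env.sh layout (the path /var/run/hadoop is only an example — any persistent directory writable by the daemon user works):

```shell
# In conf/hadoop-env.sh — move PID files off /tmp so system tmp-cleaners
# cannot delete them between restarts (example path; adjust as needed):
export HADOOP_PID_DIR=/var/run/hadoop
```

After this change, stop-all.sh and start-all.sh will read and write their .pid files under the new directory on the next restart.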
>> On Mon, Jun 17, 2013 at 1:09 PM, <[EMAIL PROTECTED]> wrote:
>> > Hi,
>> > My Hadoop cluster has been running for a while. Now I want to shut it
>> > down for some system changes, but the command "bin/stop-all.sh" reports
>> > "no jobtracker to stop", "no tasktracker to stop", "no namenode to stop"
>> > and "no datanode to stop". Running "jps" shows nothing but jps itself,
>> > yet Hadoop is indeed still running. I think some tmp files belonging to
>> > Hadoop were cleared by the operating system. Could someone tell me how
>> > to stop Hadoop without breaking any data files?
>> > Any guidance would be greatly appreciated. Thanks!
>> > Jeff
>> > --------------------------------------------------------
>> > ZTE Information Security Notice: The information contained in this mail
>> > (and
>> > any attachment transmitted herewith) is privileged and confidential and
>> > is
>> > intended for the exclusive use of the addressee(s). If you are not an
>> > intended recipient, any disclosure, reproduction, distribution or other
>> > dissemination or use of the information contained is strictly
>> > prohibited.
>> > If you have received this mail in error, please delete it and notify us
>> > immediately.
>> Harsh J