HBase >> mail # user >> Starting Abnormally After Shutting Down For Some Time


Re: Starting Abnormally After Shutting Down For Some Time
Bing:
Your pid file location can be set via hbase-env.sh; the default is /tmp ...

# The directory where pid files are stored. /tmp by default.
# export HBASE_PID_DIR=/var/hadoop/pids
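
A stale pid file in /tmp (for example one left behind after a crash, or one whose pid has been recycled) is a common reason a daemon refuses to start. As a rough sketch of a check you could run by hand, here is a hypothetical helper (`pid_alive` is not part of HBase or Hadoop):

```shell
# Hypothetical helper: report whether the process named in a pid file
# is still alive. A stale pid file left behind in /tmp can make the
# start scripts believe the daemon is already running.
pid_alive() {
  pidfile="$1"
  [ -r "$pidfile" ] || { echo "missing"; return 1; }
  pid=$(cat "$pidfile")
  if kill -0 "$pid" 2>/dev/null; then
    echo "running (pid $pid)"
  else
    echo "stale (pid $pid)"
  fi
}

# Example: pid_alive /tmp/hbase-libing-master.pid
```

If the file turns out to be stale, removing it before restarting usually clears the startup problem.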
On Wed, Mar 28, 2012 at 3:04 PM, Peter Vandenabeele
<[EMAIL PROTECTED]> wrote:
> On Wed, Mar 28, 2012 at 9:53 PM, Bing Li <[EMAIL PROTECTED]> wrote:
>> Dear Peter,
>>
>> When I just started the Ubuntu machine, there was nothing in /tmp.
>>
>> After starting $HADOOP/bin/start-dfs.sh and $HBase/bin/start-hbase.sh, the
>> following files were under /tmp. Do you think anything is wrong? Thanks!
>>
>> libing@greatfreeweb:/tmp$ ls -alrt
>> total 112
>> drwxr-xr-x 22 root   root    4096 2012-03-28 14:17 ..
>> -rw-r--r--  1 libing libing     5 2012-03-29 04:48
>> hadoop-libing-namenode.pid
>> -rw-r--r--  1 libing libing     5 2012-03-29 04:48
>> hadoop-libing-datanode.pid
>> -rw-r--r--  1 libing libing     5 2012-03-29 04:48
>> hadoop-libing-secondarynamenode.pid
>> -rw-r--r--  1 libing libing     5 2012-03-29 04:48
>> hbase-libing-zookeeper.pid
>> drwxr-xr-x  3 libing libing  4096 2012-03-29 04:48 hbase-libing
>> -rw-r--r--  1 libing libing     5 2012-03-29 04:48 hbase-libing-master.pid
>> -rw-r--r--  1 libing libing     5 2012-03-29 04:48
>> hbase-libing-regionserver.pid
>> drwxr-xr-x  2 libing libing  4096 2012-03-29 04:48 hsperfdata_libing
>> drwxrwxrwt  4 root   root    4096 2012-03-29 04:48 .
>> -rw-r--r--  1 libing libing 71819 2012-03-29 04:48
>> jffi5395899026867792565.tmp
>> libing@greatfreeweb:/tmp$
>>
>> Best,
>> Bing
>
> Hmmm, all these files are owned by user 'libing' ...
> that is different from my set-up.
>
> Which manual exactly are you following for the pseudo-distributed
> installation? In the Cloudera manual that I followed (cdh3u2) there
> was a mention of creating different users, IIRC.
>
> Also, in my set-up Hadoop is started automatically at boot-up
> from the scripts in
>
> /etc/rc2.d/S20hadoop-...
>
> where user root then performs an su to user
> => hdfs for the NameNode, SecondaryNameNode and DataNode
> => mapred for the JobTracker and TaskTracker.
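
For comparison, an init script of that kind boils down to something like the fragment below. The paths and daemon script names here are illustrative assumptions, not the actual Cloudera scripts:

```shell
# Illustrative init-script fragment (paths are assumptions): root
# switches to the service user before launching each daemon, so pid
# files and logs end up owned by hdfs/mapred rather than your login user.
su -s /bin/sh hdfs -c "/usr/lib/hadoop/bin/hadoop-daemon.sh start namenode"
su -s /bin/sh mapred -c "/usr/lib/hadoop/bin/hadoop-daemon.sh start jobtracker"
```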
>
> I am not sure it is actually the intention that you start the
> five Hadoop daemons with the
>
> "... $HADOOP/bin/start-dfs.sh ..."
>
> command as you describe.
>
> I stop and start them with
>
> sudo /etc/init.d/hadoop-0.20-datanode {stop|start}
> sudo /etc/init.d/hadoop-0.20-namenode {stop|start}
> sudo /etc/init.d/hadoop-0.20-secondarynamenode {stop|start}
> sudo /etc/init.d/hadoop-0.20-tasktracker {stop|start}
> sudo /etc/init.d/hadoop-0.20-jobtracker {stop|start}
>
> and this seems to be stable for now :-)
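
Those five invocations can be wrapped in a small loop. The `hadoop_cmds` function below is a hypothetical convenience sketch, not a standard tool: it only prints the commands (stopping dependents before the NameNode, starting the NameNode first), so you can review them and pipe to `sh` once you trust the list:

```shell
#!/bin/sh
# Hypothetical wrapper around the init scripts above: print the
# stop/start commands in a sensible order instead of executing them.
hadoop_cmds() {
  action="$1"
  if [ "$action" = "stop" ]; then
    # stop dependents first, NameNode last
    order="tasktracker jobtracker datanode secondarynamenode namenode"
  else
    # start the NameNode first, workers after
    order="namenode secondarynamenode datanode jobtracker tasktracker"
  fi
  for svc in $order; do
    echo "sudo /etc/init.d/hadoop-0.20-$svc $action"
  done
}

# Example: hadoop_cmds stop | sh
```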
>
> But maybe the manual that you follow gives other advice?
>
> HTH (not sure, I am a beginner too ...)
>
> Peter