HBase >> mail # user >> Starting Abnormally After Shutting Down For Some Time


Re: Starting Abnormally After Shutting Down For Some Time
Dear all,

I found that some configuration data was saved under /tmp on my system, so
when that data is lost, HBase cannot start normally.

However, I have already changed the HDFS directory to another location. Why
are there still some files under /tmp?

To change the HDFS directory, hdfs-site.xml was updated as follows. What
else should I do to move everything out of /tmp?

    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <configuration>
      <property>
        <name>dfs.replication</name>
        <value>1</value>
      </property>
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/libing/GreatFreeLabs/Hadoop/FS</value>
      </property>
      <property>
        <name>dfs.name.dir</name>
        <value>${hadoop.tmp.dir}/dfs/name/</value>
      </property>
      <property>
        <name>dfs.data.dir</name>
        <value>${hadoop.tmp.dir}/dfs/data/</value>
      </property>
   </configuration>
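One likely source of the leftover files (an assumption, since the original post does not show hbase-site.xml) is HBase's own hbase.tmp.dir property, which also defaults to a path under /tmp and is configured independently of hadoop.tmp.dir. A minimal hbase-site.xml sketch that relocates it; the path below is a placeholder chosen to match the layout used above:

```xml
<?xml version="1.0"?>
<configuration>
  <!-- hbase.tmp.dir defaults to a directory under /tmp if left unset -->
  <property>
    <name>hbase.tmp.dir</name>
    <value>/home/libing/GreatFreeLabs/HBase/tmp</value>
  </property>
</configuration>
```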

Thanks so much!

Best,
Bing

On Wed, Mar 28, 2012 at 4:24 PM, Bing Li <[EMAIL PROTECTED]> wrote:

> Dear Manish,
>
> I appreciate so much for your replies!
>
> The system tmp directory has been changed to another location in my hdfs-site.xml.
>
> If I ran $HADOOP_HOME/bin/start-all.sh, all of the services were listed by
> jps, including the job tracker and the task tracker.
>
>     10211 SecondaryNameNode
>     10634 Jps
>     9992 DataNode
>     10508 TaskTracker
>     10312 JobTracker
>     9797 NameNode
>
> In the job tracker's log, one exception was found.
>
>        org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete
> /home/libing/GreatFreeLabs/Hadoop/FS/mapred/system. Name node is in safe mode.
>
> In my system, I didn't see the directory ~/mapred. How should I configure
> it?
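On the SafeModeException above: the namenode stays in safe mode while it replays its edit log and waits for datanode block reports, and it normally leaves safe mode on its own. A sketch of how to inspect and, if truly necessary, override it with the dfsadmin CLI of that Hadoop generation:

```shell
# Check whether the namenode is currently in safe mode
hadoop dfsadmin -safemode get

# Block until the namenode leaves safe mode on its own
hadoop dfsadmin -safemode wait

# Last resort: force the namenode out of safe mode
hadoop dfsadmin -safemode leave
```

Forcing safe mode off before the datanodes have reported in can mask real missing-block problems, so `get`/`wait` are the safer first steps.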
>
> The properties you listed were not set on my system. Are they required?
> Since they have default values (
> http://hbase.apache.org/docs/r0.20.6/hbase-conf.html), do I need to set
> them explicitly?
>
>      - hbase.zookeeper.property.clientPort
>      - hbase.zookeeper.quorum
>      - hbase.zookeeper.property.dataDir
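If these three properties do end up being set explicitly, a minimal hbase-site.xml sketch follows; the values are illustrative assumptions for a single-node, pseudo-distributed setup, with the dataDir placed alongside the other relocated directories rather than under /tmp:

```xml
<?xml version="1.0"?>
<configuration>
  <!-- port clients use to reach ZooKeeper; 2181 is the default -->
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <!-- comma-separated ZooKeeper hosts; localhost for pseudo-distributed -->
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
  </property>
  <!-- keep ZooKeeper's snapshot directory out of /tmp as well -->
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/libing/GreatFreeLabs/ZooKeeper/data</value>
  </property>
</configuration>
```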
>
> Now the system has been reinstalled. At least the pseudo-distributed mode
> runs well. I also tried shutting down the Ubuntu machine and starting it
> again, and the system worked fine. But I worry that the master-related
> problem will come back if the machine stays shut down for a longer time. I
> really don't understand the reason.
>
> Thanks so much!
>
> Best,
> Bing
>
> On Wed, Mar 28, 2012 at 3:11 PM, Manish Bhoge <[EMAIL PROTECTED]> wrote:
>
>> Bing,
>>
>> Based on my experience with this kind of configuration, I can list some
>> points, one of which may be your solution.
>>
>> - First and foremost, don't store your service metadata in the system tmp
>> directory, because it may get cleaned up on every start and you lose all
>> your job tracker and datanode information. That is as good as formatting
>> your namenode.
>> - If you're using CDH, make sure the permissions are set up correctly for
>> the root, dfs data, and mapred directories (refer to the CDH documentation).
>> - I didn't see the job tracker in your service list. It should be up and
>> running. Check the job tracker log for any permission issues when starting
>> the job tracker and task tracker.
>> - Before trying anything on the HBase side, make sure all your Hadoop
>> services are up and running. You can check that by running a sample program
>> and verifying that the job tracker and task tracker can create intermediate
>> files in your mapred.system and mapred.local directories.
>> - Once you have all the Hadoop services up, don't set/change any permissions.
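The "run a sample program" check above can be done with the examples jar bundled with Hadoop; a sketch, assuming a Hadoop 1.x-style layout (the exact jar name varies by release and distribution):

```shell
# Confirm the daemons are running
jps

# Run the bundled pi estimator as a MapReduce smoke test
hadoop jar $HADOOP_HOME/hadoop-examples-*.jar pi 2 100

# A simple HDFS round trip also exercises the namenode and datanode
hadoop fs -put /etc/hosts /smoke-test
hadoop fs -cat /smoke-test
hadoop fs -rm /smoke-test
```

If the pi job submits and completes, the job tracker and task tracker are able to write their intermediate files, which is exactly what the advice above asks you to verify.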
>>
>> As far as the HBase configuration is concerned, there are two paths for
>> setup: either you configure ZooKeeper within hbase-site.xml, or you
>> configure it separately via zoo.cfg. If you are going with the
>> hbase-site.xml settings for ZooKeeper, then confirm the following settings:
>> - hbase.zookeeper.property.clientPort.