Re: Error while Creating Table in Hive
also change the permissions of these directories to 777.
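For example, a minimal sketch (assuming the /home/mohammad/hdfs layout from the message below; 777 is wide open and only sensible on a local test box):

# recursively open up the name, data, and temp directories (test setups only)
chmod -R 777 /home/mohammad/hdfs/name /home/mohammad/hdfs/data /home/mohammad/hdfs/temp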

Regards,
    Mohammad Tariq
On Wed, Jun 6, 2012 at 11:54 PM, Mohammad Tariq <[EMAIL PROTECTED]> wrote:
> create a directory "/home/username/hdfs" (or at a place of your
> choice); inside this hdfs directory create three subdirectories -
> name, data, and temp (one way to do this is sketched below), then
> follow these steps:
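> a minimal sketch of creating those directories (assuming user mohammad,
> matching the property values below):
>
> # create the hdfs parent directory and its three subdirectories in one go
> mkdir -p /home/mohammad/hdfs/name /home/mohammad/hdfs/data /home/mohammad/hdfs/temp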
>
> add the following properties to your core-site.xml -
>
> <property>
>   <name>fs.default.name</name>
>   <value>hdfs://localhost:9000/</value>
> </property>
>
> <property>
>   <name>hadoop.tmp.dir</name>
>   <value>/home/mohammad/hdfs/temp</value>
> </property>
>
> then add the following two properties to your hdfs-site.xml -
>
> <property>
>   <name>dfs.replication</name>
>   <value>1</value>
> </property>
>
> <property>
>   <name>dfs.name.dir</name>
>   <value>/home/mohammad/hdfs/name</value>
> </property>
>
> <property>
>   <name>dfs.data.dir</name>
>   <value>/home/mohammad/hdfs/data</value>
> </property>
>
> finally, add this property to your mapred-site.xml -
>
> <property>
>   <name>mapred.job.tracker</name>
>   <value>hdfs://localhost:9001</value>
> </property>
>
> NOTE: you can give these directories any names of your choice; just
> keep in mind that the values of the properties specified above must
> match them in your configuration files (and give the full path of
> each directory, not just its name).
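> for instance, if you picked a different location (hypothetical path,
> purely to illustrate the full-path rule):
>
> <property>
>   <name>dfs.name.dir</name>
>   <!-- the full absolute path, not just "name" -->
>   <value>/home/yourname/hdfs/name</value>
> </property>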
>
> After this, follow the steps provided in the previous reply.
>
> Regards,
>     Mohammad Tariq
>
>
> On Wed, Jun 6, 2012 at 11:42 PM, Babak Bastan <[EMAIL PROTECTED]> wrote:
>> thanks, Mohammad
>>
>> with this command:
>>
>> babak@ubuntu:~/Downloads/hadoop/bin$ hadoop namenode -format
>>
>> this is my output:
>>
>> 12/06/06 20:05:20 INFO namenode.NameNode: STARTUP_MSG:
>> /************************************************************
>> STARTUP_MSG: Starting NameNode
>> STARTUP_MSG:   host = ubuntu/127.0.1.1
>> STARTUP_MSG:   args = [-format]
>> STARTUP_MSG:   version = 0.20.2
>> STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r
>> 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
>> ************************************************************/
>> 12/06/06 20:05:20 INFO namenode.FSNamesystem:
>> fsOwner=babak,babak,adm,dialout,cdrom,plugdev,lpadmin,admin,sambashare
>> 12/06/06 20:05:20 INFO namenode.FSNamesystem: supergroup=supergroup
>> 12/06/06 20:05:20 INFO namenode.FSNamesystem: isPermissionEnabled=true
>> 12/06/06 20:05:20 INFO common.Storage: Image file of size 95 saved in 0
>> seconds.
>> 12/06/06 20:05:20 INFO common.Storage: Storage directory
>> /tmp/hadoop-babak/dfs/name has been successfully formatted.
>> 12/06/06 20:05:20 INFO namenode.NameNode: SHUTDOWN_MSG:
>> /************************************************************
>> SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.1.1
>> ************************************************************/
>>
>> with this command:
>>
>> babak@ubuntu:~/Downloads/hadoop/bin$ start-dfs.sh
>>
>> this is the output:
>>
>> mkdir: cannot create directory '/home/babak/Downloads/hadoop/bin/../logs':
>> Permission denied
>>
>> (this output was originally in German; it means I have no permission
>> to create that folder)
>>
>>
>> On Wed, Jun 6, 2012 at 7:59 PM, Mohammad Tariq <[EMAIL PROTECTED]> wrote:
>>>
>>> once we are done with the configuration, we need to format the file
>>> system. Use this command to do that -
>>> bin/hadoop namenode -format
>>>
>>> after this, the hadoop daemon processes should be started using the
>>> following commands -
>>> bin/start-dfs.sh (it'll start the NameNode & DataNode)
>>> bin/start-mapred.sh (it'll start the JobTracker & TaskTracker)
>>>
>>> after this, use jps to check that everything is alright, or point your
>>> browser to localhost:50070. If you find any further problems, provide
>>> us with the error logs. :)
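>>> for reference, on a healthy single-node setup jps should list something
>>> like this (the PIDs here are illustrative, not from this thread):
>>>
>>> 4851 NameNode
>>> 4973 DataNode
>>> 5105 SecondaryNameNode
>>> 5197 JobTracker
>>> 5322 TaskTracker
>>> 5410 Jps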