Hive >> mail # user >> Error while Creating Table in Hive


+ Babak Bastan 2012-06-05, 17:13
+ shashwat shriparv 2012-06-05, 18:02
+ Babak Bastan 2012-06-05, 18:13
+ shashwat shriparv 2012-06-05, 18:15
+ Babak Bastan 2012-06-05, 18:20
+ Babak Bastan 2012-06-05, 18:23
+ shashwat shriparv 2012-06-05, 18:34
+ Babak Bastan 2012-06-05, 18:43
+ Babak Bastan 2012-06-05, 19:30
+ Bejoy KS 2012-06-05, 19:55
+ Babak Bastan 2012-06-05, 20:00
+ shashwat shriparv 2012-06-06, 13:32
+ Babak Bastan 2012-06-06, 14:58
+ Mohammad Tariq 2012-06-06, 17:42
+ Mohammad Tariq 2012-06-06, 17:44
+ Babak Bastan 2012-06-06, 17:47
+ Mohammad Tariq 2012-06-06, 17:49
+ Babak Bastan 2012-06-06, 17:52
+ Mohammad Tariq 2012-06-06, 17:59
+ Babak Bastan 2012-06-06, 18:12
Re: Error while Creating Table in Hive
create a directory "/home/username/hdfs" (or at some place of your
choice). Inside this hdfs directory create three subdirectories -
name, data, and temp - then follow these steps:
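As a concrete sketch, the layout described above can be created like this ($HOME/hdfs is just a placeholder base path; substitute whatever location you chose):

```shell
# Placeholder base directory for HDFS storage -- pick any location you like.
HDFS_BASE="$HOME/hdfs"

# Create the three subdirectories referenced by the configuration properties below.
mkdir -p "$HDFS_BASE/name" "$HDFS_BASE/data" "$HDFS_BASE/temp"

# Show what was created.
ls "$HDFS_BASE"
```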

add the following properties to your core-site.xml -

<property>
 <name>fs.default.name</name>
 <value>hdfs://localhost:9000/</value>
</property>

<property>
 <name>hadoop.tmp.dir</name>
 <value>/home/mohammad/hdfs/temp</value>
</property>

then add the following three properties to your hdfs-site.xml -

<property>
<name>dfs.replication</name>
<value>1</value>
</property>

<property>
<name>dfs.name.dir</name>
<value>/home/mohammad/hdfs/name</value>
</property>

<property>
<name>dfs.data.dir</name>
<value>/home/mohammad/hdfs/data</value>
</property>

finally add this property to your mapred-site.xml -

<property>
 <name>mapred.job.tracker</name>
 <value>hdfs://localhost:9001</value>
</property>

NOTE: you can give these directories any names of your choice; just
keep in mind that you have to use the same names as the values of the
above specified properties in your configuration files (give the full
path of these directories, not just the name of the directory).

After this, follow the steps provided in the previous reply.

Regards,
    Mohammad Tariq
On Wed, Jun 6, 2012 at 11:42 PM, Babak Bastan <[EMAIL PROTECTED]> wrote:
> Thanks, Mohammad
>
> with this command:
>
> babak@ubuntu:~/Downloads/hadoop/bin$ hadoop namenode -format
>
> this is my output:
>
> 12/06/06 20:05:20 INFO namenode.NameNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting NameNode
> STARTUP_MSG:   host = ubuntu/127.0.1.1
> STARTUP_MSG:   args = [-format]
> STARTUP_MSG:   version = 0.20.2
> STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r
> 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
> ************************************************************/
> 12/06/06 20:05:20 INFO namenode.FSNamesystem:
> fsOwner=babak,babak,adm,dialout,cdrom,plugdev,lpadmin,admin,sambashare
> 12/06/06 20:05:20 INFO namenode.FSNamesystem: supergroup=supergroup
> 12/06/06 20:05:20 INFO namenode.FSNamesystem: isPermissionEnabled=true
> 12/06/06 20:05:20 INFO common.Storage: Image file of size 95 saved in 0
> seconds.
> 12/06/06 20:05:20 INFO common.Storage: Storage directory
> /tmp/hadoop-babak/dfs/name has been successfully formatted.
> 12/06/06 20:05:20 INFO namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.1.1
> ************************************************************/
>
> by this command:
>
> babak@ubuntu:~/Downloads/hadoop/bin$ start-dfs.sh
>
> this is the output:
>
> mkdir: kann Verzeichnis „/home/babak/Downloads/hadoop/bin/../logs“ nicht
> anlegen: Keine Berechtigung
>
> this output (it's in German and it means no permission to create this folder)
>
>
> On Wed, Jun 6, 2012 at 7:59 PM, Mohammad Tariq <[EMAIL PROTECTED]> wrote:
>>
>> once we are done with the configuration, we need to format the file
>> system. Use this command to do that -
>> bin/hadoop namenode -format
>>
>> after this, hadoop daemon processes should be started using the
>> following commands -
>> bin/start-dfs.sh (it'll start NN & DN)
>> bin/start-mapred.sh (it'll start JT & TT)
>>
>> after this, use jps to check if everything is alright, or point your
>> browser to localhost:50070. If you run into any further problem,
>> provide us with the error logs :)
>>
>> Regards,
>>     Mohammad Tariq
>>
>>
>> On Wed, Jun 6, 2012 at 11:22 PM, Babak Bastan <[EMAIL PROTECTED]> wrote:
>> > were you able to format hdfs properly???
>> > I didn't get your question. Do you mean HADOOP_HOME? Or where did I
>> > install Hadoop?
>> >
>> > On Wed, Jun 6, 2012 at 7:49 PM, Mohammad Tariq <[EMAIL PROTECTED]>
>> > wrote:
>> >>
>> >> if you are getting only this, it means your hadoop is not
>> >> running. Were you able to format hdfs properly?
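The permission error quoted above (mkdir failing on bin/../logs) usually means the Hadoop install directory isn't writable by the user running the start scripts. A workaround sketch, assuming the install lives under $HOME/Downloads/hadoop (a placeholder path); if the tree is actually owned by root, you would need sudo chown -R $USER on the install directory instead:

```shell
# Placeholder path to the Hadoop install; adjust to your own location.
HADOOP_DIR="$HOME/Downloads/hadoop"

# Pre-create the logs directory the start scripts try to make,
# and ensure the current user can write to it.
mkdir -p "$HADOOP_DIR/logs"
chmod u+rwx "$HADOOP_DIR/logs"

# Verify: the directory should now exist and be writable.
ls -ld "$HADOOP_DIR/logs"
```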
+ Mohammad Tariq 2012-06-06, 18:25
+ Babak Bastan 2012-06-06, 19:32
+ Mohammad Tariq 2012-06-06, 19:36
+ Babak Bastan 2012-06-06, 19:39
+ Mohammad Tariq 2012-06-06, 19:41
+ Babak Bastan 2012-06-06, 19:55
+ Mohammad Tariq 2012-06-06, 20:04
+ shashwat shriparv 2012-06-06, 20:02
+ Babak Bastan 2012-06-06, 20:12
+ Babak Bastan 2012-06-06, 20:15
+ Mohammad Tariq 2012-06-06, 20:26
+ Babak Bastan 2012-06-06, 20:22
+ Mohammad Tariq 2012-06-06, 20:33
+ Babak Bastan 2012-06-06, 20:52
+ Mohammad Tariq 2012-06-06, 21:21
+ Babak Bastan 2012-06-06, 21:34
+ Mohammad Tariq 2012-06-06, 21:43
+ shashwat shriparv 2012-06-05, 18:19
+ Bejoy Ks 2012-06-05, 17:33