Hive >> mail # user >> Error while Creating Table in Hive


Babak Bastan 2012-06-05, 17:13
shashwat shriparv 2012-06-05, 18:02
Babak Bastan 2012-06-05, 18:13
shashwat shriparv 2012-06-05, 18:15
Babak Bastan 2012-06-05, 18:20
Babak Bastan 2012-06-05, 18:23
shashwat shriparv 2012-06-05, 18:34
Babak Bastan 2012-06-05, 18:43
Babak Bastan 2012-06-05, 19:30
Bejoy KS 2012-06-05, 19:55
Babak Bastan 2012-06-05, 20:00
shashwat shriparv 2012-06-06, 13:32
Babak Bastan 2012-06-06, 14:58
Mohammad Tariq 2012-06-06, 17:42
Mohammad Tariq 2012-06-06, 17:44
Babak Bastan 2012-06-06, 17:47
Mohammad Tariq 2012-06-06, 17:49
Babak Bastan 2012-06-06, 17:52
Mohammad Tariq 2012-06-06, 17:59
Babak Bastan 2012-06-06, 18:12
Mohammad Tariq 2012-06-06, 18:24
Mohammad Tariq 2012-06-06, 18:25
Babak Bastan 2012-06-06, 19:32
Re: Error while Creating Table in Hive
Could you post your logs? That would help me understand the
problem properly.

Regards,
    Mohammad Tariq
On Thu, Jun 7, 2012 at 1:02 AM, Babak Bastan <[EMAIL PROTECTED]> wrote:
> Thank you very much, Mohammad, for your attention. I followed the steps,
> but the error is the same as last time.
> Here is my hosts file:
>
> 127.0.0.1       localhost
> #127.0.0.1      ubuntu.ubuntu-domain    ubuntu
>
>
> # The following lines are desirable for IPv6 capable hosts
>
> #::1     ip6-localhost ip6-loopback
> #fe00::0 ip6-localnet
> #ff00::0 ip6-mcastprefix
> #ff02::1 ip6-allnodes
> #ff02::2 ip6-allrouters
>
> but no effect :(
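
In the hosts file above, 127.0.0.1 maps to localhost but the ubuntu.ubuntu-domain line is commented out. Whether both names actually resolve can be checked with a minimal sketch like the following (assuming getent is available, as it is on any standard Ubuntu/glibc system):

```shell
# Check what "localhost" and the machine's hostname resolve to.
# getent exits non-zero if the name cannot be resolved at all.
getent hosts localhost
getent hosts "$(hostname)"
```

If the second lookup fails, Hadoop daemons bound to the machine's hostname will not be reachable, which is one common cause of the connection errors discussed in this thread.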
>
> On Wed, Jun 6, 2012 at 8:25 PM, Mohammad Tariq <[EMAIL PROTECTED]> wrote:
>>
>> also change the permissions of these directories to 777.
>>
>> Regards,
>>     Mohammad Tariq
>>
>>
>> On Wed, Jun 6, 2012 at 11:54 PM, Mohammad Tariq <[EMAIL PROTECTED]>
>> wrote:
>> > Create a directory "/home/username/hdfs" (or somewhere of your
>> > choice). Inside this hdfs directory create three subdirectories -
>> > name, data, and temp - then follow these steps:
>> >
>> > Add the following properties to your core-site.xml -
>> >
>> >   <property>
>> >     <name>fs.default.name</name>
>> >     <value>hdfs://localhost:9000/</value>
>> >   </property>
>> >
>> >   <property>
>> >     <name>hadoop.tmp.dir</name>
>> >     <value>/home/mohammad/hdfs/temp</value>
>> >   </property>
>> >
>> > Then add the following two properties to your hdfs-site.xml -
>> >
>> >   <property>
>> >     <name>dfs.replication</name>
>> >     <value>1</value>
>> >   </property>
>> >
>> >   <property>
>> >     <name>dfs.name.dir</name>
>> >     <value>/home/mohammad/hdfs/name</value>
>> >   </property>
>> >
>> >   <property>
>> >     <name>dfs.data.dir</name>
>> >     <value>/home/mohammad/hdfs/data</value>
>> >   </property>
>> >
>> > Finally, add this property to your mapred-site.xml -
>> >
>> >   <property>
>> >     <name>mapred.job.tracker</name>
>> >     <value>hdfs://localhost:9001</value>
>> >   </property>
>> >
>> > NOTE: you can give these directories any names you like; just make
>> > sure the same paths appear as the values of the properties above in
>> > your configuration files (give the full path of each directory, not
>> > just its name).
>> >
>> > After this, follow the steps provided in the previous reply.
>> >
>> > Regards,
>> >     Mohammad Tariq
>> >
>> >
>> > On Wed, Jun 6, 2012 at 11:42 PM, Babak Bastan <[EMAIL PROTECTED]>
>> > wrote:
>> >> Thanks, Mohammad.
>> >>
>> >> with this command:
>> >>
>> >> babak@ubuntu:~/Downloads/hadoop/bin$ hadoop namenode -format
>> >>
>> >> this is my output:
>> >>
>> >> 12/06/06 20:05:20 INFO namenode.NameNode: STARTUP_MSG:
>> >> /************************************************************
>> >> STARTUP_MSG: Starting NameNode
>> >> STARTUP_MSG:   host = ubuntu/127.0.1.1
>> >> STARTUP_MSG:   args = [-format]
>> >> STARTUP_MSG:   version = 0.20.2
>> >> STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r
>> >> 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
>> >> ************************************************************/
>> >> 12/06/06 20:05:20 INFO namenode.FSNamesystem:
>> >> fsOwner=babak,babak,adm,dialout,cdrom,plugdev,lpadmin,admin,sambashare
>> >> 12/06/06 20:05:20 INFO namenode.FSNamesystem: supergroup=supergroup
>> >> 12/06/06 20:05:20 INFO namenode.FSNamesystem: isPermissionEnabled=true
>> >> 12/06/06 20:05:20 INFO common.Storage: Image file of size 95 saved in 0
>> >> seconds.
>> >> 12/06/06 20:05:20 INFO common.Storage: Storage directory
>> >> /tmp/hadoop-babak/dfs/name has been successfully formatted.
>> >> 12/06/06 20:05:20 INFO namenode.NameNode: SHUTDOWN_MSG:
>> >> /************************************************************
Babak Bastan 2012-06-06, 19:39
Mohammad Tariq 2012-06-06, 19:41
Babak Bastan 2012-06-06, 19:55
Mohammad Tariq 2012-06-06, 20:04
shashwat shriparv 2012-06-06, 20:02
Babak Bastan 2012-06-06, 20:12
Babak Bastan 2012-06-06, 20:15
Mohammad Tariq 2012-06-06, 20:26
Babak Bastan 2012-06-06, 20:22
Mohammad Tariq 2012-06-06, 20:33
Babak Bastan 2012-06-06, 20:52
Mohammad Tariq 2012-06-06, 21:21
Babak Bastan 2012-06-06, 21:34
Mohammad Tariq 2012-06-06, 21:43
shashwat shriparv 2012-06-05, 18:19
Bejoy Ks 2012-06-05, 17:33