Subject: Re: Hive upload


Hi,

Regarding the sqoop import: I noticed you wrote -table instead of
--table (one dash instead of two).
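
For example, with a placeholder table name ("mytable" is just an
illustration, the rest of the command is elided), the option should read:

  sqoop import ... --table mytable ...

and not "-table mytable".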

Ruslan

On Wed, Jul 4, 2012 at 12:41 PM, Bejoy Ks <[EMAIL PROTECTED]> wrote:
> Hi Yogesh
>
> To add on, it looks like the table definition doesn't match the data either.
>
> Your table definition has 4 columns, with the 4th column as int:
>
> describe formatted letstry;
> OK
> # col_name                data_type               comment
>
> rollno                  int                     None
> name                    string                  None
> numbr                   int                     None
> sno                     int                     None
>
>
> But the data has 5 columns, with the 4th column as a string:
>
> 1,John,123,abc,2
>
>
> Also, when you create the table, make sure to specify the right field
> separator:
>
> ....
> ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
>  STORED AS TEXTFILE
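>
> For instance, a full table definition matching your sample row could look
> like this (the name of the extra string column is made up here, adjust it
> to whatever the 4th field really is):
>
> CREATE TABLE letstry (
>   rollno INT,
>   name   STRING,
>   numbr  INT,
>   code   STRING,  -- 4th field ("abc") is a string; "code" is a made-up name
>   sno    INT
> )
> ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
> STORED AS TEXTFILE;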
>
>
> Regards
> Bejoy KS
>
> ________________________________
> From: Bejoy Ks <[EMAIL PROTECTED]>
> To: "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
> Sent: Wednesday, July 4, 2012 1:59 PM
> Subject: Re: Hive upload
>
> Hi Yogesh
>
> Looks like the Sqoop import from the RDBMS to HDFS is succeeding but the
> Hive table creation is failing. You are seeing data in the Hive warehouse
> because you specified that as your target dir in the sqoop import
> (--target-dir /user/hive/warehouse/new). It is recommended to use a target
> dir other than the Hive warehouse dir when doing a sqoop import.
>
> Can you post the full console log of sqoop with --verbose logging
> enabled? It could give some clues.
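>
> For instance, something along these lines (the connect string, credentials,
> table name and paths below are just placeholders for illustration):
>
> sqoop import --verbose \
>   --connect jdbc:mysql://localhost/mydb \
>   --username myuser -P \
>   --table mytable \
>   --target-dir /user/yogesh/sqoop/mytable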
>
>
> On the second issue, you already have your data in
> '/user/hive/warehouse/letstry/', which is the location of the hive table
> 'letstry'. Why do you still want to do a LOAD DATA into it again?
>
> If you are doing a Sqoop import of that data, again it is recommended to
> use a target dir other than the Hive warehouse dir. It will also help you
> avoid some confusion.
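>
> For example (the connection details, table name and paths here are only
> placeholders), import into a staging dir first and then load from there:
>
> sqoop import --connect jdbc:mysql://localhost/mydb --username myuser -P \
>   --table letstry --target-dir /user/yogesh/staging/letstry
>
> hive -e "LOAD DATA INPATH '/user/yogesh/staging/letstry' INTO TABLE letstry;"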
>
>
> ________________________________
> From: yogesh dhari <[EMAIL PROTECTED]>
> To: hive request <[EMAIL PROTECTED]>
> Sent: Wednesday, July 4, 2012 1:40 PM
> Subject: RE: Hive upload
>
>
> Hi Bejoy,
>
> Thank you very much for your response,
>
> 1)
>
> A) When I run the command 'show tables' it doesn't show the newhive table.
> B) Yes, the newhive directory is present in /user/hive/warehouse and also
> contains the values imported from the RDBMS.
>
> Please suggest an example of the sqoop import command you would use in
> this case.
>
>
> 2)
>
> A) Here is the command
>
> describe formatted letstry;
> OK
> # col_name                data_type               comment
>
> rollno                  int                     None
> name                    string                  None
> numbr                   int                     None
> sno                     int                     None
>
> # Detailed Table Information
> Database:               default
> Owner:                  mediaadmin
> CreateTime:             Tue Jul 03 17:06:27 GMT+05:30 2012
> LastAccessTime:         UNKNOWN
> Protect Mode:           None
> Retention:              0
> Location:               hdfs://localhost:9000/user/hive/warehouse/letstry
> Table Type:             MANAGED_TABLE
> Table Parameters:
>     transient_lastDdlTime    1341315550
>
> # Storage Information
> SerDe Library:          org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
> InputFormat:            org.apache.hadoop.mapred.TextInputFormat
> OutputFormat:           org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
> Compressed:             No
> Num Buckets:            -1
> Bucket Columns:         []
> Sort Columns:           []
> Storage Desc Params:
>     serialization.format    1
> Time taken: 0.101 seconds
>
>
> B) hadoop dfs -ls /user/hive/warehouse/letstry/
> Found 1 items
> -rw-r--r--   1 mediaadmin supergroup         17 2012-07-02 12:05
> /user/hive/warehouse/letstry/part-m-00000
>
> hadoop dfs -cat /user/hive/warehouse/letstry/part-m-00000

Best Regards,
Ruslan Al-Fakikh