Hive user mailing list: Re: [SQLWindowing] Windowing function output path syntax (#26)


Thread:
    neelesh gadhia    2013-02-22, 22:05
    Butani, Harish    2013-02-22, 22:44
    neelesh gadhia    2013-02-22, 22:55
    Butani, Harish    2013-02-23, 06:40
    neelesh gadhia    2013-02-23, 07:00
    neelesh gadhia    2013-02-25, 04:26
    Butani, Harish    2013-02-25, 05:22
    neelesh gadhia    2013-02-25, 06:01
Re: [SQLWindowing] Windowing function output path syntax (#26)
How are you setting the value? It needs to be set in bytes.
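Since the property is read as a raw byte count, setting it to 16 means 16 bytes, not 16 MB. A minimal, purely illustrative Python helper for printing the intended byte values (the helper name and loop are hypothetical, not part of Hive):

```python
# The property is read as a raw byte count, so an intended "16 MB" must be
# spelled out in bytes. Helper and values here are illustrative only.
def mb_to_bytes(mb):
    return mb * 1024 * 1024

for mb in (16, 32):
    print(f"set hive.ptf.partition.persistence.memsize={mb_to_bytes(mb)};")
```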

From: neelesh gadhia <[EMAIL PROTECTED]>
Reply-To: neelesh gadhia <[EMAIL PROTECTED]>
Date: Sunday, February 24, 2013 10:01 PM
To: SAP SAP <[EMAIL PROTECTED]>
Cc: "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>, "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
Subject: Re: [SQLWindowing] Windowing function output path syntax (#26)

Tried 16 MB and 32 MB and got a different error.

hive> Set hive.ptf.partition.persistence.memsize=16;
hive> select mid, tdate, tamt,sum(tamt) as com_sum over (rows between unbounded preceding and current row)
    > from t_enc
    > distribute by mid
    > sort by mid, tdate;
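The window clause asks for a running total of tamt over rows from unbounded preceding to the current row, within each mid partition ordered by tdate. A minimal Python sketch of that semantics on made-up rows (the sample data and function name are hypothetical, not taken from the thread):

```python
# Toy rows standing in for t_enc: (mid, tdate, tamt). Illustrative only.
from itertools import accumulate

rows = [("m1", "2013-01-01", 10.0), ("m1", "2013-01-02", 5.0),
        ("m2", "2013-01-01", 7.0)]

def running_sums(rows):
    """Per-mid cumulative sum of tamt, ordered by tdate within each mid."""
    by_mid = {}
    for mid, tdate, tamt in sorted(rows):  # lexicographic = (mid, tdate) order
        by_mid.setdefault(mid, []).append((tdate, tamt))
    out = []
    for mid, items in by_mid.items():
        csums = accumulate(tamt for _, tamt in items)
        for (tdate, tamt), com_sum in zip(items, csums):
            out.append((mid, tdate, tamt, com_sum))
    return out
```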

1.TS :
RowResolver::
    columns:[t_enc.mid, t_enc.tdate, t_enc.tamt, t_enc.BLOCK__OFFSET__INSIDE__FILE, t_enc.INPUT__FILE__NAME]
    Aliases:[
        t_enc:[mid -> mid, tdate -> tdate, tamt -> tamt, block__offset__inside__file -> BLOCK__OFFSET__INSIDE__FILE, input__file__name -> INPUT__FILE__NAME
    ]
    columns mapped to expressions:[
    ]

2.SEL :
RowResolver::
    columns:[t_enc.mid, t_enc.tdate, t_enc.tamt, t_enc.BLOCK__OFFSET__INSIDE__FILE, t_enc.INPUT__FILE__NAME]
    Aliases:[
        t_enc:[mid -> mid, tdate -> tdate, tamt -> tamt, block__offset__inside__file -> BLOCK__OFFSET__INSIDE__FILE, input__file__name -> INPUT__FILE__NAME
    ]
    columns mapped to expressions:[
    ]

3.RS :
RowResolver::
    columns:[t_enc.mid, t_enc.tdate, t_enc.tamt, t_enc.BLOCK__OFFSET__INSIDE__FILE, t_enc.INPUT__FILE__NAME]
    Aliases:[
        t_enc:[mid -> mid, tdate -> tdate, tamt -> tamt, block__offset__inside__file -> BLOCK__OFFSET__INSIDE__FILE, input__file__name -> INPUT__FILE__NAME
    ]
    columns mapped to expressions:[
    ]

4.EX :
RowResolver::
    columns:[t_enc._col0, t_enc._col1, t_enc._col2, t_enc._col3, t_enc._col4]
    Aliases:[
        t_enc:[mid -> _col0, tdate -> _col1, tamt -> _col2, block__offset__inside__file -> _col3, input__file__name -> _col4
    ]
    columns mapped to expressions:[
    ]

5.PTF :
RowResolver::
    columns:[<null>.com_sum, t_enc._col0, t_enc._col1, t_enc._col2, t_enc._col3, t_enc._col4]
    Aliases:[
        :[(tok_function sum (tok_table_or_col tamt) (tok_windowspec (tok_windowrange (preceding unbounded) current))) -> com_sum
        t_enc:[mid -> _col0, tdate -> _col1, tamt -> _col2, block__offset__inside__file -> _col3, input__file__name -> _col4
    ]
    columns mapped to expressions:[
        (TOK_FUNCTION sum (TOK_TABLE_OR_COL tamt) (TOK_WINDOWSPEC (TOK_WINDOWRANGE (preceding unbounded) current))) -> (TOK_FUNCTION sum (TOK_TABLE_OR_COL tamt) (TOK_WINDOWSPEC (TOK_WINDOWRANGE (preceding unbounded) current)))
    ]

6.SEL :
RowResolver::
    columns:[<null>._col0, <null>._col1, <null>._col2, <null>._col3]
    Aliases:[
        <null>:[mid -> _col0, tdate -> _col1, tamt -> _col2, com_sum -> _col3
    ]
    columns mapped to expressions:[
    ]

7.FS :
RowResolver::
    columns:[<null>._col0, <null>._col1, <null>._col2, <null>._col3]
    Aliases:[
        <null>:[mid -> _col0, tdate -> _col1, tamt -> _col2, com_sum -> _col3
    ]
    columns mapped to expressions:[
    ]

Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201302242150_0002, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201302242150_0002
Kill Command = /usr/local/Cellar/hadoop/1.1.1/libexec/bin/../bin/hadoop job  -kill job_201302242150_0002
Hadoop job information for Stage-1: number of map
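The three knobs the log lists interact roughly as in this sketch; the defaults below are assumptions (approximately the Hive 0.x era defaults), not values read from this cluster:

```python
import math

# Rough sketch of how Hive estimates the reducer count from input size.
# Default values are assumptions, not read from any actual configuration.
def estimate_reducers(input_bytes,
                      bytes_per_reducer=1_000_000_000,  # hive.exec.reducers.bytes.per.reducer
                      max_reducers=999):                # hive.exec.reducers.max
    return max(1, min(max_reducers, math.ceil(input_bytes / bytes_per_reducer)))
```

Setting mapred.reduce.tasks explicitly bypasses this estimate entirely.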
Later replies in this thread:
    neelesh gadhia    2013-02-25, 18:55
    Butani, Harish    2013-02-26, 03:57