Hive, mail # user - Hive error: Unable to deserialize reduce input key


praveenesh kumar 2012-03-23, 13:05
曹坤 2012-09-07, 02:43
praveenesh kumar 2012-09-07, 04:21
Re: Hive error: Unable to deserialize reduce input key
Navis류승우 2012-09-07, 05:14
I've tried to deserialize your data.

0 = bigint = -6341068275337623706
1 = string = TTFVUFHFH
2 = int = -1037822201
3 = int = -1467607277
4 = int = -1473682089
5 = int = -1337884091
6 = string = I
7 = string = IVH ISH
8 = int = -1321908327
9 = int = -1475321453
10 = int = -1476394752
11 = string = sv
12 = string = UUQ
13 = string = THTPW
14 = string = VU
15 = string = IQQIH
16 = string = S
17 = string = VFH
18 = string = PP
19 = string = PRQWIRUV
20 = string = H
21 = double = NaN
Exception in thread "main" java.io.EOFException

Can you identify which columns contain invalid values?
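The delimiter-count sanity check that praveenesh describes in the quoted reply below can be sketched in Python. The tab delimiter and the 25-column row width are assumptions for illustration, not facts from the thread; adjust both to match the actual table.

```python
import sys

# Assumptions (not from the thread): tab-delimited rows, 25 columns.
DELIM = b"\t"

def scan_rows(lines, expected_delims=24):
    """Return (lineno, message) pairs for rows whose delimiter count is
    wrong or that contain control bytes other than the tab itself."""
    issues = []
    for lineno, raw in enumerate(lines, 1):
        line = raw.rstrip(b"\r\n")
        ndelim = line.count(DELIM)
        if ndelim != expected_delims:
            issues.append((lineno, f"{ndelim} delimiters, expected {expected_delims}"))
        # Flag invisible control characters that vi would show but most
        # editors hide (tab 0x09 is the legitimate delimiter here).
        ctrl = sorted({b for b in line if b < 0x20 and b != 0x09})
        if ctrl:
            issues.append((lineno, f"control bytes {ctrl}"))
    return issues

if __name__ == "__main__":
    # Usage: python check_rows.py data.txt  (hypothetical filename)
    with open(sys.argv[1], "rb") as f:
        for lineno, msg in scan_rows(f):
            print(f"line {lineno}: {msg}")
```

The same loop body can be dropped into a Hadoop Streaming mapper to scan large files in parallel, emitting only the offending line numbers.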

2012/9/7 praveenesh kumar <[EMAIL PROTECTED]>

> I am not sure what the issue can be... I hit it a long time back and got no
> response. I tried these things:
>
> 1. Increased the child JVM heap size.
> 2. Reduced the number of reducers for the job.
> 3. Checked that the disks were not filling up while running the query.
> 4. Checked my data again. Many times the error comes from dirty data. One
> easy way to check whether the data is clean is to count the number of
> delimiters per row. Sometimes there are control characters, rather than
> spaces, that we can't see in normal text editors; use vi to check for
> those as well. A simple Python Hadoop Streaming or Pig script can help
> you do that.
>
> Probably someone in the community can give a better answer to the exact
> problem.
>
> I hope it helps.
>
> Regards,
> Praveenesh
>
>
>
>
> On Fri, Sep 7, 2012 at 8:13 AM, 曹坤 <[EMAIL PROTECTED]> wrote:
>
>> Hi praveenesh kumar:
>> I am getting the same error today.
>> Do you have any solution?
>>
>>
>> 2012/3/23 praveenesh kumar <[EMAIL PROTECTED]>
>>
>>> Hi all,
>>>
>>> I am getting the following error when I try to run a select with a
>>> group by operation. I am grouping on around 25 columns.
>>>
>>> java.lang.RuntimeException:
>>> org.apache.hadoop.hive.ql.metadata.HiveException:
>>> Hive Runtime Error: Unable to deserialize reduce input key from
>>> x1x128x0x0x0x0x1x254x174x1x49x55x52x46x50x53x52x46x49x46x48x0x1x142x145x93x11x1x128x87x4x73x1x128x32x107x137x1x130x165x214x131x1x49x0x1x51x48x48x120x53x48x0x1x132x11x106x192x1x128x13x178x250x1x128x0x1x0x1x78x86x0x1x55x48x50x0x1x56x57x48x53x52x0x1x50x48x54x0x1x49x51x51x55x51x0x1x48x0x1x48x46x48x0x1x48x0x1x49x55x53x55x52x54x56x55x0x1x48x0x1x0x1x0x1x0x1x0x255
>>> ...
>>>
>>>
>>> Detailed logs...
>>>
>>> 2012-03-23 06:31:42,187 FATAL ExecReducer:
>>> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error:
>>> Unable to deserialize reduce input key from
>>> x1x128x0x0x0x0x0x87x66x1x54x54x46x56x55x46x48x46x48x0x1x142x124x217x207x1x128x86x17x13x1x128x29x65x57x1x130x141x82x245x1x49x0x1x49x56x48x120x49x53x48x0x1x131x235x47x199x1x128x10x161x93x1x128x0x1x0x1x73x76x0x1x55x55x51x0x1x54x48x54x50x57x0x1x56x55x0x1x49x51x51x49x48x0x1x53x0x1x56x46x48x0x1x50x50x0x1x50x52x51x57x49x52x55x56x0x1x48x0x1x0x1x0x1x0x1x0x255
>>> with properties
>>> {columns=_col0,_col1,_col2,_col3,_col4,_col5,_col6,_col7,_col8,_col9,_col10,_col11,_col12,_col13,_col14,_col15,_col16,_col17,_col18,_col19,_col20,_col21,_col22,_col23,_col24,
>>> serialization.sort.order=+++++++++++++++++++++++++,
>>> columns.types=bigint,string,int,int,int,int,string,string,int,int,int,string,string,string,string,string,string,string,string,string,string,double,string,string,double}
>>>                 at
>>> org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:204)
>>>                 at
>>> org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:519)
>>>                 at
>>> org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:420)
>>>                 at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
>>>                 at java.security.AccessController.doPrivileged(Native
>>> Method)
>>>                 at javax.security.auth.Subject.doAs(Subject.java:396)
>>>                 at
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
>>>                 at org.apache.hadoop.mapred.Child.main(Child.java:249)