OK, here are the problems: Thrift has frame size limits, and Thrift has to
buffer rows into memory.
Since the Hive Thrift server has its own heap, that heap needs to be big in
this case. Your client needs a big heap size as well.
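A minimal sketch of raising both heaps; the exact file and knobs vary by Hive version and distro, and the 8 GB figure is just an assumption for a very wide row, not a recommendation from this thread:

```shell
# hive-env.sh (sketch): heap for the Hive / Hive Thrift server JVM, in MB.
# ASSUMPTION: 8192 MB is enough for your 15K-column rows; tune as needed.
export HADOOP_HEAPSIZE=8192

# For a standalone Thrift/JDBC client, raise the client JVM heap directly
# (MyHiveClient is a hypothetical class name):
#   java -Xmx8g -cp ... MyHiveClient
```

Both sides matter: the server buffers the fetched rows before shipping them, and the client materializes them again on receipt.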
The way to do this query, if it is possible at all, may be to turn the row
lateral, potentially by treating it as a list, though that will make queries
on it awkward.
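One hedged sketch of what "turning the row lateral" could look like: collapse the thousands of scalar columns into a single ARRAY column and index into it by position. The table and column names here (`features_narrow`, `id`, `vals`) are hypothetical, not from the thread:

```shell
# Sketch: one array<double> column instead of 15K scalar columns.
hive -e "
  CREATE TABLE features_narrow (id STRING, vals ARRAY<DOUBLE>);

  -- Queries become positional and awkward: the 123rd former 'column'
  -- is now vals[122] (Hive arrays are 0-indexed).
  SELECT id, vals[122] FROM features_narrow;
"
```

Each row is then one id plus one array value, which keeps the Thrift row structure small even though the payload is the same size.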
On Thursday, January 30, 2014, Stephen Sprague <[EMAIL PROTECTED]> wrote:
rcfile, orc or custom)? "show create table <table>" would yield that.
don't select all the columns that could very well limit the size of the
"row" being returned and hence the size of the internal ArrayList. OTOH,
if you're using "select *", um, you have my sympathies. :)
side. And, well, sure looks like a memory issue :) rather than an
inherent Hive limitation, that is.
be interested in knowing next is: is this via running Hive in local mode,
correct? (e.g. not through HiveServer1/2). And it looks like it boinks on
array processing, which I assume to be internal code arrays and not Hive
data arrays - your 15K columns are all scalar/simple types, correct? It's
clearly fetching results and looks to be trying to store them in a Java array
- and not just one row but a *set* of rows (ArrayList).
the controller of that. I woulda hoped it was called something like
"HIVE_HEAPSIZE". :) Anyway, can't hurt to try.
is it 10K? is it 5K? The idea is to confirm it's _the number of columns_
that is causing the memory to blow and not some other artifact unbeknownst
otherwise control the number of rows stored at once in Hive's internal
buffer. I'd snoop around too.
list will know something or another about this. :)
or hive 0.10.0
hyperthreading so 4 cores per machine) + 16Gb Ram each
(ProcessFunction.java:process(41)) - Internal error processing FetchResults
Sorry this was sent from mobile. Will do less grammar and spell check than