There's always a use case out there that stretches the imagination, isn't
there? Gotta love it.

First things first: can you share the error message, the Hive version, and
the number of nodes in your cluster?

Then a couple of things come to mind. Might you consider pivoting the
data so that one row of 15K columns becomes 15K rows of, say, 3
columns (id, column_name, column_value) before you even load it into Hive?
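A minimal sketch of that pivot, purely illustrative (the function and
variable names here are my own, not anything from your pipeline):

```python
# Hypothetical sketch: turn one wide row (id + many columns) into
# (id, column_name, column_value) triples before loading into Hive.

def pivot_row(row_id, wide_row):
    """Yield one (id, column_name, column_value) triple per cell."""
    for column_name, column_value in wide_row.items():
        yield (row_id, column_name, column_value)

# Tiny example standing in for a 15K-column row.
wide_row = {"col_0001": 42, "col_0002": "foo", "col_0003": 3.14}
long_rows = list(pivot_row("row-1", wide_row))
# Each original cell is now its own row, ready for a 3-column Hive table.
```

The same reshaping could be done in whatever ETL tool you already use;
the point is just that a tall, narrow table sidesteps Hive's per-column
overhead on very wide rows.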

The other thing is, when I hear 15K columns the first thing I think of is
HBase (their motto is millions of columns and billions of rows).

Anyway, let's see what you've got for the first question! :)

On Tue, Jan 28, 2014 at 3:20 AM, David Gayou <[EMAIL PROTECTED]> wrote: