Hive >> mail # user >> Hive running out of memory


Hive running out of memory
I have a table with three levels of partitioning and about 10,000 files (one
file at every 'leaf'). I am using EMR, and the table is stored in S3.
For some reason, Hive cannot even start running a simple query that creates a
local copy of a subset of the big table.

Does this look like an EMR-specific issue, or is there something I could do?
I am thinking about copying all of the data into HDFS first.
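The copy into HDFS can be done with DistCp before re-pointing the table at the new location. A minimal sketch, assuming illustrative bucket and warehouse paths (these are placeholders, not from the original post):

```shell
# Hedged sketch: bulk-copy the partitioned table's data from S3 into HDFS,
# so that Hive lists and reads files locally instead of over S3.
# SRC and DST are assumed/illustrative paths.
SRC="s3n://my-bucket/warehouse/big_table"   # placeholder S3 location
DST="/user/hive/warehouse/big_table"        # placeholder HDFS location
echo "would run: hadoop distcp $SRC $DST"

# Afterwards, a table (or partition locations) would be pointed at $DST,
# e.g. via ALTER TABLE ... SET LOCATION in Hive.
```

DistCp runs the copy as a MapReduce job, so 10,000 small files copy in parallel rather than one at a time.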

Number of reduce tasks is set to 0 since there's no reduce operator
Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
    at java.util.LinkedHashMap.newKeyIterator(LinkedHashMap.java:396)
    at java.util.HashMap$KeySet.iterator(HashMap.java:874)
    at java.beans.java_util_Map_PersistenceDelegate.initialize(MetaData.java:516)
    at java.beans.PersistenceDelegate.initialize(PersistenceDelegate.java:190)
    at java.beans.DefaultPersistenceDelegate.initialize(DefaultPersistenceDelegate.java:393)
    at java.beans.PersistenceDelegate.initialize(PersistenceDelegate.java:190)
    at java.beans.DefaultPersistenceDelegate.initialize(DefaultPersistenceDelegate.java:393)
    at java.beans.PersistenceDelegate.writeObject(PersistenceDelegate.java:100)
    at java.beans.Encoder.writeObject(Encoder.java:54)
    at java.beans.XMLEncoder.writeObject(XMLEncoder.java:257)
    at java.beans.Encoder.writeExpression(Encoder.java:279)
    at java.beans.XMLEncoder.writeExpression(XMLEncoder.java:372)
    at java.beans.PersistenceDelegate.writeObject(PersistenceDelegate.java:97)
    at java.beans.Encoder.writeObject(Encoder.java:54)
    at java.beans.XMLEncoder.writeObject(XMLEncoder.java:257)
    at java.beans.Encoder.writeExpression(Encoder.java:279)
    at java.beans.XMLEncoder.writeExpression(XMLEncoder.java:372)
    at java.beans.DefaultPersistenceDelegate.doProperty(DefaultPersistenceDelegate.java:212)
    at java.beans.DefaultPersistenceDelegate.initBean(DefaultPersistenceDelegate.java:247)
    at java.beans.DefaultPersistenceDelegate.initialize(DefaultPersistenceDelegate.java:395)
    at java.beans.PersistenceDelegate.writeObject(PersistenceDelegate.java:100)
    at java.beans.Encoder.writeObject(Encoder.java:54)
    at java.beans.XMLEncoder.writeObject(XMLEncoder.java:257)
    at java.beans.Encoder.writeExpression(Encoder.java:279)
    at java.beans.XMLEncoder.writeExpression(XMLEncoder.java:372)
    at java.beans.PersistenceDelegate.writeObject(PersistenceDelegate.java:97)
    at java.beans.Encoder.writeObject(Encoder.java:54)
    at java.beans.XMLEncoder.writeObject(XMLEncoder.java:257)
    at java.beans.Encoder.writeExpression(Encoder.java:279)
    at java.beans.XMLEncoder.writeExpression(XMLEncoder.java:372)
    at java.beans.java_util_Map_PersistenceDelegate.initialize(MetaData.java:523)
    at java.beans.PersistenceDelegate.initialize(PersistenceDelegate.java:190)
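The trace shows the error in the "main" thread while java.beans.XMLEncoder serializes the query plan, i.e. the client-side Hive JVM runs out of heap before any map task starts. One thing to try is raising the client heap before launching Hive; a hedged sketch, with illustrative values (the right knob varies by Hadoop/Hive version):

```shell
# Hedged sketch, assumed values: give the Hive/Hadoop *client* JVM more heap,
# since the OOM happens during plan serialization on the client, not in a task.
export HADOOP_HEAPSIZE=2048                          # client heap in MB
export HADOOP_CLIENT_OPTS="-Xmx2g ${HADOOP_CLIENT_OPTS:-}"  # alternative knob
echo "HADOOP_HEAPSIZE=$HADOOP_HEAPSIZE"
```

Separately, restricting the query to the needed partitions (predicates on all three partition columns in the WHERE clause) shrinks the plan that has to be serialized, which may matter as much as the heap size with 10,000 partitions.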