Hive >> mail # user >> Creating Indexes again


Peter Marron 2012-11-23, 08:47
Re: Creating Indexes again
Try increasing the ulimit on your Hadoop cluster, and also increase the memory for both the map and reduce tasks by setting them in Hive:

set mapred.job.map.memory.mb=6000;
set mapred.job.reduce.memory.mb=4000;

You can change these values based on the Hadoop cluster you have set up.
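A minimal sketch of the same idea with the JVM heap raised as well. The mapred.job.*.memory.mb properties only cap a task's total memory for the scheduler; the "Java heap space" error is bounded by the child JVM's -Xmx, so the reducer heap usually needs raising too. The property names below assume MRv1, and the values are examples; on CDH4 with MRv2/YARN the equivalents are mapreduce.map.memory.mb, mapreduce.reduce.memory.mb, and mapreduce.{map,reduce}.java.opts:

-- Cap the task memory seen by the scheduler (MRv1 names; example values).
set mapred.job.map.memory.mb=6000;
set mapred.job.reduce.memory.mb=4000;
-- Raise the child JVM heap, which is what the "Java heap space" error hits;
-- keep -Xmx comfortably below the task memory cap above.
set mapred.reduce.child.java.opts=-Xmx3500m;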

On Fri, Nov 23, 2012 at 2:17 PM, Peter Marron <[EMAIL PROTECTED]> wrote:

> Hi,
>
> I’m trying to create indexes in Hive, and I’ve switched
> to using CDH-4. The creation of the index is failing and
> it’s pretty obvious that the reducers are running out of
> heap space. When I use the web interface for the
> “Hadoop reduce task list” I can find this entry:
>
> Error: Java heap space
> Error: GC overhead limit exceeded
> org.apache.hadoop.io.SecureIOUtils$AlreadyExistsException: EEXIST: File exists
>         at org.apache.hadoop.io.SecureIOUtils.createForWrite(SecureIOUtils.java:178)
>         at org.apache.hadoop.mapred.TaskLog.writeToIndexFile(TaskLog.java:303)
>         at org.apache.hadoop.mapred.TaskLog.syncLogs(TaskLog.java:376)
>         at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
>         at org.apache.hadoop.mapred.Child.main(Child.java:262)
> Caused by: EEXIST: File exists
>         at org.apache.hadoop.io.nativeio.NativeIO.open(Native Method)
>         at org.apache.hadoop.io.SecureIOUtils.createForWrite(SecureIOUtils.java:172)
>         ... 7 more
>
> Error: GC overhead limit exceeded
>
> If this e-mail shouldn’t be here and should only be on
> a Cloudera mailing list, please re-direct me.
>
> Thanks in advance.
>
> Peter Marron
> Trillium Software UK Limited
>
> Tel : +44 (0) 118 940 7609
> Fax : +44 (0) 118 940 7699
> E: [EMAIL PROTECTED]
>

--
Nitin Pawar
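For reference, the index-creation flow in question follows this pattern; the table, column, and index names below are hypothetical, and it is the ALTER INDEX ... REBUILD step that launches the MapReduce job whose reducers run out of heap:

-- Hypothetical names; CompactIndexHandler is Hive's built-in compact index handler.
CREATE INDEX idx_example ON TABLE example_table (example_col)
AS 'org.apache.hadoop.hive.ql.index.compact.CompactIndexHandler'
WITH DEFERRED REBUILD;
-- Rebuilding populates the index via a MapReduce job (the failing step here).
ALTER INDEX idx_example ON example_table REBUILD;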
Peter Marron 2012-11-23, 08:58
Nitin Pawar 2012-11-23, 09:30