Hadoop >> mail # user >> Can anyone help me with large distributed cache files?


Re: Can anyone help me with large distributed cache files?
Hi Sheng,
By default the distributed cache size limit is 10 GB, so your 2 GB file can
be placed in the distributed cache without any changes. If you need more
space, set local.cache.size in mapred-site.xml to a larger value (the value
is specified in bytes).
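As a sketch, raising the limit to 20 GB might look something like this in mapred-site.xml (the value is in bytes; 21474836480 = 20 * 1024^3 — adjust to whatever limit you actually need):

```xml
<!-- mapred-site.xml: raise the local distributed cache limit to 20 GB.
     local.cache.size is specified in bytes; the default is 10737418240 (10 GB). -->
<property>
  <name>local.cache.size</name>
  <value>21474836480</value>
</property>
```

Note that this limit governs local disk space on each TaskTracker node, not task memory; a large cached file does not by itself require a larger task heap.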

On Tue, Jun 12, 2012 at 5:22 AM, Sheng Guo <[EMAIL PROTECTED]> wrote:

> Hi,
>
> Sorry to bother you all, this is my first question here in hadoop user
> mailing list.
> Can anyone help me with the memory configuration if distributed cache is
> very large and requires more memory? (2GB)
>
> And also in this case when distributed cache is very large, how do we
> handle this normally? By configure something to give more memory? or this
> should be avoided?
>
> Thanks
>

--
https://github.com/zinnia-phatak-dev/Nectar