MapReduce, mail # user - RE: How to configure mapreduce archive size?


Earlier messages in this thread:
Xia_Yang@... 2013-04-10, 20:59
Arun C Murthy 2013-04-10, 21:44
Re: How to configure mapreduce archive size?
Hemanth Yamijala 2013-04-11, 07:28
Could you paste the contents of the directory? Not sure whether that will
help, but just giving it a shot.

What application are you using? Is it custom MapReduce jobs in which you
use DistributedCache (I guess not)?

Thanks
Hemanth
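To see what the directory is holding before lowering any limit, a quick sketch (the path is the one reported in this thread; `CACHE_DIR` is just an illustrative variable, not a Hadoop setting):

```shell
# Inspect the DistributedCache directory discussed in this thread.
# Falls back to a message if the directory does not exist on this node.
CACHE_DIR="${CACHE_DIR:-/tmp/hadoop-root/mapred/local/archive}"
if [ -d "$CACHE_DIR" ]; then
  du -sh "$CACHE_DIR"                                      # total size
  du -sk "$CACHE_DIR"/* 2>/dev/null | sort -rn | head -10  # ten largest entries
else
  echo "no cache directory at $CACHE_DIR"
fi
```

This also shows which entries dominate the cache, which helps decide whether some framework is populating it transparently.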
On Thu, Apr 11, 2013 at 3:34 AM, <[EMAIL PROTECTED]> wrote:

> Hi Arun,
>
> I stopped my application, then restarted my HBase (which includes Hadoop).
> After that I started my application. After one evening, my
> /tmp/hadoop-root/mapred/local/archive grew to more than 1 GB. It does not
> work.
>
> Is this the right place to change the value?
>
> "local.cache.size" in file core-default.xml, which is in
> hadoop-core-1.0.3.jar
>
> Thanks,
>
> Jane
>
>
> *From:* Arun C Murthy [mailto:[EMAIL PROTECTED]]
> *Sent:* Wednesday, April 10, 2013 2:45 PM
> *To:* [EMAIL PROTECTED]
> *Subject:* Re: How to configure mapreduce archive size?
>
>
> Ensure no jobs are running (the cache limit applies only to non-active
> cache files), then check after a little while (it takes some time for the
> cleaner thread to kick in).
>
> Arun
>
> On Apr 11, 2013, at 2:29 AM, <[EMAIL PROTECTED]> <[EMAIL PROTECTED]>
> wrote:
>
>
> Hi Hemanth,
>
> For Hadoop 1.0.3, I can only find "local.cache.size" in file
> core-default.xml, which is in hadoop-core-1.0.3.jar. It is not in
> mapred-default.xml.
>
> I updated the value in file core-default.xml and changed it to 500000.
> This is just for my testing purposes. However, the folder
> /tmp/hadoop-root/mapred/local/archive has already grown to more than 1 GB.
> It looks like it is not doing the work. Could you advise if what I did is
> correct?
>
>   <name>local.cache.size</name>
>   <value>500000</value>
>
> Thanks,
>
> Xia
>
>
> *From:* Hemanth Yamijala [mailto:[EMAIL PROTECTED]]
> *Sent:* Monday, April 08, 2013 9:09 PM
> *To:* [EMAIL PROTECTED]
> *Subject:* Re: How to configure mapreduce archive size?
>
>
> Hi,
>
> This directory is used as part of the 'DistributedCache' feature (
> http://hadoop.apache.org/docs/r1.0.4/mapred_tutorial.html#DistributedCache).
> There is a configuration key "local.cache.size" which controls the amount
> of data stored under DistributedCache. The default limit is 10 GB.
> However, the files under this directory cannot be deleted while they are
> in use. Also, some frameworks on Hadoop could be using DistributedCache
> transparently to you.
>
> So you could check what is being stored here and, based on that, lower the
> cache size limit if you feel that will help. The property needs to be set
> in mapred-default.xml.
>
> Thanks
> Hemanth
>
> On Mon, Apr 8, 2013 at 11:09 PM, <[EMAIL PROTECTED]> wrote:
>
>
> Hi,
>
> I am using the Hadoop that is packaged within HBase 0.94.1; it is Hadoop
> 1.0.3. There are some MapReduce jobs running on my server. After some
> time, I found that my folder /tmp/hadoop-root/mapred/local/archive had
> grown to 14 GB.
>
> How can I configure this and limit the size? I do not want to waste my
> space on the archive.
>
> Thanks,
>
> Xia
>
>
> --
> Arun C. Murthy
> Hortonworks Inc.
> http://hortonworks.com/
>
Later messages in this thread:
Xia_Yang@... 2013-04-11, 18:10
Xia_Yang@... 2013-04-11, 20:52
Hemanth Yamijala 2013-04-12, 04:09
Xia_Yang@... 2013-04-16, 17:45
bejoy.hadoop@... 2013-04-16, 18:05
Hemanth Yamijala 2013-04-17, 04:34
Xia_Yang@... 2013-04-17, 18:19
Hemanth Yamijala 2013-04-18, 04:11
Xia_Yang@... 2013-04-19, 00:57
Hemanth Yamijala 2013-04-19, 03:54
Xia_Yang@... 2013-04-23, 00:38