RE: How to configure mapreduce archive size?
Hi Hemanth and Bejoy KS,

I have tried both mapred-site.xml and core-site.xml. Neither works. I set the value to 50K just for testing purposes, but the folder size has already reached 900M. As in your email, "After they are done, the property will help cleanup the files due to the limit set." How frequently is the cleanup task triggered?

Regarding job.xml, I cannot use the JT web UI to find it. It seems that when Hadoop is packaged within HBase, this is disabled. I only run HBase jobs. The HBase people suggested that I get help from the Hadoop mailing list. I will contact them again.

Thanks,

Jane

From: Hemanth Yamijala [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, April 16, 2013 9:35 PM
To: [EMAIL PROTECTED]
Subject: Re: How to configure mapreduce archive size?

You can limit the size by setting local.cache.size in the mapred-site.xml (or core-site.xml if that works for you). I mistakenly mentioned mapred-default.xml in my last mail - apologies for that. However, please note that this does not prevent whatever is writing into the distributed cache from creating those files when they are required. After they are done, the property will help cleanup the files due to the limit set.
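For reference, the override could look like the fragment below in mapred-site.xml (or core-site.xml). The value is in bytes; 10737418240 (10 GB) is the Hadoop 1.x default, and the 1 GB value here is only an illustrative example, not a recommendation:

```xml
<!-- Cap the TaskTracker's local distributed-cache directory.
     Value is in BYTES; Hadoop 1.x defaults to 10737418240 (10 GB).
     1073741824 (1 GB) below is just an example value. -->
<property>
  <name>local.cache.size</name>
  <value>1073741824</value>
</property>
```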

That's why I am more keen on finding out what is using the files in the distributed cache. It may be useful to ask on the HBase list as well whether the APIs you are using are creating the files you mention (assuming you are only running HBase jobs on the cluster and nothing else).

Thanks
Hemanth

On Tue, Apr 16, 2013 at 11:15 PM, <[EMAIL PROTECTED]> wrote:
Hi Hemanth,

I did not explicitly use DistributedCache in my code, and I did not use any command-line arguments like -libjars either.

Where can I find job.xml? I am using the HBase MapReduce API and am not setting any job.xml myself.

The key point is that I want to limit the size of /tmp/hadoop-root/mapred/local/archive. Could you help?

Thanks.

Xia

From: Hemanth Yamijala [mailto:[EMAIL PROTECTED]]
Sent: Thursday, April 11, 2013 9:09 PM

To: [EMAIL PROTECTED]
Subject: Re: How to configure mapreduce archive size?

TableMapReduceUtil has APIs like addDependencyJars which use the DistributedCache. I don't think you are explicitly using that. Are you using any command-line arguments like -libjars when launching the MapReduce job? Alternatively, you can check the job.xml of the launched MR job to see if it has set properties with prefixes like mapred.cache. If nothing is set there, it would seem that some other process or user is adding jars to the DistributedCache when using the cluster.
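One way to do that check from the command line is to grep the job.xml for cache-related property names. The sample job.xml written below is hypothetical, just to make the snippet self-contained; on a real cluster you would point grep at the job.xml of the launched job (e.g. saved from the JT web UI or from the JobTracker's local dir):

```shell
# Create a tiny sample job.xml purely for this demo (hypothetical contents).
cat > job.xml <<'EOF'
<configuration>
  <property><name>mapred.cache.files</name><value>hdfs:///libs/a.jar</value></property>
  <property><name>mapred.job.name</name><value>purge</value></property>
</configuration>
EOF

# List any DistributedCache-related properties that were set.
grep -o 'mapred\.cache\.[a-z.]*' job.xml | sort -u
```

If the grep prints nothing, the job itself did not populate the distributed cache.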

Thanks
hemanth

On Thu, Apr 11, 2013 at 11:40 PM, <[EMAIL PROTECTED]> wrote:
Hi Hemanth,

Attached are some sample folders from my /tmp/hadoop-root/mapred/local/archive. There are some jar and class files inside.

My application uses a MapReduce job to purge old HBase data. I am using the basic HBase MapReduce API to delete rows from an HBase table. I do not specify the distributed cache. Maybe HBase uses it?

Some code here:

       // job and timestamp are set up earlier in the method, e.g.:
       //   Configuration conf = HBaseConfiguration.create();
       //   Job job = new Job(conf, "purge");
       //   long timestamp = ...;  // purge everything older than this
       Scan scan = new Scan();
       scan.setCaching(500);        // 1 is the default in Scan, which is bad for MapReduce jobs
       scan.setCacheBlocks(false);  // don't set to true for MR jobs
       scan.setTimeRange(Long.MIN_VALUE, timestamp);
       // set other scan attrs
       Date date = new Date();      // the purge start time
       TableMapReduceUtil.initTableMapperJob(
             tableName,          // input table
             scan,               // Scan instance to control CF and attribute selection
             MapperDelete.class, // mapper class
             null,               // mapper output key
             null,               // mapper output value
             job);

       job.setOutputFormatClass(TableOutputFormat.class);
       job.getConfiguration().set(TableOutputFormat.OUTPUT_TABLE, tableName);
       job.setNumReduceTasks(0);

       boolean b = job.waitForCompletion(true);

From: Hemanth Yamijala [mailto:[EMAIL PROTECTED]]
Sent: Thursday, April 11, 2013 12:29 AM

To: [EMAIL PROTECTED]
Subject: Re: How to configure mapreduce archive size?

Could you paste the contents of the directory? Not sure whether that will help, but just giving it a shot.

What application are you using? Is it custom MapReduce jobs in which you use the distributed cache (I guess not)?

Thanks
Hemanth

On Thu, Apr 11, 2013 at 3:34 AM, <[EMAIL PROTECTED]> wrote:
Hi Arun,

I stopped my application, then restarted my HBase (which includes Hadoop). After that I started my application again. After one evening, my /tmp/hadoop-root/mapred/local/archive had grown to more than 1G. It does not work.

Is this the right place to change the value?

"local.cache.size" in file core-default.xml, which is in hadoop-core-1.0.3.jar

Thanks,

Jane

From: Arun C Murthy [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, April 10, 2013 2:45 PM

To: [EMAIL PROTECTED]
Subject: Re: How to configure mapreduce archive size?

Ensure no jobs are running (the cache limit applies only to non-active cache files), then check after a little while (it takes some time for the cleaner thread to kick in).
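A quick way to verify whether the cleaner actually ran is to measure the archive directory before and after. The path is the one from this thread; the `mkdir -p` only makes the snippet safe to run on a machine where the directory does not yet exist:

```shell
# Check the local archive size before and after the cleaner thread runs.
ARCHIVE=/tmp/hadoop-root/mapred/local/archive
mkdir -p "$ARCHIVE"   # demo safety only; Hadoop creates this itself
du -sh "$ARCHIVE"
# ...wait for all jobs to finish and the cleaner thread to kick in, then:
du -sh "$ARCHIVE"
```

If the second number does not eventually drop below the configured limit, the files are likely still considered active by a running or stuck job.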

Arun

On Apr 11, 2013, at 2:29 AM, <[EMAIL PROTECTED]> wrote:

Hi Hemanth,

For Hadoop 1.0.3, I can only find "local.cache.size" in core-default.xml, which is inside hadoop-core-1.0.3.jar. It is not in mapred-default.xml.

I updated the value in core-default.xml to 500000. This is just for my testing purposes. However, the folder /tmp/hadoop-root/mapred/local/archive has already grown to more than 1G, so it looks like it is not working. Could you advise whether what I did is correct?

  <property>
    <name>local.cache.size</name>
    <value>500000</value>
  </property>

Thanks,