Hive >> mail # user >> Partition performance

Re: Partition performance
Can you tell how many map tasks there are in each scenario?

If my assumption is correct, you should have 336 map tasks in the first case
and 14 in the second.
It looks like Hive is combining all the small files in a folder and running
one map task for all 24 files in that folder, whereas it runs a separate map
task per file when the files are in different partitions (folders).

You can try to reuse the JVM and see if the response time is similar.

Can you please try the following and let us know how long each strategy takes:

hive> set mapred.job.reuse.jvm.num.tasks=24;

Run your query that has more partitions and see if the response time improves.
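Taken together, the suggestion can be sketched as follows (the `set` line is from this message; the table name, partition column, and date range are assumptions based on Ian's setup later in the thread):

```sql
-- Allow one JVM to be reused for up to 24 map tasks, instead of
-- paying the JVM startup cost once per small file.
set mapred.job.reuse.jvm.num.tasks=24;

-- Illustrative query against the hourly-partitioned table; the
-- predicate on dt prunes the scan to two weeks of hourly partitions.
SELECT COUNT(*)
FROM test1
WHERE dt >= '2013-03-20' AND dt <= '2013-04-02';
```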

On Fri, Apr 5, 2013 at 11:36 AM, Ian <[EMAIL PROTECTED]> wrote:

> Thanks. This is just a test from my local box. So each file is only 1kb. I
> shared the query plans of these two tests at:
> http://codetidy.com/paste/raw/5198
> http://codetidy.com/paste/raw/5199
> Also in the Hadoop log, there is this line for each partition:
> org.apache.hadoop.hive.ql.exec.MapOperator: Adding alias test1 to work
> list for file hdfs://localhost:8020/test1/2011/02/01/01
> Does that mean each partition will become a map task?
> I'm still new to Hive, so I'm wondering what the common strategies are for
> partitioning hourly logs. I know we shouldn't have too many partitions, but
> what's the reason behind that? If I run this on a real cluster, maybe it
> won't perform so differently?
> Thanks.
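One common middle ground for the question above (a sketch, not from this thread; the names mirror Ian's example) is to partition by day rather than by hour, which cuts 3 years of data from 26280 partitions to about 1095:

```sql
-- Hypothetical coarser layout: one partition per day, with that
-- day's log files stored directly under the day's directory.
CREATE EXTERNAL TABLE test_daily (logline STRING)
PARTITIONED BY (dt STRING);

ALTER TABLE test_daily ADD PARTITION (dt='2013-04-02')
LOCATION 'hdfs://localhost:8020/test1/2013/04/02';
```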
>   *From:* Dean Wampler <[EMAIL PROTECTED]>
> *Sent:* Thursday, April 4, 2013 4:28 PM
> *Subject:* Re: Partition performance
> Also, how big are the files in each directory? Are they roughly the size
> of one HDFS block, or a multiple? Lots of small files will mean lots of
> mapper tasks with little to do.
> You can also compare the job tracker console output for each job. I bet
> the slow one has a lot of very short map and reduce tasks, while the faster
> one has fewer tasks that run longer. A rule of thumb is that any one task
> should take 20 seconds or more, to amortize the few seconds of startup
> overhead paid per task.
> In other words, if you think about what's happening at the HDFS and MR
> level, you can learn to predict how fast or slow things will run. Learning
> to read the output of EXPLAIN or EXPLAIN EXTENDED helps with this.
> dean
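The plan reading Dean recommends can be tried like this (the query itself is illustrative; only the table and partition columns come from the thread):

```sql
-- EXPLAIN shows the stage graph for the query; EXTENDED additionally
-- lists the partition paths actually selected, which reveals how many
-- inputs (and hence roughly how many map tasks) the job will read.
EXPLAIN EXTENDED
SELECT COUNT(*) FROM test1 WHERE dt = '2013-04-02' AND hr = 16;
```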
> On Thu, Apr 4, 2013 at 6:25 PM, Owen O'Malley <[EMAIL PROTECTED]> wrote:
> See slide #9 from my Optimizing Hive Queries talk
> http://www.slideshare.net/oom65/optimize-hivequeriespptx . Certainly, we
> will improve it, but for now you are much better off with 1,000 partitions
> than 10,000.
> -- Owen
> On Thu, Apr 4, 2013 at 4:21 PM, Ramki Palle <[EMAIL PROTECTED]> wrote:
> Is it possible for you to send the explain plan of these two queries?
> Regards,
> Ramki.
> On Thu, Apr 4, 2013 at 4:06 PM, Sanjay Subramanian <
> The slowdown is most likely due to the large number of partitions.
> I believe the Hive book authors tell us to be cautious with large numbers
> of partitions :-)  and I abide by that.
>  Users
> Please add your points of view and experiences
>  Thanks
> sanjay
>   From: Ian <[EMAIL PROTECTED]>
> Date: Thursday, April 4, 2013 4:01 PM
> Subject: Partition performance
>   Hi,
> I created 3 years of hourly log files (26280 files in total), and use an
> External Table with partitions to query them. I tried two partition methods.
> 1). Log files are stored as /test1/2013/04/02/16/000000_0 (a directory per
> hour). Use date and hour as partition keys. Add 3 years of directories to
> the table partitions. So there are 26280 partitions.
>         CREATE EXTERNAL TABLE test1 (logline string) PARTITIONED BY (dt
> string, hr int);
>         ALTER TABLE test1 ADD PARTITION (dt='2013-04-02', hr=16) LOCATION