Re: bucketing on a column with millions of unique IDs
Hi Li

The major consideration is the size of each bucket. One bucket corresponds to one file in HDFS, so you should ensure that every bucket is at least one HDFS block in size, or in the worst case that the majority of the buckets are.

So you should base the number of buckets on the data size rather than on the number of rows/records.
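The sizing rule above can be sketched as a quick calculation. The table size and block size below are illustrative assumptions, not figures from this thread; the idea is simply to cap the bucket count so each bucket file stays at or above one HDFS block:

```python
# Sketch: pick a bucket count so each bucket file is at least one HDFS block.
# All concrete figures here are illustrative assumptions.

def max_buckets(table_size_bytes, block_size_bytes):
    """Largest bucket count that keeps every bucket >= one HDFS block,
    assuming userid values hash roughly evenly across buckets."""
    return max(1, table_size_bytes // block_size_bytes)

table_size = 50 * 1024**3   # assume the 110M-row table is ~50 GiB on disk
block_size = 128 * 1024**2  # a common HDFS block size of 128 MiB

print(max_buckets(table_size, block_size))  # 400 -> at most 400 buckets
```

In practice you would pick a round number at or below this cap; skew in the userid distribution would push you toward fewer, larger buckets.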

Regards
Bejoy KS

Sent from remote device, Please excuse typos

-----Original Message-----
From: Echo Li <[EMAIL PROTECTED]>
Date: Wed, 20 Feb 2013 16:19:43
To: <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Subject: bucketing on a column with millions of unique IDs

Hi guys,

I plan to bucket a table by "userid" as I'm going to do intense calculation
using "group by userid". There are about 110 million rows, with 7 million
unique userids, so my question is: what is a good number of buckets for this
scenario, and how do I determine the number of buckets?
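(For reference, a bucketed table like the one described would be declared with a CLUSTERED BY clause; the table name, extra column, and bucket count below are assumptions for illustration only:)

```sql
-- Illustrative only: table name, columns, and bucket count are assumed.
CREATE TABLE user_events (
  userid BIGINT,
  event  STRING
)
CLUSTERED BY (userid) INTO 64 BUCKETS;

-- Needed so that INSERTs actually populate the declared buckets:
SET hive.enforce.bucketing = true;
```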

Any input is appreciated :)

Echo