One underlying issue is that you would like your tool to be able to detect
which dataset is the largest and how large it is, because with this
information different strategies can be chosen. This implies that your
tool somehow needs to create/keep/update statistics about your datasets.
That is clearly relevant for an external tool (like Hive or Pig), but it
might not make sense to build it into the core mapred/mapreduce, since
that would increase coupling for something which is not necessarily
relevant to the core of the platform.
I know about Hive, and you might be interested in reading more about its
statistics support:
> Statistics such as the number of rows of a table or partition and the
> histograms of a particular interesting column are important in many ways.
> One of the key use cases of statistics is query optimization. Statistics
> serve as the input to the cost functions of the optimizer so that it can
> compare different plans and choose among them. Statistics may sometimes
> meet the purpose of the users' queries. Users can quickly get the answers
> for some of their queries by only querying stored statistics rather than
> firing long-running execution plans. Some examples are getting the quantile
> of the users' age distribution, the top 10 apps that are used by people,
> and the number of distinct sessions.
I don't know if Pig has something similar.
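As a toy illustration of the "answer from stored statistics" idea in the quote above, here is a minimal sketch. All names are hypothetical; this is not Hive's implementation, just the concept of serving a query from metadata instead of scanning data:

```python
# Toy sketch: answer a count query from stored per-partition statistics
# instead of firing a long-running scan. Names are hypothetical.

class StatsStore:
    """Keeps per-partition row counts, updated whenever data is loaded."""

    def __init__(self):
        self.row_counts = {}

    def update(self, partition, records):
        # Record the statistic at load time.
        self.row_counts[partition] = len(records)

    def total_rows(self):
        # Answered purely from metadata -- the data itself is never scanned.
        return sum(self.row_counts.values())

stats = StatsStore()
stats.update("2012-10-01", ["r1", "r2", "r3"])
stats.update("2012-10-02", ["r4", "r5"])
print(stats.total_rows())  # 5
```

The same store could feed a cost-based optimizer: comparing `row_counts` across inputs is exactly the "which dataset is the largest" question raised above.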
On Thu, Oct 25, 2012 at 7:49 AM, Harsh J <[EMAIL PROTECTED]> wrote:
> Hi Sigurd,
> From what I've generally noticed, the client-end frameworks (Hive,
> Pig, etc.) have much more cleverness and efficiency packed into
> their join support than the MR join package, which today probably
> exists more as an example or utility than anything else (but works
> well for what it does).
> Per the code in the join package, no such estimates are made today.
> There is zero use of the DistributedCache; the only decisions made
> are based on the join expression (i.e. selecting which form of
> joining record reader to use).
> Enhancements to this may be accepted though, so feel free to file some
> JIRAs if you have something to suggest/contribute. Hopefully one day
> we could have a unified library between client-end tools for common
> use-cases such as joins, etc. over MR, but there isn't such a thing
> right now (AFAIK).
> On Tue, Oct 23, 2012 at 2:52 PM, Sigurd Spieckermann
> <[EMAIL PROTECTED]> wrote:
> > Interesting to know that Hive and Pig are doing something in this
> > direction. I'm dealing with the Hadoop join-package, which doesn't use
> > the DistributedCache though; rather, it pulls the other partition over
> > the network before launching the map task. This is under the assumption
> > that both partitions are too big to load into the DistributedCache (DC),
> > or that it's just undesirable to use the DC. Is there a similar
> > mechanism implemented in the join-package that considers the sizes of
> > the two partitions to be joined, trying to execute the map task on the
> > datanode that holds the bigger partition?
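The join style Sigurd describes — both partitions too large to cache, so they are merged stream-against-stream — can be sketched as a sort-merge join. This is illustrative only, not the mapred join package code, and it assumes both inputs are already sorted by key with unique keys per side:

```python
# Sketch of a sort-merge join over two partitions that are both too
# large to cache: each input is sorted by key, and the two sorted
# streams are advanced in lockstep.

def merge_join(left, right):
    """left, right: lists of (key, value) pairs, each sorted by key.
    Yields (key, left_value, right_value) for matching keys
    (inner join, assuming unique keys per side for simplicity)."""
    i = j = 0
    while i < len(left) and j < len(right):
        lk, lv = left[i]
        rk, rv = right[j]
        if lk == rk:
            yield (lk, lv, rv)
            i += 1
            j += 1
        elif lk < rk:
            i += 1   # left key has no partner; skip it
        else:
            j += 1   # right key has no partner; skip it

a = [(1, "a1"), (2, "a2"), (4, "a4")]
b = [(2, "b2"), (3, "b3"), (4, "b4")]
print(list(merge_join(a, b)))  # [(2, 'a2', 'b2'), (4, 'a4', 'b4')]
```

Neither side is held in memory beyond the current record, which is why this shape suits two large partitions — at the cost of pulling one partition over the network to wherever the merge runs.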
> > 2012/10/23 Bejoy KS <[EMAIL PROTECTED]>
> >> Hi Sigurd
> >> Mapside joins are efficiently implemented in Hive and Pig. I'm talking
> >> in terms of how mapside joins are implemented in Hive.
> >> In a map-side join, the smaller data set is first loaded into the
> >> DistributedCache. The larger data set is streamed as usual and joined
> >> against the data set in memory: for every record in the larger data
> >> set, a lookup is done in memory on the smaller set, and thereby the
> >> join is performed.
> >> In later versions of Hive, the framework itself intelligently
> >> determines the smaller data set. In older versions you can specify the
> >> smaller data set using hints in the query.
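The map-side hash join Bejoy describes can be sketched as follows. This is illustrative only (not Hive's implementation): the small side is materialized as an in-memory lookup table — in Hadoop it would be shipped to each mapper via the DistributedCache — while the large side is streamed with one probe per record:

```python
# Sketch of a map-side (hash) join: the smaller data set fits in
# memory; the larger data set is streamed and probed record by record.

def map_side_join(small, large_stream):
    """small: iterable of (key, value) pairs that fits in memory.
    large_stream: iterable of (key, value) pairs, streamed.
    Yields (key, large_value, small_value) for matching keys (inner join)."""
    lookup = dict(small)              # build the in-memory side once
    for key, value in large_stream:
        if key in lookup:             # one in-memory probe per record
            yield (key, value, lookup[key])

users = [("u1", "Alice"), ("u2", "Bob")]              # small side
clicks = [("u1", "/home"), ("u3", "/about"), ("u2", "/cart")]  # large side
print(list(map_side_join(users, clicks)))
# [('u1', '/home', 'Alice'), ('u2', '/cart', 'Bob')]
```

No sort or shuffle of the large side is needed, which is why picking the smaller data set correctly (automatically in later Hive versions, via hints in older ones) matters so much.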
> >> Regards
> >> Bejoy KS
> >> Sent from handheld, please excuse typos.