Mark Kerzner 2012-10-10, 02:47
Ted Dunning 2012-10-10, 14:58
Lance Norskog 2012-10-11, 02:15
On Wed, Oct 10, 2012 at 10:15 PM, Lance Norskog <[EMAIL PROTECTED]> wrote:
> In the LucidWorks Big Data product, we handle this with a reducer that sends documents to a SolrCloud cluster. This way the index files are not managed by Hadoop.
I'm curious if you've gotten that to work with a decent-sized (e.g. >
250 node) cluster? Even a trivial cluster seems to crush SolrCloud
from a few months ago at least...
> ----- Original Message -----
> | From: "Ted Dunning" <[EMAIL PROTECTED]>
> | To: [EMAIL PROTECTED]
> | Cc: "Hadoop User" <[EMAIL PROTECTED]>
> | Sent: Wednesday, October 10, 2012 7:58:57 AM
> | Subject: Re: Hadoop/Lucene + Solr architecture suggestions?
> | I prefer to create indexes in the reducer personally.
> | Also you can avoid the copies if you use an advanced hadoop-derived
> | distro. Email me off list for details.
> | Sent from my iPhone
> | On Oct 9, 2012, at 7:47 PM, Mark Kerzner <[EMAIL PROTECTED]>
> | wrote:
> | > Hi,
> | >
> | > if I create a Lucene index in each mapper, locally, then copy them
> | > into HDFS under /jobid/mapid1, /jobid/mapid2, and then in the reducers
> | > copy them to some Solr machine (perhaps even merging), does such
> | > an architecture make sense for creating a searchable index with
> | > Hadoop?
> | >
> | > Are there links for similar architectures and questions?
> | >
> | > Thank you. Sincerely,
> | > Mark
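The flow Mark describes (per-mapper local indexes copied into HDFS, merged on a reducer, then handed to Solr) can be sketched with stock tooling. This is only an illustration under assumptions: the HDFS paths, jar names, Solr host, and core name below are hypothetical placeholders, not anything from the thread.

```shell
# Sketch of the merge-and-publish step; all paths/hosts are hypothetical.

# 1. Pull the per-mapper indexes out of HDFS onto the reducer's local disk:
hadoop fs -get /jobid/mapid1 /tmp/idx1
hadoop fs -get /jobid/mapid2 /tmp/idx2

# 2. Merge them with Lucene's IndexMergeTool (shipped in the lucene-misc jar);
#    usage is: IndexMergeTool <merged-output-dir> <index1> <index2> ...
java -cp lucene-core.jar:lucene-misc.jar \
  org.apache.lucene.misc.IndexMergeTool /tmp/merged /tmp/idx1 /tmp/idx2

# 3. Hand the merged index to Solr via the CoreAdmin MERGEINDEXES action
#    (the index directory must be readable by the Solr host):
curl 'http://solrhost:8983/solr/admin/cores?action=mergeindexes&core=collection1&indexDir=/tmp/merged'
```

Doing the merge in the reducer (as Ted suggests) rather than post hoc avoids the extra copy of unmerged segments, at the cost of reducer memory and local disk.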
M. C. Srivas 2012-10-11, 05:04
JAY 2012-10-11, 05:38
Ted Dunning 2012-10-11, 06:13
Lance Norskog 2012-10-12, 05:20
Ted Dunning 2012-10-12, 05:23
Mark Kerzner 2012-10-11, 02:26
Ivan Frain 2012-10-10, 05:20