Doc-Partitioned Index with Wildcards
I'm trying to set up a document-partitioned index that can handle ranges of terms or wildcards in queries.

So, instead of querying "the" AND "green" AND "goblin", it could handle "the" AND "green" AND "go*" (so documents containing "goddess", for instance, would also be returned), or a search like "the" AND "d"-"f" AND "goblin" that matches all terms between "d" and "f".

With a typical document-partitioned index, I'm guessing you would first resolve the wildcard into a list of concrete terms and then run the query in the normal fashion. However, that seems rather inefficient. Is there a separate data structure that would be recommended for this sort of additional functionality?
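To make that concrete, here is roughly the kind of thing I had in mind, written against the Accumulo client API: expand the prefix against a separate term-dictionary table, then run the normal IntersectingIterator query once per expanded term. The term table, the index layout, and all the table/term names below are just assumptions for illustration, not something I have running.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map.Entry;

import org.apache.accumulo.core.client.BatchScanner;
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.IteratorSetting;
import org.apache.accumulo.core.client.Scanner;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Range;
import org.apache.accumulo.core.data.Value;
import org.apache.accumulo.core.iterators.user.IntersectingIterator;
import org.apache.accumulo.core.security.Authorizations;
import org.apache.hadoop.io.Text;

public class WildcardExpansionSketch {

  // Step 1: expand "go*" by scanning a hypothetical term-dictionary table
  // whose row is the term itself (table name and layout are placeholders).
  static List<Text> expandPrefix(Connector conn, String termTable, String prefix)
      throws Exception {
    Scanner scanner = conn.createScanner(termTable, new Authorizations());
    scanner.setRange(Range.prefix(prefix));  // every row starting with "go"
    List<Text> terms = new ArrayList<Text>();
    for (Entry<Key,Value> e : scanner) {
      terms.add(e.getKey().getRow());        // e.g. "goblin", "goddess", ...
    }
    return terms;
  }

  // Step 2: run the usual doc-partitioned intersection once per expanded term,
  // combined with the fixed terms; the union of the results is the answer.
  static void query(Connector conn, String indexTable, List<Text> expanded)
      throws Exception {
    for (Text term : expanded) {
      BatchScanner bs = conn.createBatchScanner(indexTable, new Authorizations(), 10);
      bs.setRanges(Collections.singleton(new Range()));  // all partitions
      IteratorSetting ii = new IteratorSetting(20, "ii", IntersectingIterator.class);
      IntersectingIterator.setColumnFamilies(ii,
          new Text[] {new Text("the"), new Text("green"), term});
      bs.addScanIterator(ii);
      for (Entry<Key,Value> e : bs) {
        System.out.println(e.getKey().getColumnQualifier());  // matching doc id
      }
      bs.close();
    }
  }
}

Every expanded term costs another full intersection pass over the partitions, which is why this approach feels inefficient to me.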

Thanks,
David
Christopher 2013-01-22, 20:40
John Vines 2013-01-22, 20:43