HBase >> mail # user >> secondary index feature

Re: secondary index feature
The work that James is referencing grew out of the discussions Lars and I
had (which led to those blog posts). The solution we implemented is designed
to be generic, as James mentioned above, but was written with all the hooks
necessary for Phoenix to do some really fast updates (or to skip updates
in the case where there is no change).

You should be able to plug your own simple index builder (there is an example
in the phoenix codebase<https://github.com/forcedotcom/phoenix/tree/master/src/main/java/com/salesforce/hbase/index/covered/example>)
into the basic solution, which supports the same transactional guarantees as
HBase (per row) plus data guarantees across the index rows. There are more
details in the presentations James linked.
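To make the "plug in your own index builder" idea concrete, here is a minimal sketch in Python of what such a builder does conceptually: derive index-table mutations from a data-table mutation. The class and method names are invented for illustration and are not the actual Phoenix API (the real example linked above is Java and coprocessor-based).

```python
# Hypothetical sketch of a pluggable index builder, in the spirit of the
# Phoenix covered-index example. All names here are illustrative only.

class CoveredIndexBuilder:
    """Derives index-table updates from a data-table mutation.

    index_spec maps an index name to the list of indexed column qualifiers.
    """

    def __init__(self, index_spec):
        self.index_spec = index_spec

    def get_index_updates(self, row_key, columns):
        """Return (index_name, index_row_key, covered_columns) tuples for
        every index touched by this mutation."""
        updates = []
        for name, indexed_cols in self.index_spec.items():
            if any(c in columns for c in indexed_cols):
                # Index row key = indexed values + original row key, so the
                # index entry stays unique per data row.
                key_parts = [columns.get(c, b"") for c in indexed_cols]
                index_row = b"\x00".join(key_parts + [row_key])
                updates.append((name, index_row, dict(columns)))
        return updates

builder = CoveredIndexBuilder({"by_email": [b"email"]})
ups = builder.get_index_updates(b"user1", {b"email": b"a@b.c", b"name": b"Ann"})
```

The framework's job is then to turn those tuples into index-table writes with the transactional guarantees described above; the domain-specific part stays confined to the builder.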

I'd love to see if your implementation can fit into the framework we wrote
- we would be happy to work with you to see if it needs some more hooks or
modifications - I have a feeling this is pretty much what you guys will need.

On Mon, Dec 23, 2013 at 10:01 AM, James Taylor <[EMAIL PROTECTED]> wrote:

> Henning,
> Jesse Yates wrote the back-end of our global secondary indexing system in
> Phoenix. He designed it as a separate, pluggable module with no Phoenix
> dependencies. Here's an overview of the feature:
> https://github.com/forcedotcom/phoenix/wiki/Secondary-Indexing. The
> section that discusses the data guarantees and failure management might be
> of interest to you:
> https://github.com/forcedotcom/phoenix/wiki/Secondary-Indexing#data-guarantees-and-failure-management
> This presentation also gives a good overview of the pluggability of his
> implementation:
> http://files.meetup.com/1350427/PhoenixIndexing-SF-HUG_09-26-13.pptx
> Thanks,
> James
> On Mon, Dec 23, 2013 at 3:47 AM, Henning Blohm <[EMAIL PROTECTED]> wrote:
>> Lars, that is exactly why I am hesitant to use one of the core-level
>> generic approaches (apart from having difficulty identifying the still
>> active projects): I have doubts that I can sufficiently explain to myself
>> when and where they fail.
>> With "toolbox approach" I meant to say that turning entity data into
>> index data is not done generically but rather involves domain-specific
>> application code that
>> - indicates what makes an index key given an entity
>> - indicates whether an index entry is still valid given an entity
>> That code is also used during index rebuild and trimming (an M/R job).
>> So checking whether an index entry is valid means loading the entity it
>> points to and - before considering it a valid result - verifying that the
>> entity's values still match the index.
>> The entity is written last, hence when the client dies halfway through
>> the update you may get stale index entries but nothing else should break.
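The read-side validation described in the two paragraphs above can be sketched as follows. This is an illustration in Python of the logic only, not the actual implementation; the table shapes, field names, and the `index_key` / `is_valid` helpers are all made up to mirror the two domain-specific callbacks mentioned.

```python
# Sketch of the "toolbox" read path: an index hit is trusted only after
# re-reading the entity and checking that it still matches the index entry.
# Stale entries (e.g. from a client that died before writing the entity,
# which is written last) are silently dropped.

def index_key(entity):
    # Domain-specific: what makes an index key given an entity.
    return entity["email"]

def is_valid(index_entry, entity):
    # Domain-specific: is this index entry still valid for the entity?
    return entity is not None and index_key(entity) == index_entry["key"]

def lookup(index_table, entity_table, key):
    results = []
    for entry in index_table.get(key, []):
        entity = entity_table.get(entry["entity_id"])  # point-get per entry
        if is_valid(entry, entity):
            results.append(entity)
    return results

entities = {"e1": {"email": "a@b.c"}}
index = {"a@b.c": [{"key": "a@b.c", "entity_id": "e1"},
                   {"key": "a@b.c", "entity_id": "e2"}]}  # e2 is stale
```

Here the stale entry for `e2` points at an entity that was never written, so validation filters it out and the caller only ever sees consistent results.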
>> For scanning along the index, we are using a chunk iterator; that is, we
>> read n index entries ahead and then do point look-ups for the entities.
>> How would you avoid point-gets when scanning via an index (as most likely,
>> entities are ordered independently from the index - hence the index)?
>> Something really important to note is that there is no intention to build
>> a completely generic solution, in particular not (this time - unlike the
>> other post of mine you responded to) taking row versioning into account.
>> Instead, row time stamps are used to delete stale entries (old entries
>> after an index rebuild).
>> Thanks a lot for your blog pointers. I haven't had time to study them in
>> depth, but at first glance there is a lot of overlap between what you are
>> proposing and what I ended up doing, considering the first post.
>> On the second post: Indeed I have not worried too much about
>> transactional isolation of updates. If index update and entity update use
>> the same HBase time stamp, the result should at least be consistent, right?
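The shared-timestamp point can be illustrated with a toy model of versioned cells. This is not HBase code; it just models each row as a list of (timestamp, value) versions, the way HBase keeps cell versions, to show that a reader bounded by a timestamp sees the index entry and the entity together or not at all.

```python
# Toy model: if the index write and the entity write carry the same explicit
# timestamp, a timestamp-bounded read observes a consistent pair.

def put(table, row, ts, value):
    table.setdefault(row, []).append((ts, value))
    table[row].sort(key=lambda tv: tv[0], reverse=True)  # newest first

def get_as_of(table, row, max_ts):
    """Return the newest version of row with timestamp <= max_ts."""
    for ts, value in table.get(row, []):
        if ts <= max_ts:
            return value
    return None

entity, index = {}, {}
ts = 100
put(index, "a@b.c", ts, "user1")            # index entry and entity
put(entity, "user1", ts, {"email": "a@b.c"})  # share the same timestamp

# Reading as of ts sees both writes; reading before ts sees neither.
```

One caveat worth hedging: this only gives a consistent snapshot to readers that actually bound their reads by timestamp; an unbounded read between the two writes can still observe one without the other.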
>> Btw. in no way am I claiming originality of my thoughts - in particular I
>> read http://jyates.github.io/2012/07/09/consistent-enough-