Re: accumulo for a bi-map?
I have implemented an approach like Dave Marion's, where on a match during
search I insert two rows:

Row    Column Family    Column Qualifier    Value
abcd   ijkl             90                  (empty)
ijkl   abcd             90                  (empty)

This works great for what I need: all abcd matches, all ijkl matches, or
specifically abcd->ijkl (in either direction). For threshold filtering, I'm
currently fetching all of the results (for these cases) and then discarding
items below my threshold. I've looked at some ways to use a scan iterator to
do this but I'm coming up short. The best idea I've had yet is to extend
ColumnQualifierFilter to accept on a "greater than" comparison instead of
equality. Any thoughts?
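
For concreteness, here is a minimal sketch of what I have in mind, though it
extends the generic Filter base class rather than ColumnQualifierFilter
(ColumnQualifierFilter is a system-level iterator, while Filter exposes a
simple accept() hook). The ScoreThresholdFilter name and the "threshold"
option are just placeholders, and it assumes the score is stored as its
decimal string form in the column qualifier:

import java.io.IOException;
import java.util.Map;

import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Value;
import org.apache.accumulo.core.iterators.Filter;
import org.apache.accumulo.core.iterators.IteratorEnvironment;
import org.apache.accumulo.core.iterators.SortedKeyValueIterator;

// Keeps entries whose column qualifier parses as a score at or above
// a configured threshold; everything else is filtered out server-side.
public class ScoreThresholdFilter extends Filter {

  private double threshold;

  @Override
  public void init(SortedKeyValueIterator<Key,Value> source,
      Map<String,String> options, IteratorEnvironment env) throws IOException {
    super.init(source, options, env);
    threshold = Double.parseDouble(options.get("threshold"));
  }

  @Override
  public boolean accept(Key k, Value v) {
    try {
      // The score is stored as a string in the qualifier, e.g. "90".
      double score = Double.parseDouble(k.getColumnQualifier().toString());
      return score >= threshold;
    } catch (NumberFormatException e) {
      return false; // drop entries whose qualifier isn't numeric
    }
  }
}

It would be attached along these lines (with scanner an existing Scanner):

IteratorSetting setting = new IteratorSetting(30, "scoreThreshold", ScoreThresholdFilter.class);
setting.addOption("threshold", "85");
scanner.addScanIterator(setting);

Since it parses with Double.parseDouble, the same filter would work whether
the scores are stored as integers or doubles.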
On Wed, Jul 17, 2013 at 10:26 AM, Marc Reichman <[EMAIL PROTECTED]> wrote:

> Thank you all for your responses. Some follow-up thoughts/questions:
>
> The use cases I'm chasing right now for retrieval are shaping up to be:
> 1. Get one ABCD->IJKL match score
> 2. Get all ABCD->* match scores
> 3. Either of the above, restricted to scores greater than a specified threshold.
>
> It's looking like the results may go into a different table than the
> original features, so I can work a little more flexibly.
>
> So far, Dave Marion's approach seems best suited to this, though with a
> separate table, a basic scan for the row key with no other constraints
> wouldn't bring the features back, and that alone would satisfy use case #2.
> I can satisfy case #1 easily if I make the targets (IJKL) a qualifier and
> constrain by it in my scan as needed.
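>
> For concreteness, the two scans might look something like this (the table
> name "results" and the "match" column family are just placeholders, and
> conn is an existing Connector):
>
> import org.apache.accumulo.core.client.Scanner;
> import org.apache.accumulo.core.data.Range;
> import org.apache.accumulo.core.security.Authorizations;
> import org.apache.hadoop.io.Text;
>
> // Case #2 (all ABCD->* match scores): scan the whole "abcd" row.
> Scanner scanner = conn.createScanner("results", new Authorizations());
> scanner.setRange(new Range("abcd"));
>
> // Case #1 (just the ABCD->IJKL score): also constrain to the "ijkl" qualifier.
> scanner.fetchColumn(new Text("match"), new Text("ijkl"));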
>
> For #3, I'm a bit unsure of the best way to do this. A simple solution
> would be to just pull all the results from the #1/#2 cases and filter out
> the undesirable ones in my client-side code. Assuming key:source,
> fam:target, col:score, is there some form of iterator or filter I could
> use to process the column names and throw out what I don't want, with
> decent data locality for the processing?
>
> Would it make any major impact if the scores were not integers but
> doubles? I'm already anticipating having to parse doubles from the scores
> as stored in byte[] string form, but I don't know whether doing that
> client-side afterward or in an iterator would make any performance
> difference.
>
> I feel like this is close and I appreciate the guidance.
>
> Thanks,
> Marc
>
>
> On Tue, Jul 16, 2013 at 6:25 PM, Josh Elser <[EMAIL PROTECTED]> wrote:
>
>> Instead of keeping all match scores inside of one Value, have you
>> considered thinking about your data in terms of edges?
>>
>> key:abcd->efgh score, value:88%
>> key:abcd->ijkl score, value:90%
>> key:efgh->abcd score, value:88%
>> key:ijkl->abcd score, value:90%
>>
>> If you do go the route of storing both directions in Accumulo, a
>> structure like this will likely be much easier to maintain, as you're not
>> trying to manage difficult aggregation rules for multiple updates to the
>> matches for a single record. Additionally, you should get really good
>> compression (and even better in 1.5) when you have large row prefixes (many
>> matches for abcd will equate to abcd being stored "once").
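>>
>> Writing a match as a pair of edges might look roughly like this (the
>> "matches" table name is just an example, and this uses the 1.5
>> BatchWriterConfig API; the 1.4 createBatchWriter overload takes
>> memory/latency/thread arguments instead):
>>
>> import org.apache.accumulo.core.client.BatchWriter;
>> import org.apache.accumulo.core.client.BatchWriterConfig;
>> import org.apache.accumulo.core.client.Connector;
>> import org.apache.accumulo.core.data.Mutation;
>> import org.apache.accumulo.core.data.Value;
>>
>> // Store one match as two directed edges so either endpoint's row can be scanned.
>> static void writeMatch(Connector conn, String recordA, String recordB,
>>     int scorePercent) throws Exception {
>>   BatchWriter writer = conn.createBatchWriter("matches", new BatchWriterConfig());
>>   try {
>>     // Forward edge: row "abcd->ijkl", family "score", value "90%".
>>     Mutation forward = new Mutation(recordA + "->" + recordB);
>>     forward.put("score", "", new Value((scorePercent + "%").getBytes()));
>>     writer.addMutation(forward);
>>
>>     // Reverse edge: row "ijkl->abcd", same score.
>>     Mutation reverse = new Mutation(recordB + "->" + recordA);
>>     reverse.put("score", "", new Value((scorePercent + "%").getBytes()));
>>     writer.addMutation(reverse);
>>   } finally {
>>     writer.close();
>>   }
>> }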
>>
>> You could also store all of the features for a record in a key which only
>> has the record in the row.
>>
>> key:abcd feature:foo1
>> key:abcd feature:foo2
>> etc.
>>
>> Also, I'd encourage you to try to upgrade to 1.5.0 if you can, but if
>> not, definitely update to 1.4.3, as it fixes a fair number of bugs. It's
>> as simple as stopping Accumulo, copying the 1.4.3 Accumulo jar files into
>> $ACCUMULO_HOME/lib, and removing the 1.4.1 jars.
>>
>> (apparently Dave Marion and I think alike)
>>
>> - Josh
>>
>>
>> On 07/16/2013 05:28 PM, Marc Reichman wrote:
>>
>>> We are using Accumulo as a mechanism to store feature data (binary
>>> byte[]) for some simple keys which are used for a search algorithm. We
>>> currently search by iterating over the feature space using
>>> AccumuloRowInputFormat. Results come out of a reducer into HDFS, currently