roshanp@... 2013-03-28, 15:00
Keith Turner 2013-03-28, 15:55
roshanp@... 2013-03-28, 16:15
Keith Turner 2013-03-28, 17:15
roshanp@... 2013-03-28, 18:00
Keith Turner 2013-03-28, 18:32
Re: Accumulo Utilities
roshanp@... 2013-03-28, 18:46
So there are two use cases
1. We have a lot of users that are querying data, and each query opens a new BatchScanner. As we would scale to more users simultaneously, we would have a lot of contention for threads if we keep a constant number of threads per BatchScanner.
2. In the Rya work, we do interesting joins, and as we add more layers to the joins the number of batch scanners grows by an order of magnitude. Smart on my part… I'm trying to control the number of threads created here as well.
Btw, the explanation of how the BatchScanner works makes perfect sense. It really is a lot smarter about how it queries tablets at the tablet servers. I have to check out the code in more detail, especially the part about tablets splitting while we scan. That sounds interesting.
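To make use case 1 concrete, here is a minimal sketch of the non-blocking fallback idea from the quoted discussion below, using only the JDK (the class and method names are hypothetical, not Accumulo API): a shared bounded pool hands out scan threads, and once it is exhausted, new work simply runs in the submitting thread, so each additional batch scan costs at most one thread and never blocks.

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Hypothetical shared pool for batch-scan work: bounded, never blocking.
public class BoundedScanPool {
    private final ThreadPoolExecutor pool;

    public BoundedScanPool(int maxThreads) {
        pool = new ThreadPoolExecutor(
            maxThreads, maxThreads,
            0L, TimeUnit.MILLISECONDS,
            // SynchronousQueue: tasks are handed directly to an idle thread,
            // or rejected immediately if the pool is saturated.
            new SynchronousQueue<Runnable>(),
            // On rejection, run the task in the caller's thread -- effectively
            // "only 1 thread" for that scan, and no blocking on submission.
            new ThreadPoolExecutor.CallerRunsPolicy());
    }

    public void submitScan(Runnable scanTask) {
        pool.execute(scanTask);
    }

    public void shutdown() throws InterruptedException {
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```

With this shape, N simultaneous users share one fixed pool instead of each BatchScanner spawning its own 10+ threads.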
On Mar 28, 2013, at 2:32 PM, Keith Turner <[EMAIL PROTECTED]> wrote:
> On Thu, Mar 28, 2013 at 2:00 PM, <[EMAIL PROTECTED]> wrote:
>> Yeah, that is why, in the ThreadPoolConnector, I never wanted to block. If the pool is exhausted, just make a different kind of BatchScanner that doesn't spawn new threads. Once the BatchScanner is closed, release the threads. I can probably make a ThreadPool implementation that does that: just return only 1 thread if the pool is exhausted, and never block.
>> I did not want to spin up a new thread at all once the pool is exhausted, but from what you are saying it is OK to really have a new thread. Instead of increasing the thread count by 10+ with each batch scanner, I would just be increasing it by 1, which isn't so bad.
> I am curious about the problem you are trying to solve. Do you have
> too many active threads and that's causing thrashing? Or do you end up
> with a lot of inactive threads eating up memory?
>> For binning of ranges, would it make more sense to add a server side iterator to make sure the gaps do not come back? So it might go like this:
>> ranges = 1-2, 5-6, 7-8
>> Tablet servers Ranges: T1: 1-4, T2: 5-10
>> The ranges actually searched will be T1: 1-2, and T2: 5-8 (with a server side iterator removing the ranges not included)
> It would probably be T1:1-2 and T2:5-6,7-8. I assume T1 and T2
> represent tablets, and not tablet servers?
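The correction above (T1:1-2 and T2:5-6,7-8) can be sketched as a toy binning step: each range is clipped to the tablets it overlaps, mirroring what the quoted discussion says the BatchScanner does internally. Everything here is hypothetical illustration over plain integers, not Accumulo's actual Range/KeyExtent types.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Toy model of binning scan ranges to tablets by intersection.
public class RangeBinning {
    public record Span(int start, int end) {}  // inclusive toy range

    // tablets: name -> extent; returns name -> ranges clipped to that extent
    public static Map<String, List<Span>> bin(Map<String, Span> tablets, List<Span> ranges) {
        Map<String, List<Span>> binned = new TreeMap<>();
        for (var t : tablets.entrySet()) {
            for (Span r : ranges) {
                int s = Math.max(r.start(), t.getValue().start());
                int e = Math.min(r.end(), t.getValue().end());
                if (s <= e) {  // range overlaps this tablet's extent
                    binned.computeIfAbsent(t.getKey(), k -> new ArrayList<>()).add(new Span(s, e));
                }
            }
        }
        return binned;
    }

    public static void main(String[] args) {
        Map<String, Span> tablets = Map.of("T1", new Span(1, 4), "T2", new Span(5, 10));
        List<Span> ranges = List.of(new Span(1, 2), new Span(5, 6), new Span(7, 8));
        // T1 gets 1-2; T2 gets 5-6 and 7-8, matching the example in the thread
        System.out.println(bin(tablets, ranges));
    }
}
```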
> Adding a server side iterator to a scanner that accepts a list of
> ranges would make it more like the batch scanner. One difference is
> that you would need to do a scan per tablet (which is certainly better
> than a scan per range), passing each tablet the list of ranges that
> pertain to it. The batch scanner sends all ranges for all tablets in
> one shot to a tablet server, so the batch scanner conceptually does a
> scan per tablet server(better than a scan per tablet). The scanner
> will never operate on more than one tablet a time. You would need
> to properly handle tablets splitting while you are scanning.
> The batch scanner also tracks which ranges are finished as it gets
> results back. This keeps it from having to redo work in the case
> where a tablet moves (because of migration, split, or tablet server failure).
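The range-tracking point above can be sketched as a tiny bookkeeping structure (all names hypothetical): finished ranges are removed as results stream back, so after a migration or failure only the outstanding ranges need to be re-binned and re-sent.

```java
import java.util.Collection;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Toy sketch of tracking which scan ranges are still outstanding.
public class RangeTracker {
    private final Set<String> outstanding = ConcurrentHashMap.newKeySet();

    public RangeTracker(Collection<String> ranges) {
        outstanding.addAll(ranges);
    }

    // Called as the last result for a range comes back.
    public void markDone(String range) {
        outstanding.remove(range);
    }

    // On tablet migration/split/server failure, only these ranges
    // need to be re-binned and re-submitted -- no redone work.
    public Set<String> toRetry() {
        return Set.copyOf(outstanding);
    }
}
```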
>> What about the BatchScanner? Doesn't it also bin ranges and then tell each tablet server that it only cares about a subset of ranges? That way the number of scans is capped at the number of tablet servers that hold the ranges you asked for, and each tablet server knows exactly which ranges to return?
> I think I answered this question above.
>> Feel free to ignore the myriad of questions, it is interesting learning the inner workings of the BatchScanner and Scanner.
>> On Mar 28, 2013, at 1:15 PM, Keith Turner <[EMAIL PROTECTED]> wrote:
>>> On Thu, Mar 28, 2013 at 12:15 PM, <[EMAIL PROTECTED]> wrote:
>>>> Thanks! I like the idea of sending my own thread pool to the batch scanner, that would definitely be the better solution.
>>> Would you like to open a ticket about this issue?
>>> I just remembered, there is an issue w/ this approach to be aware of