roshanp@... 2013-03-28, 15:00
Keith Turner 2013-03-28, 15:55
roshanp@... 2013-03-28, 16:15
Keith Turner 2013-03-28, 17:15
roshanp@... 2013-03-28, 18:00
On Thu, Mar 28, 2013 at 2:00 PM, <[EMAIL PROTECTED]> wrote:
> Yeah, that is why in the ThreadPoolConnector I did not want to ever block. If the pool is exhausted, just make a different kind of BatchScanner that doesn't spawn new threads. Once the BatchScanner is closed, release the threads. I could probably make a ThreadPool implementation that does that: it returns only one thread if the pool is exhausted and never blocks.
> I did not want to spin up a new thread at all once the pool is exhausted, but from what you are saying it is OK to have a new thread. Instead of increasing the thread count by 10+ with each batch scanner, I would just be increasing it by 1, which isn't so bad.
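The "never block, fall back to a single thread" idea can be sketched with only java.util.concurrent; the class and method names below are illustrative, not the actual ThreadPoolConnector API:

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Sketch: a shared pool whose submit never blocks. If all pool threads
// are busy, the task runs in the caller's thread instead of waiting, so
// a BatchScanner degrades to single-threaded operation rather than
// blocking or deadlocking. Names are hypothetical, not Accumulo API.
public class NonBlockingScannerPool {
    private final ThreadPoolExecutor pool;

    public NonBlockingScannerPool(int maxThreads) {
        // SynchronousQueue: a submit either hands the task to an idle
        // thread immediately or is rejected; tasks are never queued.
        pool = new ThreadPoolExecutor(
                0, maxThreads,
                60L, TimeUnit.SECONDS,
                new SynchronousQueue<Runnable>(),
                // Rejected tasks run in the submitting thread, so the
                // caller itself becomes the "one thread" fallback.
                new ThreadPoolExecutor.CallerRunsPolicy());
    }

    public void submit(Runnable task) {
        pool.execute(task); // never blocks the caller waiting for a slot
    }

    public void shutdown() throws InterruptedException {
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```

CallerRunsPolicy also gives natural backpressure: a caller busy running its own rejected task stops submitting more work for a moment.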
I am curious about the problem you are trying to solve. Do you have
too many active threads and that's causing thrashing? Or do you end up
with a lot of inactive threads eating up memory?
> For binning of ranges, would it make more sense to add a server-side iterator to make sure the gaps do not come back? It might go like this:
> ranges = 1-2, 5-6, 7-8
> Tablet servers Ranges: T1: 1-4, T2: 5-10
> The ranges actually searched will be T1: 1-2, and T2: 5-8 (with a server side iterator removing the ranges not included)
It would probably be T1:1-2 and T2:5-6,7-8. I assume T1 and T2
represent tablets, and not tablet servers?
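The clipping described above can be sketched with plain integer ranges; Accumulo's real code works on Range and KeyExtent objects, so this just shows the interval intersection:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: clip a client's ranges against a tablet's extent, keeping
// only the overlapping portions. With ranges 1-2, 5-6, 7-8 and tablets
// T1 covering 1-4 and T2 covering 5-10, T1 is asked for 1-2 and T2 for
// 5-6 and 7-8 -- the gaps (e.g. 3-4) are never sent or scanned.
public class RangeClipper {
    // A range is [lo, hi], inclusive, modeled as int[2] for brevity.
    public static List<int[]> clip(List<int[]> ranges, int tabletLo, int tabletHi) {
        List<int[]> out = new ArrayList<>();
        for (int[] r : ranges) {
            int lo = Math.max(r[0], tabletLo);
            int hi = Math.min(r[1], tabletHi);
            if (lo <= hi) out.add(new int[]{lo, hi}); // keep only the overlap
        }
        return out;
    }
}
```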
Adding a server side iterator to a scanner that accepts a list of
ranges would make it more like the batch scanner. One difference is
that you would need to do a scan per tablet (which is certainly better
than a scan per range), passing each tablet the list of ranges that
pertain to it. The batch scanner sends all ranges for all tablets in
one shot to a tablet server, so the batch scanner conceptually does a
scan per tablet server(better than a scan per tablet). The scanner
will never operate on more than one tablet a time. You would need
to properly handle tablets splitting while you are scanning.
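The per-tablet-server binning described above might look like this in outline, with toy types standing in for KeyExtent and tablet locations (the real BatchScanner's data structures differ):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Sketch: bin ranges first by tablet, then group tablets by the server
// hosting them, so a single request per tablet server carries all of
// that server's (tablet, clipped ranges) pairs -- conceptually what the
// batch scanner does. Tablet and server names are illustrative.
public class RangeBinner {
    // tabletExtents: tablet -> [lo, hi]; tabletLocation: tablet -> server
    public static Map<String, Map<String, List<int[]>>> bin(
            List<int[]> ranges,
            Map<String, int[]> tabletExtents,
            Map<String, String> tabletLocation) {
        Map<String, Map<String, List<int[]>>> byServer = new TreeMap<>();
        for (Map.Entry<String, int[]> e : tabletExtents.entrySet()) {
            String tablet = e.getKey();
            int lo = e.getValue()[0], hi = e.getValue()[1];
            for (int[] r : ranges) {
                int clo = Math.max(r[0], lo), chi = Math.min(r[1], hi);
                if (clo > chi) continue; // range misses this tablet
                byServer
                    .computeIfAbsent(tabletLocation.get(tablet), s -> new TreeMap<>())
                    .computeIfAbsent(tablet, t -> new ArrayList<>())
                    .add(new int[]{clo, chi});
            }
        }
        return byServer; // one entry per server = one batched request
    }
}
```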
The batch scanner also tracks which ranges are finished as it gets
results back. This keeps it from having to redo work in the case
where a tablet moves (because of migration, split, or tablet server
failure).
> What about the BatchScanner? Doesn't it also bin ranges and then tell each tablet server that it only cares about a subset of ranges? That way the number of requests is capped at the number of tablet servers that hold the ranges you asked for, and each tablet server knows exactly which ranges to return.
I think I answered this question above.
> Feel free to ignore the myriad of questions, it is interesting learning the inner workings of the BatchScanner and Scanner.
> On Mar 28, 2013, at 1:15 PM, Keith Turner <[EMAIL PROTECTED]> wrote:
>> On Thu, Mar 28, 2013 at 12:15 PM, <[EMAIL PROTECTED]> wrote:
>>> Thanks! I like the idea of sending my own thread pool to the batch scanner, that would definitely be the better solution.
>> Would you like to open a ticket about this issue?
>> I just remembered, there is an issue with this approach to be aware
>> of. I have seen it when multiple threads share a batch scanner (more
>> on this below). Consider the following situation.
>> 1. Thread A gives a lot of work to BatchScanner1 using Threadpool1,
>> creating BatchScannerIterator1
>> 2. BatchScannerIterator1's internal queue fills up as result of work
>> given by Thread A
>> 3. All threads in ThreadPool1 block trying to add to
>> BatchScannerIterator1 queue
>> 4. Thread B gives a lot of work to BatchScanner2 using Threadpool1,
>> creating BatchScannerIterator2
>> 5. Thread B attempts to iterate over BatchScannerIterator2, but
>> blocks forever because no threads service it
>> This problem occurs because Thread A never reads from BatchScannerIterator1.
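The stall in step 3 above can be reproduced in miniature with a bounded queue standing in for the iterator's internal result queue; this is toy code, not the actual BatchScanner internals:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Sketch: the iterator's result queue is bounded. Once it is full, a
// pool thread delivering results blocks until the consumer (Thread A)
// drains the queue by iterating. If Thread A never iterates, those
// pool threads are stuck, starving every other scanner sharing the pool.
public class QueueStallDemo {
    public static boolean tryDeliver(BlockingQueue<String> resultQueue,
                                     String result,
                                     long timeoutMs) throws InterruptedException {
        // The real delivery would block indefinitely on put();
        // offer-with-timeout makes the stall observable in a test.
        return resultQueue.offer(result, timeoutMs, TimeUnit.MILLISECONDS);
    }
}
```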
>> In the current code, multiple threads can use a BatchScanner. You
>> just need to make configuring the BatchScanner and getting an iterator
>> an atomic operation. When an iterator is created by a batch scanner,
roshanp@... 2013-03-28, 18:46