Re: RFC: Behavior of QuotaExceededException
On 2/27/13 12:10 AM, "Flavio Junqueira" <[EMAIL PROTECTED]> wrote:
>It wouldn't be very nice to allow holes in the sequence of operations of
>a client; it would violate session semantics. I'm also wondering about a
>couple of things:
>
>- What does QuotaExceededException convey to the application? That the
>application client won't ever be able to send operations again with that
>session? That it won't be able to submit new operations for up to x
>amount of time, where x is computed somehow? Expiring the session will
>have the side effect that all the ephemeral nodes will be gone; I'm not
>sure that's desirable, but as a punishment it might work out fine. ;-)

My initial plan is to support 4 types of hard limits (node count, used
bytes, requests/sec, and update bytes/sec). For the first two types of
limits, it is likely that the client won't be able to complete any
operation after the quota is exceeded. For the last two, after some
amount of time, the client should be able to make a successful request.
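
For illustration, here is a minimal Java sketch of that distinction; the
type and method names are hypothetical, not taken from the ZOOKEEPER-1383
patch:

    // Hypothetical names, for illustration only; not from the
    // ZOOKEEPER-1383 patch.
    enum QuotaType {
        NODE_COUNT,           // absolute: stays exceeded until nodes are deleted
        USED_BYTES,           // absolute: stays exceeded until stored data shrinks
        REQUESTS_PER_SEC,     // rate: clears once the measurement window passes
        UPDATE_BYTES_PER_SEC; // rate: clears once the measurement window passes

        /** Whether later requests can succeed without an external fix,
         *  such as deleting nodes or raising the quota. */
        boolean recoversOverTime() {
            return this == REQUESTS_PER_SEC || this == UPDATE_BYTES_PER_SEC;
        }
    }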

>- Have you considered limiting the rate of client operations instead of
>failing operations? Shaping the traffic of operations of a client might
>be way nicer from the client perspective, but perhaps a bit harder to
>implement.

We considered that as well; I already prototyped this feature a while
back. The main problem I saw is that the network layer (e.g., the NIO
subsystem) only knows about request size/rate, the client's IP/port, and
the session ID, so its ability to do throttling is limited. Additionally,
a client with a low session timeout will eventually time out and
reconnect to another server (or create a new session), which will allow
it to make successful requests on that server until it exceeds the usage
threshold again.
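
For context, per-session shaping at that layer would amount to something
like the following token-bucket sketch (all names hypothetical; this is
not the prototype mentioned above):

    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical sketch of per-session traffic shaping at the server's
    // network layer, using only what that layer can see: the session ID
    // and the request arrival rate.
    class SessionThrottler {
        private static final long CAPACITY = 1000;      // max burst, in requests
        private static final long REFILL_PER_SEC = 100; // sustained rate

        private static final class Bucket {
            long tokens = CAPACITY;
            long lastRefillMs = System.currentTimeMillis();
        }

        private final ConcurrentHashMap<Long, Bucket> buckets =
                new ConcurrentHashMap<>();

        /** Returns true if the request may proceed, false if it should wait. */
        boolean tryAcquire(long sessionId) {
            Bucket b = buckets.computeIfAbsent(sessionId, id -> new Bucket());
            synchronized (b) {
                long now = System.currentTimeMillis();
                long refill = (now - b.lastRefillMs) * REFILL_PER_SEC / 1000;
                if (refill > 0) {
                    b.tokens = Math.min(CAPACITY, b.tokens + refill);
                    b.lastRefillMs = now;
                }
                if (b.tokens > 0) {
                    b.tokens--;
                    return true;
                }
                return false;
            }
        }
    }

As noted above, a throttled client that times out and reconnects to
another server simply starts with a fresh bucket there, which is part of
why throttling alone is not enough.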
Thanks for your response. I think I will go with the session-expire route.
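
One appeal of the session-expire route is that clients already have to
handle expiry. Here is a minimal sketch using the standard ZooKeeper
client API (the class itself and its recovery method are hypothetical):

    import java.io.IOException;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    // Minimal sketch of the existing error-handling logic a well-behaved
    // client already needs for session expiry; expiring the session on
    // quota abuse lets the application reuse it unchanged.
    public class ExpiryAwareClient implements Watcher {
        private final String connectString;
        private volatile ZooKeeper zk;

        public ExpiryAwareClient(String connectString) throws IOException {
            this.connectString = connectString;
            this.zk = new ZooKeeper(connectString, 30000, this);
        }

        @Override
        public void process(WatchedEvent event) {
            // Expiry is the one state the client library will not
            // recover from on its own.
            if (event.getState() == Event.KeeperState.Expired) {
                reconnectAndRecover();
            }
        }

        private void reconnectAndRecover() {
            try {
                // Ephemeral nodes and watches from the old session are
                // gone; the application must re-create them here.
                this.zk = new ZooKeeper(connectString, 30000, this);
            } catch (IOException e) {
                // Application-specific retry/backoff would go here.
            }
        }
    }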
>
>-Flavio
>
>On Feb 27, 2013, at 1:41 AM, Thawan Kooburat <[EMAIL PROTECTED]> wrote:
>
>> Hi,
>> I am currently working on ZOOKEEPER-1383. One of the main features
>>introduced in this change is to allow ZooKeeper to enforce hard limits
>>(e.g., txns per sec) per folder.
>>
>> With hard limits, we need to introduce a new exception/error code
>>(QuotaExceeded) for ZooKeeper operations that modify the DataTree.  If a
>>client gets this error, it means that the particular operation has
>>definitely failed.
>>
>> From our internal discussion, this may make it harder for a user to
>>write an application.  The thought is that this can introduce a hole in
>>the sequence of operations that the client application performs, since
>>some operations may succeed while others fail.  One idea is to also
>>trigger session expiry (or at least a disconnect) on the server side in
>>addition to the QuotaExceeded error.  This will cause all subsequent
>>operations from that client to fail and allow the application to use its
>>existing error-handling logic to recover from QuotaExceeded.
>>Typically, an application that exceeds its quota is already doing
>>something wrong from the administrator's perspective, but we also want
>>to fail gracefully and be able to recover when the problem is fixed or
>>the quota is increased.
>>
>> Let me know if you have any suggestions.
>>
>> --
>> Thawan Kooburat
>
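
To illustrate the per-operation failure mode the quoted proposal
describes, here is a sketch of client code if QuotaExceeded were
delivered without session expiry; KeeperException.QuotaExceededException
is hypothetical, proposed in this thread rather than part of the released
client API:

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    class QuotaAwareWriter {
        // KeeperException.QuotaExceededException is hypothetical: it is
        // the error proposed in this thread, not an existing API.
        void createWithQuotaHandling(ZooKeeper zk, String path, byte[] data)
                throws KeeperException, InterruptedException {
            try {
                zk.create(path, data, ZooDefs.Ids.OPEN_ACL_UNSAFE,
                        CreateMode.PERSISTENT);
            } catch (KeeperException.QuotaExceededException e) {
                // Only this operation has failed; the session is still
                // alive, so earlier and later operations may succeed --
                // the "hole in the sequence of operations" the proposal
                // wants to avoid by also expiring the session.
            }
        }
    }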