Re: Serious problem processing heartbeat on login stampede
when you file the jira can you also note the logging level you are using?

thanx
ben

2011/4/14 Chang Song <[EMAIL PROTECTED]>:
>
> Yes, Ben.
>
> If you read my emails carefully, I already said it is not the heartbeats,
> it is session establishment / closing that gets stampeded.
> Since all of the requests' responses get delayed, heartbeats are delayed
> as well.
>
>
> You need to understand that most apps can tolerate delay in connect/close,
> but we cannot tolerate ping delay, since we are using the ZK heartbeat timeout
> as our sole means of failure detection.
> We use 15 seconds (5 sec for each ensemble member) for the session timeout,
> so an important server will drop out of the clusters even if it is not
> malfunctioning, and in some cases that wreaks havoc on certain
> services.
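A minimal sketch of the client-side setup being described here (assuming the standard org.apache.zookeeper.ZooKeeper API; host names and the handling logic are illustrative): the 15-second session timeout is the only failure signal, and the client spreads its connection attempts across the three ensemble hosts, which is where the roughly 5 seconds per host comes from.

    import java.util.concurrent.CountDownLatch;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    // Sketch only: a client whose sole failure detector is the ZK session.
    public class FailureDetectingClient implements Watcher {
        private static final int SESSION_TIMEOUT_MS = 15000; // 15 s, as above
        private final CountDownLatch connected = new CountDownLatch(1);
        private ZooKeeper zk;

        public void start() throws Exception {
            // Three-host connect string; the client divides its connect budget
            // across these hosts (~5 s each with a 15 s session timeout).
            zk = new ZooKeeper("zk1:2181,zk2:2181,zk3:2181", SESSION_TIMEOUT_MS, this);
            connected.await();
        }

        public void process(WatchedEvent event) {
            switch (event.getState()) {
            case SyncConnected:
                connected.countDown();
                break;
            case Disconnected:
                // Pings went unanswered; by the time this fires the ensemble
                // may already treat this node as gone.
                break;
            case Expired:
                // The ensemble expired the session: this node drops out of the
                // cluster even though the process itself may be healthy.
                break;
            default:
                break;
            }
        }
    }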
>
>
> 1. 3.3.3 (latest)
>
> 2. We have a boot disk and a usr disk.
>    But as I said, disk I/O is not what's causing the 8-second delay.
>
> My team will file a JIRA today; we'll have to discuss it on JIRA ;)
>
> Thank you.
>
> Chang
>
>
>
>
> On Apr 15, 2011, at 2:59 AM, Benjamin Reed wrote:
>
>> chang,
>>
>> if the problem is on client startup, then it isn't the heartbeats being
>> stampeded, it is session establishment. the heartbeats are very lightweight,
>> so i can't imagine them causing any issues.
>>
>> the two key issues we need to know are: 1) the version of the server
>> you are running, and 2) if you are using a dedicated device for the
>> transaction log.
>>
>> ben
>>
>> 2011/4/14 Patrick Hunt <[EMAIL PROTECTED]>:
>>> 2011/4/14 Chang Song <[EMAIL PROTECTED]>:
>>>>> 2) regarding IO, if you run 'iostat -x 2' on the zk servers while your
>>>>> issue is happening, what's the %util of the disk? what's the iowait
>>>>> look like?
>>>>>
>>>>
>>>> Again, no I/O at all.   0%
>>>>
>>>
>>> This is simply not possible.
>>>
>>> Sessions are persistent. Each time a session is created, and each time
>>> it is closed, a transaction is written by the zk server to the data
>>> directory. Additionally, log4j-based logs are also streamed to
>>> disk. Each of these activities will cause disk IO that will show
>>> up in iostat.
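A rough sketch of why that is (illustrative only, assuming the standard Java client; hosts are made up): every connect/close pair in a loop like the one below becomes a createSession/closeSession transaction that the ensemble logs and syncs to the data directory, so a burst of session churn should be visible in iostat.

    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    // Sketch: repeated session create/close. Each established session costs the
    // ensemble a logged createSession transaction, and each close a closeSession
    // transaction, independent of any znode reads or writes.
    public class SessionChurn {
        public static void main(String[] args) throws Exception {
            Watcher noop = new Watcher() {
                public void process(WatchedEvent event) { /* ignore */ }
            };
            for (int i = 0; i < 100; i++) {
                ZooKeeper zk = new ZooKeeper("zk1:2181,zk2:2181,zk3:2181", 15000, noop);
                while (zk.getState() != ZooKeeper.States.CONNECTED) {
                    Thread.sleep(10);   // crude wait for the session to be established
                }
                zk.close();             // closeSession transaction on the server side
            }
        }
    }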
>>>
>>>> Patrick, they are not continuously logging in/out.
>>>> Maybe a couple of times a week, and before they push a new feature.
>>>> When this happens, clients in group A drop out of the clusters, which causes
>>>> problems for other unrelated services.
>>>>
>>>
>>> Ok, good to know.
>>>
>>>>
>>>> It is not about the use case, because the ZK clients simply tried to connect
>>>> to the ZK ensemble. No use case applies. Just many clients logging in at the
>>>> same time, expiring at the same time, or closing sessions at the same time.
>>>>
>>>
>>> As I mentioned, I've seen cluster sizes of 10,000 clients (10x what
>>> you report) that didn't have this issue. While bugs might be lurking,
>>> I've also worked with many teams deploying clusters (probably close to
>>> 100 by now), some of which had problems; the suggestions I'm making to
>>> you are based on that experience.
>>>
>>>> Heartbeats should be handled in an isolated queue with a
>>>> dedicated thread.  I don't think we need to keep strict ordering
>>>> of heartbeats, do we?
>>>
>>> ZK is purposely architected this way; it is not a mistake/bug. It is a
>>> fallacy for a highly available service to respond quickly to a
>>> heartbeat when it cannot service regular requests in a timely fashion.
>>> This is one of the main reasons why heartbeats are handled in this
>>> way.
>>>
>>> Patrick
>>>
>>>>> Patrick
>>>>>
>>>>>> It's about CommitProcessor thread queueing (in the leader).
>>>>>> QueuedRequests goes up to 800, as do commitedRequests and
>>>>>> PendingRequestElapsedTime; PendingRequestElapsedTime
>>>>>> goes up to 8.8 seconds during this flood.
>>>>>>
>>>>>> To reproduce this scenario exactly, the easiest way is to
>>>>>>
>>>>>> - suspend all client JVMs with a debugger
>>>>>> - cause all client JVMs to OOME so they create heap dumps
>>>>>>
>>>>>> in group B. All clients in group A will then fail to receive a
>>>>>> ping response within 5 seconds.
>>>>>>
>>>>>> We need to fix this as soon as possible.
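A sketch of what the group A side of this looks like (illustrative, not the original test; hosts and thresholds are made up): keep one long-lived session open and time a trivial request once a second while group B clients hit the ensemble with simultaneous session creation and expiry. If the round trip climbs toward the session timeout, pings are being delayed by the same backlog.

    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    // Sketch of a "group A" probe: a long-lived session that times a cheap
    // request while "group B" floods the ensemble with session churn.
    public class GroupAProbe {
        public static void main(String[] args) throws Exception {
            ZooKeeper zk = new ZooKeeper("zk1:2181,zk2:2181,zk3:2181", 15000,
                    new Watcher() {
                        public void process(WatchedEvent event) {
                            // Disconnected/Expired here is exactly the dropout
                            // described in the thread.
                            System.out.println("state change: " + event.getState());
                        }
                    });
            while (true) {
                long start = System.currentTimeMillis();
                zk.exists("/", false);  // cheap request; delayed if the pipeline is backed up
                long elapsed = System.currentTimeMillis() - start;
                if (elapsed > 1000) {
                    System.out.println("request took " + elapsed + " ms");
                }
                Thread.sleep(1000);
            }
        }
    }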