ZooKeeper user mailing list: Re: Zookeeper on short lived VMs and ZOOKEEPER-107


Thread:
Ted Dunning 2012-03-15, 18:50
Віталій Ти... 2012-03-15, 20:29
Ted Dunning 2012-03-16, 05:41
Christian Ziech 2012-03-14, 16:04
Christian Ziech 2012-03-14, 17:01
Alexander Shraer 2012-03-15, 06:46
Christian Ziech 2012-03-15, 09:50
Alexander Shraer 2012-03-15, 15:33
Alexander Shraer 2012-03-15, 22:54
Alexander Shraer 2012-03-16, 03:43
Christian Ziech 2012-03-16, 09:56
Ted Dunning 2012-03-16, 15:51
Re: Zookeeper on short lived VMs and ZOOKEEPER-107
I think this is why, when you're doing rolling restarts /
reconfiguration, you should never have two different servers with the
same id that have any chance of being up at the same time.
With 107 you'd have to remove the server and add a new server with
some different id (choosing the new id is left to the user).
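
As a sketch of that remove-plus-add step, here is roughly what it looks
like against the dynamic reconfiguration API that 107 grew into
(org.apache.zookeeper.admin.ZooKeeperAdmin in ZooKeeper 3.5+). Treat it
as an illustration rather than a recipe: the host names, ports and
server ids are invented, and the API details may differ from whatever
patch you end up testing.

    import org.apache.zookeeper.admin.ZooKeeperAdmin;
    import org.apache.zookeeper.data.Stat;

    public class ReplaceDeadServer {
        public static void main(String[] args) throws Exception {
            // Connect through any live ensemble member (host name invented).
            ZooKeeperAdmin admin =
                    new ZooKeeperAdmin("a.example.com:2181", 30000, event -> { });

            // Remove the dead server (id 3) and add its replacement under a
            // *different* id (4), so the old and the new VM can never be up
            // at the same time with the same id.
            byte[] newConfig = admin.reconfigure(
                    "server.4=c-new.example.com:2888:3888;2181", // joining server
                    "3",                                         // leaving server id
                    null,  // not supplying a complete new membership list
                    -1,    // -1 = apply regardless of the current config version
                    new Stat());

            System.out.println("new ensemble config:\n" + new String(newConfig));
            admin.close();
        }
    }

From the command line, zkCli's reconfig command should cover the same
remove-and-add once 3.5 is available.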

In terms of support with 107, we need all the help we can get :)
Currently there are two parts of it in pretty good shape that I'm
hoping to integrate soon: ZOOKEEPER-1355 and ZOOKEEPER-1411.
Comments on or testing of 1411 would be very helpful at this point.
Also, if you wish, you can check out the latest patch for 107 (that
patch is not going to be integrated as-is - instead I'm trying to get
it in piece by piece), but you can still try it and see whether it
works for you or whether you have comments. You can also help by
writing tests for it.

Best Regards,
Alex

On Fri, Mar 16, 2012 at 2:56 AM, Christian Ziech
<[EMAIL PROTECTED]> wrote:
> Under normal circumstances the ability to detect failures correctly should
> be a given. The scenario I'm aware of is that one zookeeper system is taken
> down for some reason and then possibly just rebooted, or even started from
> scratch elsewhere. In both cases, however, the new host would have the old
> dns name but most likely a different IP. But of course that only applies to
> us and possibly not to all of the users.
>
> When thinking about the scenario you described I understood where the
> problem lies. However, wouldn't the same problem also apply to the way
> zookeeper is implemented right now? Let me try to explain why (possibly I'm
> wrong here since I may be missing some points on how zookeeper servers work
> internally - corrections are very welcome):
> - Same scenario as you described - node A with host name a, B with host
> name b and C with host name c
> - Also, as in your scenario, C is falsely detected as down due to some
> human error. Hence C' is brought up and is assigned the same DNS name as C
> - Now rolling restarts are performed to bring in C'
> - A resolves c correctly to the new IP and connects to C', but B still
> resolves the host name c to the original address of C and hence does not
> connect (I think some DNS slowness is also required for your approach in
> order for the host name c to be resolved to the original IP of C)
> - Now the rest of your scenario happens: update U is applied, C' gets slow,
> C recovers and A fails.
> Of course this approach also requires some DNS craziness, but if I did not
> make a mistake in my reasoning it should still be possible.
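
A minimal stand-alone Java sketch of the DNS effect in this walkthrough
(plain JDK classes, not ZooKeeper internals; the host name is a made-up
stand-in for c and defaults to localhost so the snippet actually runs):
a peer that keeps the address it resolved at startup stays pinned to C's
original IP, while a peer that resolves the name again on every
connection attempt picks up C'.

    import java.net.InetAddress;
    import java.net.InetSocketAddress;

    public class StaleDnsResolution {
        public static void main(String[] args) throws Exception {
            // Stand-in for the DNS name "c"; pass a real host name as the first
            // argument, otherwise localhost is used so the example is runnable.
            String cHostName = args.length > 0 ? args[0] : "localhost";

            // Peer B's situation: it resolved "c" once when the quorum was formed
            // and kept the result, so this address stays frozen at the old IP
            // even after the VM behind the name has been replaced by C'.
            InetSocketAddress frozenAtStartup = new InetSocketAddress(
                    InetAddress.getByName(cHostName), 3888);

            // Peer A's situation: it keeps the address unresolved and lets every
            // new connection attempt look the name up again, so after the swap it
            // reaches C' while B is still dialling C's original IP.
            InetSocketAddress reResolvedOnConnect =
                    InetSocketAddress.createUnresolved(cHostName, 3888);

            System.out.println("B keeps using : " + frozenAtStartup.getAddress());
            System.out.println("A re-resolves : " + reResolvedOnConnect.getHostString()
                    + " on each connection attempt");
        }
    }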
>
> PS: Wouldn't your scenario also invalidate the solution of the hbase guys,
> who use amazon's elastic ips to solve the same problem (see
> https://issues.apache.org/jira/browse/HBASE-2327)?
> PS2: If the approach I had in mind is not valid, do you guys already have a
> plan for when 3.5.0 would be released, or could we support you in some way
> so that zookeeper-107 makes it into a release sooner?
>
> On 16.03.2012 04:43, ext Alexander Shraer wrote:
>
>> Actually it's still not clear to me how you would enforce the 2x+1. In
>> Zookeeper we can guarantee liveness (progress) only when x+1 servers are
>> connected and up; however, safety (correctness) is always guaranteed, even
>> if 2 out of 3 servers are temporarily down. Your design needs the 2x+1 for
>> safety, which I think is problematic unless you can accurately detect
>> failures (synchrony) and failures are permanent.
>>
>> Alex
>>
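
A tiny sketch of the arithmetic behind that liveness/safety split, using
nothing but the 2x+1 sizing mentioned above (plain Java, no ZooKeeper
APIs involved):

    public class QuorumSizes {
        public static void main(String[] args) {
            int x = 1;                    // failures the ensemble should tolerate
            int ensembleSize = 2 * x + 1; // e.g. 3 servers
            int quorum = x + 1;           // e.g. 2 servers must be up for progress

            System.out.println("ensemble size                 : " + ensembleSize);
            System.out.println("servers required for liveness : " + quorum);
            // Safety has no such threshold: even if all but one server is
            // temporarily down, acknowledged updates are never rolled back
            // once the others come back.
        }
    }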
>>
>> On Mar 15, 2012, at 3:54 PM, Alexander Shraer <[EMAIL PROTECTED]> wrote:
>>
>>> I think the concern is that the old VM can recover and try to
>>> reconnect. Theoretically you could even go back and forth between the
>>> new and the old VM. For example, suppose that you have servers
>>> A, B and C in the cluster, and A is the leader. C is slow and "replaced"
>>> with C', then update U is acked by A and C', then A fails. In this
>>> situation you cannot afford additional failures. But with the
>>> automatic replacement thing it can (theoretically) happen that C'

Christian Ziech 2012-03-19, 12:11
Benjamin Reed 2012-03-16, 18:15