(bump) This is a good question.
I'm new to Kerberos as well, and have been wondering how to prevent
scenarios like this from happening.
My thought is that since Kerberos (IIRC) requires a ticket for each
client/service pair, there may be a chance that if *any* two nodes in a
cluster haven't been initialized with the right tickets to talk to each
other, an error could surface during shuffle-sort, since that phase does
so much distributed copying between nodes.
In any case, I'd love to know of any good smoke tests for a large
kerberized Hadoop cluster that don't require running a MapReduce job.
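For what it's worth, here's a rough sketch of the kind of no-MapReduce smoke test I'd imagine (the keytab path and principal name are made up; adjust for your cluster):

```shell
#!/usr/bin/env bash
# Hypothetical smoke test for a kerberized Hadoop cluster -- no MR job needed.
# Assumes a smoke-test keytab/principal exist; paths below are placeholders.
set -e

# 1. Can we get a TGT from the KDC at all?
kinit -kt /etc/security/keytabs/smoke.keytab smoke@EXAMPLE.COM
klist

# 2. Does the NameNode accept our Kerberos-authenticated RPC?
hdfs dfs -ls /

# 3. Round-trip a small file so DataNodes get exercised too (write + read).
echo "smoke $(date)" > /tmp/smoke.txt
hdfs dfs -put -f /tmp/smoke.txt /tmp/smoke.txt
hdfs dfs -cat /tmp/smoke.txt

# 4. Does the ResourceManager answer over its Kerberos-secured RPC?
yarn node -list
```

Running the put/cat step from several different nodes would at least verify each node's client-side ticket setup against the NameNode and DataNodes, though it still wouldn't exercise the node-to-node shuffle traffic the way a real job does.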
On Sat, Apr 19, 2014 at 11:11 PM, Mike <[EMAIL PROTECTED]> wrote: