Any ideas on how to limit the number of times a worker is restarted? This functionality does not seem to be provided by Storm, but hopefully I am just missing something.
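For what it's worth, I have not found a restart-count cap either; the only related knobs I'm aware of are the heartbeat timeouts in Storm's defaults.yaml, which tune how quickly the kill-and-relaunch loop turns rather than capping it. A sketch of the relevant storm.yaml settings (values shown are the stock defaults, for illustration only):

```yaml
# None of these caps the number of restarts -- they only control the
# timing of the detect/kill/relaunch cycle.

# How long the supervisor waits for a freshly launched worker to start
# heartbeating before it kills and relaunches it.
supervisor.worker.start.timeout.secs: 120

# How long the supervisor tolerates missed heartbeats from a running
# worker before restarting it.
supervisor.worker.timeout.secs: 30

# How long Nimbus waits on executor heartbeats before reassigning the
# worker elsewhere (effectively a no-op target on a single-node cluster).
nimbus.task.timeout.secs: 30
```

As far as I can tell, a hard limit would have to be enforced outside Storm itself, e.g. a watchdog that counts worker launches in the supervisor log and runs `storm kill <topology-name>` once some threshold is exceeded.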

From: [EMAIL PROTECTED] At: 07/09/18 21:39:20 To: [EMAIL PROTECTED]
Subject: Worker Process Failure Behavior

Looking at http://storm.apache.org/releases/current/Fault-tolerance.html, it states that "If the worker continuously fails on startup and is unable to heartbeat to Nimbus, Nimbus will reassign the worker to another machine." What happens if the Storm cluster has a single worker node? Will the worker process just be restarted indefinitely, or is there some way to configure a cut off of how many times a worker process can be restarted?