Rigorous Performance Evaluation of Self-stabilization Using Probabilistic Model Checking

Publication Type
Conference Paper
Year of Publication
2013
Conference/Journal Name
IEEE Symposium on Reliable Distributed Systems (SRDS)
Publisher
IEEE
Abstract
We propose a new metric for effectively and accurately evaluating the performance of self-stabilizing algorithms. Self-stabilization is a versatile category of fault tolerance that guarantees system recovery to normal behavior within a finite number of steps when the state of the system is perturbed by transient faults (or, equivalently, when the system starts from an arbitrary initial state). The performance of self-stabilizing algorithms is conventionally characterized in the literature by asymptotic computational complexity. We argue that such a characterization of performance is too abstract and does not accurately reflect the realities of deploying a distributed algorithm in practice. Our new metric for characterizing the performance of self-stabilizing algorithms is the expected mean value of recovery time. This metric has several crucial features. First, it captures the accurate average-case speed of recovery. Second, we show that our evaluation method can effectively incorporate several other parameters that are important in practice but have no place in asymptotic computational complexity; examples include the type of distributed scheduler, the likelihood of fault occurrence, the impact of faults on the speed of recovery, and the network topology. We utilize a deep analysis technique, namely probabilistic model checking, to rigorously compute our proposed metric. All our claims are backed by detailed case studies and experiments.
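
To illustrate the kind of analysis the abstract describes, the following sketch computes the expected recovery time of Herman's randomized self-stabilizing token ring, a standard benchmark in probabilistic model checking, directly from its underlying Markov chain rather than from an asymptotic bound. This is an independent illustration, not the paper's own models, tool chain, or case studies; the protocol choice, the synchronous scheduler, the uniformly random initial state, and all identifiers in the code are assumptions made for the example.

# Minimal sketch (not the paper's artifact): expected recovery time of
# Herman's randomized self-stabilizing token ring, computed exactly from
# its discrete-time Markov chain under a synchronous scheduler.
#
# Herman's protocol (N odd): each process i holds a bit x[i] and "has a
# token" iff x[i] == x[i-1]. In every synchronous round, each process
# with a token sets x[i] to a fresh random bit; every other process
# copies its left neighbour's old bit. Legitimate states have exactly
# one token, and the legitimate set is closed under the protocol.

import itertools
import numpy as np

N = 5  # ring size (must be odd for Herman's protocol)

def tokens(state):
    """Indices of processes currently holding a token."""
    return [i for i in range(N) if state[i] == state[(i - 1) % N]]

def successors(state):
    """Return {next_state: probability} for one synchronous round."""
    toks = tokens(state)
    dist = {}
    # Each token-holding process independently draws a random bit (prob 1/2).
    for choice in itertools.product([0, 1], repeat=len(toks)):
        nxt = list(state)
        for i in range(N):
            if i in toks:
                nxt[i] = choice[toks.index(i)]
            else:
                nxt[i] = state[(i - 1) % N]
        nxt = tuple(nxt)
        dist[nxt] = dist.get(nxt, 0.0) + 0.5 ** len(toks)
    return dist

states = list(itertools.product([0, 1], repeat=N))
legit = {s for s in states if len(tokens(s)) == 1}
transient = [s for s in states if s not in legit]
index = {s: k for k, s in enumerate(transient)}

# Expected hitting times t of the legitimate set satisfy (I - Q) t = 1,
# where Q is the transition matrix restricted to the transient states.
Q = np.zeros((len(transient), len(transient)))
for s in transient:
    for nxt, p in successors(s).items():
        if nxt not in legit:
            Q[index[s], index[nxt]] += p

t = np.linalg.solve(np.eye(len(transient)) - Q, np.ones(len(transient)))

# Average over a uniformly random (i.e. arbitrary) initial state; legitimate
# initial states contribute a recovery time of zero. This is one way to read
# "expected mean value of recovery time".
expected = sum(t[index[s]] for s in transient) / len(states)
print(f"N = {N}: expected recovery time = {expected:.4f} rounds")

The same quantity can be obtained with a probabilistic model checker such as PRISM by asking for the expected number of rounds until a state with exactly one token is reached; the explicit linear-system solution above is only meant to make the computation behind the metric concrete. Changing the scheduler, the fault model, or the topology changes the Markov chain, which is exactly why such parameters can be folded into the metric even though they are invisible to asymptotic complexity.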