Summary form only given. We argue that because of three major "tyrants," namely complexity, time, and unpredictability, dependability is and will remain a permanent challenge. At current complexity levels, and given the necessity of dealing with time, classical synthesis and analysis methods must be complemented by empirical, data-driven approaches that require runtime monitoring, online measurement, online analysis, diagnosis, failure prediction, and decision making to support recovery and nonstop computing and communication. To better understand system behavior in the presence of faults, fault injection methods will become equally relevant. We need to adopt the approach of the natural sciences, as in physics or biology: base our work on observations and measurements in order to confirm hypotheses, create relevant behavioral models and, ideally, derive laws or principles that relate the observed variables to given inputs. Two case studies illustrate such approaches. In the first, we address proactive fault management by showing how runtime monitoring, variable selection, and model re-evaluation lead to effective failure prediction. The second shows how observation and measurement led to a generator of realistic ad hoc network topologies. Although a number of topology generation algorithms for simulating wireless multihop networks have been proposed, the literature shows that most existing node placement models create topologies considerably different from those of real networks. To address this issue we have developed a novel node placement algorithm, NPART, that creates topologies resembling real ones and supports resilience analysis.
Finally, we conclude that models derived from monitoring and measurement will gain in significance and impact, and we list the major challenges for empirical research on dependability.
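To make the proactive fault management idea concrete, the following is a minimal, hypothetical sketch of online failure prediction from a stream of monitored metrics. It is not the method from the first case study: the class name, the sliding-window mean, and the fixed threshold are all illustrative stand-ins for the paper's monitoring, variable selection, and model re-evaluation pipeline.

```python
# Illustrative sketch only: a windowed-mean trend detector standing in for
# the paper's failure-prediction pipeline. All names and parameters here
# (FailurePredictor, window, threshold) are hypothetical.
from collections import deque


class FailurePredictor:
    """Raise a failure prediction when a monitored metric
    (e.g. error count per measurement interval) trends high."""

    def __init__(self, window: int = 5, threshold: float = 10.0):
        self.samples = deque(maxlen=window)  # sliding window of observations
        self.threshold = threshold

    def observe(self, value: float) -> bool:
        """Record one runtime-monitoring sample; return True if a
        failure is predicted for the near future."""
        self.samples.append(value)
        # Predict failure once the windowed mean crosses the threshold.
        mean = sum(self.samples) / len(self.samples)
        return mean > self.threshold


predictor = FailurePredictor(window=3, threshold=4.0)
stream = [1, 2, 2, 6, 7, 9]  # error counts observed at runtime
alarms = [predictor.observe(v) for v in stream]
# Alarms fire only after the recent average of errors grows,
# giving time for recovery actions before the actual failure.
```

In a real deployment the threshold model would itself be re-evaluated online against observed failures, which is exactly the model re-evaluation step the abstract emphasizes.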