Failures in complex systems controlled by human operators can be difficult to anticipate because of unexpected interactions between the elements that compose the system, including human-automation interaction (HAI). HAI analyses would benefit from techniques that support investigating the possible combinations of system conditions and HAIs that might result in failures. Formal verification is a powerful technique used to mathematically prove that an appropriately scaled model of a system does or does not exhibit desirable properties. This paper discusses how formal verification has been used to evaluate HAI. It has been used to evaluate human-automation interfaces for usability properties and to find potential mode confusion. It has also been used to evaluate system safety properties in light of formally modeled task analytic human behavior. While capable of providing insights into problems associated with HAI, formal verification does not scale as well as other techniques such as simulation. However, advances in formal verification continue to address this problem, and approaches that allow it to complement more traditional analysis methods can potentially avoid this limitation.
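To make the idea of formal verification concrete, the sketch below shows exhaustive state-space exploration (the core of model checking) on a hypothetical toy human-automation interface. The model, the mode names, and the `auto_revert` transition are all illustrative assumptions, not the paper's models or tools; real analyses of the kind the paper surveys use dedicated model checkers rather than hand-written search.

```python
from collections import deque

# Hypothetical toy model: an autopilot interface whose state is the pair
# (actual_mode, displayed_mode). Mode confusion is any reachable state
# where the display disagrees with the actual system mode.

ACTIONS = {
    # action name -> transition function from state to next state
    "engage":    lambda s: ("ENGAGED", "ENGAGED"),
    "disengage": lambda s: ("MANUAL", "MANUAL"),
    # Silent automatic reversion: the system drops to MANUAL but the
    # display is not updated -- a classic source of mode confusion.
    "auto_revert": lambda s: ("MANUAL", s[1]) if s[0] == "ENGAGED" else s,
}

def check_no_mode_confusion(initial):
    """Exhaustively explore all reachable states (breadth-first) and
    return the first state violating actual == displayed, or None."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if state[0] != state[1]:
            return state  # counterexample: displayed mode is stale
        for step in ACTIONS.values():
            nxt = step(state)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None  # the safety property holds in every reachable state

counterexample = check_no_mode_confusion(("MANUAL", "MANUAL"))
print(counterexample)  # finds ("MANUAL", "ENGAGED")
```

Because the search visits every reachable state, a returned counterexample is a proof that the property can fail; a `None` result proves it cannot. The scalability limitation noted above arises because the number of states grows exponentially with the number of interacting components.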