I. Introduction
Neural networks (NNs) have been very successful in various applications such as end-to-end learning for self-driving cars [1], learning-based controllers in robotics [2], speech recognition, and image classification. Their vulnerability to input uncertainties and adversarial attacks, however, hinders the deployment of neural networks in safety-critical applications. In the context of image classification, for example, several works [3]–[5] have shown that adding even an imperceptible perturbation to the input of a neural network-based classifier can completely change its decision. Against this backdrop, verification refers to the process of checking whether the output of a trained NN satisfies certain desirable properties when its input is perturbed within an uncertainty model. More precisely, we would like to verify whether the neural network's prediction remains the same in a neighborhood of a test point x⋆. This neighborhood can represent, for example, the set of input examples that can be crafted by an adversary.
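For concreteness, a common way to formalize this property is sketched below; the notation here (a classifier f: ℝⁿ → ℝᴷ, an ℓ∞ ball of radius ε as the uncertainty model, and the nominal label i⋆) is introduced for illustration and need not coincide with the formulation adopted later in the paper:

\[
\arg\max_{k} f_k(x) \;=\; \arg\max_{k} f_k(x^{\star})
\quad \text{for all } x \in \mathcal{X} := \{\, x : \|x - x^{\star}\|_{\infty} \le \epsilon \,\},
\]

or, equivalently, $f_{i^{\star}}(x) - f_k(x) > 0$ for all $k \ne i^{\star}$ and all $x \in \mathcal{X}$, where $i^{\star} = \arg\max_{k} f_k(x^{\star})$ is the label predicted at the test point. Verification then amounts to certifying that this condition holds over the entire set $\mathcal{X}$, rather than at finitely many sampled inputs.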