
Probabilistic Verification and Reachability Analysis of Neural Networks via Semidefinite Programming


Abstract:

Quantifying the robustness of neural networks or verifying their safety properties against input uncertainties or adversarial attacks has become an important research area in learning-enabled systems. Most results concentrate on the worst-case scenario, in which the input of the neural network is perturbed within a norm-bounded uncertainty set. In this paper, we consider a probabilistic setting in which the uncertainty is random with known first two moments. In this context, we discuss two relevant problems: (i) probabilistic safety verification, in which the goal is to find an upper bound on the probability of violating a safety specification; and (ii) confidence ellipsoid estimation, in which, given a confidence ellipsoid for the input of the neural network, our goal is to compute a confidence ellipsoid for the output. Due to the presence of nonlinear activation functions, these two problems are very difficult to solve exactly. To simplify the analysis, our main idea is to abstract the nonlinear activation functions by the affine and quadratic constraints they impose on their input-output pairs. We then show that the safety of the abstracted network, which is sufficient for the safety of the original network, can be analyzed using semidefinite programming. We illustrate the performance of our approach with numerical experiments.
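To make the quadratic-constraint abstraction concrete, the following is a minimal sketch (not the paper's implementation) of an S-procedure semidefinite program for a one-hidden-layer ReLU network, written with NumPy and CVXPY. It certifies a half-space specification c^T f(x) <= d over a Chebyshev-type confidence ellipsoid built from the input's first two moments; when the LMI is feasible, the probability of violating the specification is at most delta. The weights, moments, and specification below are illustrative assumptions, not data from the paper.

# Minimal sketch: certify c^T f(x) <= d for a one-hidden-layer ReLU network
# f(x) = W2 @ relu(W1 @ x + b1) + b2 over a Chebyshev-type confidence
# ellipsoid derived from the input mean mu and covariance Sigma.
# If the LMI below is feasible, the specification holds on the ellipsoid,
# so the probability of violating it is at most delta. All numbers are
# illustrative; this is not the authors' code.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m = 2, 3                                   # input / hidden widths
W1, b1 = rng.standard_normal((m, n)), rng.standard_normal(m)
W2, b2 = rng.standard_normal((1, m)), rng.standard_normal(1)

mu, Sigma = np.zeros(n), 0.01 * np.eye(n)     # first two moments of the input
delta = 0.1                                   # target violation probability
# Multivariate Chebyshev: P((x-mu)^T Sigma^{-1} (x-mu) > n/delta) <= delta,
# so x lies in {x : (x-mu)^T P_ell (x-mu) <= 1} with probability >= 1-delta.
P_ell = (delta / n) * np.linalg.inv(Sigma)

c, d = np.ones(1), 50.0                       # safety spec: c^T f(x) <= d

# Stacked variable xi = [x; z; 1], where z = relu(W1 x + b1).
dim = n + m + 1
X = np.hstack([np.eye(n), np.zeros((n, m + 1))])                 # x = X xi
Z = np.hstack([np.zeros((m, n)), np.eye(m), np.zeros((m, 1))])   # z = Z xi
e = np.zeros((1, dim)); e[0, -1] = 1.0                           # 1 = e xi
V = W1 @ X + np.outer(b1, e)                                     # pre-activation
sym = lambda A: 0.5 * (A + A.T)

# Input ellipsoid and safety specification as quadratic forms in xi.
M_in = e.T @ e - (X - np.outer(mu, e)).T @ P_ell @ (X - np.outer(mu, e))
M_spec = sym(((c @ W2 @ Z).reshape(1, -1) + float(c @ b2 - d) * e).T @ e)

# S-procedure multipliers: tau for the ellipsoid and, per neuron, for the
# ReLU constraints z >= 0, z >= v, z * (z - v) = 0.
tau = cp.Variable(nonneg=True)
lam = cp.Variable(m, nonneg=True)
eta = cp.Variable(m, nonneg=True)
nu = cp.Variable(m)

M = tau * M_in + M_spec
for i in range(m):
    M = M + lam[i] * sym(Z[i:i + 1].T @ e)
    M = M + eta[i] * sym((Z[i:i + 1] - V[i:i + 1]).T @ e)
    M = M + nu[i] * sym(Z[i:i + 1].T @ (Z[i:i + 1] - V[i:i + 1]))

# Feasibility of the LMI certifies the specification on the ellipsoid.
prob = cp.Problem(cp.Minimize(0), [0.5 * (M + M.T) << 0])
prob.solve(solver=cp.SCS)
if prob.status == "optimal":
    print(f"certified: specification holds with probability >= {1 - delta}")
else:
    print("inconclusive: the relaxation could not certify the specification")

The same construction extends to deeper networks by stacking one block of multipliers per layer; the paper's probabilistic safety bound and output confidence ellipsoid are obtained from related moment-based semidefinite programs rather than this single-ellipsoid simplification.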
Date of Conference: 11-13 December 2019
Date Added to IEEE Xplore: 12 March 2020
Conference Location: Nice, France

I. Introduction

Neural Networks (NN) have been very successful in various applications such as end-to-end learning for self-driving cars [1], learning-based controllers in robotics [2], speech recognition, and image classification. Their vulnerability to input uncertainties and adversarial attacks, however, hinders the deployment of neural networks in safety-critical applications. In the context of image classification, for example, it has been shown in several works [3]–[5] that adding even imperceptible noise to the input of neural network-based classifiers can completely change their decision. In this context, verification refers to the process of checking whether the output of a trained NN satisfies certain desirable properties when its input is perturbed within an uncertainty model. More precisely, we would like to verify whether the neural network’s prediction remains the same in a neighborhood of a test point x⋆. This neighborhood can represent, for example, the set of input examples that can be crafted by an adversary.
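As an informal illustration (using notation not taken from the paper), consider a classifier with class scores f_1(x), ..., f_K(x) and predicted label i⋆ = arg max_i f_i(x⋆). Its prediction is unchanged on the neighborhood B(x⋆, ε) = {x : ||x − x⋆|| ≤ ε} exactly when f_{i⋆}(x) − f_j(x) > 0 for every j ≠ i⋆ and every x in B(x⋆, ε). Verification amounts to proving this family of inequalities over the whole neighborhood, whereas an adversarial attack amounts to exhibiting a single x that violates one of them.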
