
Investigating Lipschitz Constants in Neural Ensemble Models to Improve Adversarial Robustness


Abstract:

This work investigates the relationship between adversarial robustness and the local Lipschitz constant in ensemble neural network frameworks, namely bagging and stacking. Capitalising on this, we introduce an ensemble neural network design that improves both accuracy and adversarial resilience. We theoretically derive the local Lipschitz constants for both ensembles, offering insights into their susceptibility to adversarial attacks and identifying architectures optimal for adversarial defense. Notably, our approach removes the need for a specific adversarial attack and accommodates any number of pre-trained networks in an ensemble architecture. Evaluations on the MNIST and CIFAR-10 datasets against white-box attacks, specifically FGSM and PGD, show our approach is more adversarially robust than standalone networks and vanilla ensemble architectures.
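As a minimal numerical illustration of the mechanism the abstract describes (not the paper's actual construction), consider toy linear "networks", whose local Lipschitz constant is simply the spectral norm of their weight matrix. Averaging member outputs, as in a bagging-style ensemble, bounds the ensemble's Lipschitz constant by the average of the members' constants via the triangle inequality:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy linear "networks" f_i(x) = W_i @ x. For a linear map, the local
# (and global) Lipschitz constant equals the spectral norm of W_i.
W1 = rng.normal(size=(3, 5))
W2 = rng.normal(size=(3, 5))

def lipschitz(W):
    """Lipschitz constant of x -> W @ x, i.e. the spectral norm of W."""
    return np.linalg.norm(W, ord=2)

# Bagging-style ensemble: average the members' outputs, which for linear
# maps is the same as averaging their weight matrices.
W_ens = 0.5 * (W1 + W2)

L1, L2, L_ens = lipschitz(W1), lipschitz(W2), lipschitz(W_ens)

# Triangle inequality: ||0.5*(W1 + W2)|| <= 0.5*(||W1|| + ||W2||),
# so the ensemble's Lipschitz constant never exceeds the members' average.
assert L_ens <= 0.5 * (L1 + L2) + 1e-9
print(f"L1={L1:.3f}, L2={L2:.3f}, ensemble={L_ens:.3f}")
```

A smaller local Lipschitz constant limits how much a bounded input perturbation can move the output, which is the link to robustness against attacks such as FGSM and PGD; the paper's analysis extends this idea to nonlinear networks and to stacking.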
Date of Conference: 22-24 November 2023
Date Added to IEEE Xplore: 08 January 2024
Conference Location: Bologna, Italy

