On the Interpretable Adversarial Sensitivity of Iterative Optimizers


Abstract:

Adversarial examples are an emerging threat to machine learning (ML) models, allowing adversaries to substantially deteriorate performance by introducing seemingly unnoticeable perturbations. These attacks are typically considered an ML risk, often associated with the black-box operation of deep neural networks (DNNs) and their sensitivity to features learned from data, and are rarely viewed as a threat to classic non-learned decision rules, such as iterative optimizers. In this work we explore the sensitivity of iterative optimizers to adversarial examples, building upon recent advances in treating these methods as ML models. We identify that many iterative optimizers share two properties that make them amenable to adversarial attacks: end-to-end differentiability and the existence of impactful small perturbations. The interpretability of iterative optimizers makes it possible to associate adversarial examples with modifications to the traversed loss surface that notably affect the location of the sought minima. We visualize this effect and demonstrate the vulnerability of iterative optimizers for compressed sensing and hybrid beamforming tasks, showing that different optimizers tackling the same optimization formulation vary in their adversarial sensitivity.
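As a concrete illustration of this mechanism, the sketch below (ours, not the authors' code; all dimensions, step sizes, and the attack budget are illustrative) unrolls ISTA [14] for the LASSO [25] over a fixed number of iterations. The resulting mapping from measurements to recovered signal is end-to-end differentiable, so a small signed-gradient perturbation of the measurements, in the spirit of [17], can be obtained by backpropagating through the iterations:

```python
# Minimal sketch (illustrative, not the authors' code) of an adversarial
# attack on an iterative optimizer. ISTA [14] for the LASSO [25] is unrolled
# for K iterations, so the map y -> x_K is end-to-end differentiable and a
# signed-gradient perturbation of the measurements [17] can be backpropagated.
import torch

def ista(y, A, lam=0.1, eta=0.05, K=50):
    """K iterations of ISTA for min_x 0.5*||A x - y||^2 + lam*||x||_1."""
    x = torch.zeros(A.shape[1])
    for _ in range(K):
        r = x - eta * A.T @ (A @ x - y)                                # gradient step
        x = torch.sign(r) * torch.clamp(r.abs() - eta * lam, min=0.0)  # soft threshold
    return x

torch.manual_seed(0)
A = torch.randn(20, 40) / 20 ** 0.5             # compressed sensing matrix
x_true = torch.zeros(40)
x_true[:3] = 1.0                                # sparse ground-truth signal
y = (A @ x_true).detach().requires_grad_(True)  # clean measurements

loss = torch.sum((ista(y, A) - x_true) ** 2)    # recovery error to be maximized
loss.backward()                                 # backprop through all K iterations
y_adv = (y + 0.01 * y.grad.sign()).detach()     # small perturbation of y

err_clean = torch.norm(ista(y.detach(), A) - x_true)
err_adv = torch.norm(ista(y_adv, A) - x_true)
print(f"recovery error: clean {err_clean:.4f}, adversarial {err_adv:.4f}")
```

Even a perturbation budget that is small relative to the measurement energy can shift the minimizer that the iterations converge to, which mirrors the loss-surface sensitivity studied in the paper.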
Date of Conference: 17-20 September 2023
Date Added to IEEE Xplore: 23 October 2023
Conference Location: Rome, Italy

1. Introduction

The unprecedented success of machine learning (ML), and particularly deep learning, gives rise to new risks and threats. A notable emerging threat is that of adversarial examples [1]. These attacks allow an adversary to design minor, seemingly unnoticeable perturbations which, when added to the input of an ML model, have a notable effect on its output. The last decade has witnessed an ongoing arms race between new, increasingly sophisticated adversarial attacks and the development of countermeasures aiming to mitigate the sensitivity of ML models [2, 3].
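For intuition, a minimal sketch of one of the simplest such attacks, the fast gradient sign method (FGSM) of [17], is given below for a generic differentiable model; model, loss_fn, and eps are illustrative placeholders:

```python
# Minimal sketch of the fast gradient sign method (FGSM) [17] for a generic
# differentiable model; model, loss_fn, and eps are illustrative placeholders.
import torch

def fgsm(model, x, y, loss_fn, eps):
    """Return x + delta with ||delta||_inf <= eps chosen to increase the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()            # gradient of the loss w.r.t. the input
    return (x + eps * x.grad.sign()).detach()  # one signed-gradient ascent step
```

For a small eps the perturbation is imperceptible, yet this single signed-gradient step often suffices to notably change the model's output.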

References
[1] C. Szegedy et al., "Intriguing properties of neural networks," arXiv preprint arXiv:1312.6199, 2013.
[2] N. Akhtar, A. Mian, N. Kardan, and M. Shah, "Advances in adversarial attacks and defenses in computer vision: A survey," IEEE Access, vol. 9, pp. 155161–155196, 2021.
[3] Y. Wang et al., "Adversarial attacks and defenses in machine learning-powered networks: A contemporary survey," arXiv preprint arXiv:2303.06302, 2023.
[4] S. H. Silva and P. Najafirad, "Opportunities and challenges in deep learning adversarial robustness: A survey," arXiv preprint arXiv:2007.00753, 2020.
[5] A. Ignatiev, N. Narodytska, and J. Marques-Silva, "On relating explanations and adversarial examples," Advances in Neural Information Processing Systems, 2019.
[6] C. Zhang, P. Benz, T. Imtiaz, and I. S. Kweon, "Understanding adversarial examples from the mutual influence of images and perturbations," in Proc. IEEE/CVF CVPR, 2020, pp. 14521–14530.
[7] A. Ilyas et al., "Adversarial examples are not bugs, they are features," Advances in Neural Information Processing Systems, vol. 32, 2019.
[8] S. P. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, 2004.
[9] Z.-Q. Luo and W. Yu, "An introduction to convex optimization for communications and signal processing," IEEE J. Sel. Areas Commun., vol. 24, no. 8, pp. 1426–1438, 2006.
[10] N. Shlezinger, J. Whang, Y. C. Eldar, and A. G. Dimakis, "Model-based deep learning," Proc. IEEE, 2023.
[11] N. Shlezinger, Y. C. Eldar, and S. P. Boyd, "Model-based deep learning: On the intersection of deep learning and optimization," IEEE Access, vol. 10, pp. 115384–115398, 2022.
[12] A. Agrawal, S. Barratt, and S. Boyd, "Learning convex optimization models," IEEE/CAA J. Autom. Sinica, vol. 8, no. 8, pp. 1355–1364, 2021.
[13] N. Shlezinger and T. Routtenberg, "Discriminative and generative learning for linear estimation of random signals [lecture notes]," IEEE Signal Process. Mag., 2023.
[14] I. Daubechies, M. Defrise, and C. De Mol, "An iterative thresholding algorithm for linear inverse problems with a sparsity constraint," Communications on Pure and Applied Mathematics, vol. 57, no. 11, pp. 1413–1457, 2004.
[15] S. Boyd et al., "Distributed optimization and statistical learning via the alternating direction method of multipliers," Foundations and Trends® in Machine Learning, vol. 3, no. 1, pp. 1–122, 2011.
[16] O. Lavi and N. Shlezinger, "Learn to rapidly and robustly optimize hybrid precoding," IEEE Trans. Commun., 2023.
[17] I. J. Goodfellow, J. Shlens, and C. Szegedy, "Explaining and harnessing adversarial examples," arXiv preprint arXiv:1412.6572, 2014.
[18] A. Kurakin, I. J. Goodfellow, and S. Bengio, "Adversarial examples in the physical world," in Artificial Intelligence Safety and Security, 2018, pp. 99–112.
[19] P. Jain and P. Kar, "Non-convex optimization for machine learning," Foundations and Trends® in Machine Learning, vol. 10, no. 3-4, pp. 142–363, 2017.
[20] T. Chen et al., "Learning to optimize: A primer and a benchmark," arXiv preprint arXiv:2103.12828, 2021.
[21] M. Genzel, J. Macdonald, and M. März, "Solving inverse problems with deep neural networks - robustness included?" IEEE Trans. Pattern Anal. Mach. Intell., vol. 45, no. 1, pp. 1119–1134, 2023.
[22] A. Agrawal, S. Barratt, S. Boyd, and B. Stellato, "Learning convex optimization control policies," in Learning for Dynamics and Control. PMLR, 2020, pp. 361–373.
[23] S. Yang and L. Hanzo, "Fifty years of MIMO detection: The road to large-scale MIMOs," IEEE Commun. Surveys Tuts., vol. 17, no. 4, pp. 1941–1988, 2015.
[24] K. K. Thekumparampil, P. Jain, P. Netrapalli, and S. Oh, "Efficient algorithms for smooth minimax optimization," Advances in Neural Information Processing Systems, vol. 32, 2019.
[25] R. Tibshirani, "Regression shrinkage and selection via the lasso," Journal of the Royal Statistical Society: Series B (Methodological), vol. 58, pp. 267–288, 1996.
[26] H. Li, Z. Xu, G. Taylor, C. Studer, and T. Goldstein, "Visualizing the loss landscape of neural nets," Advances in Neural Information Processing Systems, vol. 31, 2018.
[27] I. Ahmed et al., "A survey on hybrid beamforming techniques in 5G: Architecture and system model perspectives," IEEE Commun. Surveys Tuts., vol. 20, no. 4, pp. 3060–3097, 2018.
[28] S. Jaeckel, L. Raschkowski, K. Börner, and L. Thiele, "QuaDRiGa: A 3-D multi-cell channel model with time evolution for enabling virtual field trials," IEEE Trans. Antennas Propag., vol. 62, no. 6, pp. 3242–3256, 2014.
