Abstract:
Machine unlearning refers to mechanisms that remove the influence of a subset of training data from a trained model, upon request, without incurring the cost of retraining from scratch. This paper develops a unified PAC-Bayesian framework for machine unlearning that recovers two recent design principles, variational unlearning [1] and the forgetting Lagrangian [2], as information risk minimization problems [3]. Accordingly, both criteria can be interpreted as PAC-Bayesian upper bounds on the test loss of the unlearned model that take the form of free energy metrics.
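As an illustration of the free energy form mentioned in the abstract, the generic PAC-Bayesian/information-risk criterion takes the following shape (a standard sketch of the framework in [3], [10]; not the paper's exact unlearning bound):

```latex
% Generic PAC-Bayesian free-energy criterion (illustrative sketch only).
% q: posterior over model parameters, p: prior, \hat{L}_D: empirical loss
% on dataset D, \lambda > 0: inverse-temperature parameter.
\[
  \mathcal{F}(q) \;=\; \mathbb{E}_{\theta \sim q}\!\big[\hat{L}_D(\theta)\big]
  \;+\; \frac{1}{\lambda}\,\mathrm{KL}\!\big(q \,\|\, p\big)
\]
% PAC-Bayes bounds control the test loss of q by a free energy of this
% form; its minimizer over q is the Gibbs posterior
\[
  q^{\star}(\theta) \;\propto\; p(\theta)\, e^{-\lambda \hat{L}_D(\theta)}.
\]
```

In the unlearning setting studied by the paper, the analogous free energy metrics are instead defined with respect to the retained data and the data to be forgotten, which is what allows the variational unlearning and forgetting Lagrangian criteria to be recovered as special cases.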
Published in: 2021 IEEE 31st International Workshop on Machine Learning for Signal Processing (MLSP)
Date of Conference: 25-28 October 2021
Date Added to IEEE Xplore: 15 November 2021
Print on Demand (PoD) ISSN: 1551-2541
References:
1. Quoc Phong Nguyen, Bryan Kian Hsiang Low, and Patrick Jaillet, "Variational Bayesian unlearning," Advances in Neural Information Processing Systems, vol. 33, 2020.
2. Aditya Golatkar, Alessandro Achille, and Stefano Soatto, "Eternal sunshine of the spotless net: Selective forgetting in deep networks," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 9304–9312.
3. Tong Zhang, "Information-theoretic upper and lower bounds for statistical estimation," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1307–1321, 2006.
4. Nicholas Carlini, Chang Liu, Ulfar Erlingsson, Jernej Kos, and Dawn Song, "The secret sharer: Evaluating and testing unintended memorization in neural networks," in 28th USENIX Security Symposium (USENIX Security 19), 2019, pp. 267–284.
5. Yinzhi Cao and Junfeng Yang, "Towards making systems forget with machine unlearning," in 2015 IEEE Symposium on Security and Privacy, IEEE, 2015, pp. 463–480.
6. Antonio Ginart, Melody Y. Guan, Gregory Valiant, and James Zou, "Making AI forget you: Data deletion in machine learning," arXiv preprint arXiv:1907.05012, 2019.
7. Lucas Bourtoule, Varun Chandrasekaran, Christopher A. Choquette-Choo, Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, and Nicolas Papernot, "Machine unlearning," arXiv preprint arXiv:1912.03817, 2019.
8. David A. McAllester, "PAC-Bayesian model averaging," in Proc. of Annual Conf. on Computational Learning Theory (COLT), July 1999, pp. 164–170.
9. Pascal Germain, Alexandre Lacasse, François Laviolette, and Mario Marchand, "PAC-Bayesian learning of linear classifiers," in Proceedings of the 26th Annual International Conference on Machine Learning, 2009, pp. 353–360.
10. Sharu Theresa Jose and Osvaldo Simeone, "Free energy minimization: A unified framework for modeling, inference, learning, and optimization [lecture notes]," IEEE Signal Processing Magazine, vol. 38, no. 2, pp. 120–125, 2021.
11. Omar Rivasplata, Ilja Kuzborskij, Csaba Szepesvári, and John Shawe-Taylor, "PAC-Bayes analysis beyond the usual bounds," arXiv preprint arXiv:2006.13057, 2020.
12. Ayush Sekhari, Jayadev Acharya, Gautam Kamath, and Ananda Theertha Suresh, "Remember what you want to forget: Algorithms for machine unlearning," arXiv preprint arXiv:2103.03279, 2021.
13. David A. McAllester, "PAC-Bayesian stochastic model selection," Machine Learning, vol. 51, no. 1, pp. 5–21, 2003.
14. Benjamin Guedj, "A primer on PAC-Bayesian learning," arXiv preprint arXiv:1901.05353, 2019.
15. Gintare Karolina Dziugaite and Daniel M. Roy, "Data-dependent PAC-Bayes priors via differential privacy," arXiv preprint arXiv:1802.09583, 2018.