
Reducing Model Memorization to Mitigate Membership Inference Attacks



Abstract:

Given a machine learning model and a record, a membership inference attack determines whether that record was part of the model's training dataset, which raises privacy concerns. There is therefore a need for robust mitigation techniques against this attack that do not degrade utility. One state-of-the-art framework in this area is SELENA, which trains a protected model in two phases: Split-AI and Self-Distillation. In this paper, we introduce a novel approach to the Split-AI phase that weakens membership inference by using the Jacobian matrix norm and entropy. We experimentally demonstrate that, on three datasets (Purchase100, CIFAR-10, and SVHN), our approach reduces the memorization of the machine learning model more than SELENA at the same level of utility, in a setting in which no members of the training data are known.
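The abstract mentions two per-sample signals used in the modified Split-AI phase: the norm of the Jacobian of the model's output with respect to the input, and the entropy of the predicted class distribution. The paper's own implementation is not shown here; the following PyTorch sketch is only a hedged illustration of how these two quantities might be computed for a single record. The function name and the way the scores would feed into Split-AI are assumptions for illustration, not the authors' method.

```python
import torch
import torch.nn.functional as F


def jacobian_norm_and_entropy(model, x):
    """Illustrative sketch (not the paper's code): for one input x, compute
    (1) the Frobenius norm of the Jacobian of the softmax output w.r.t. x and
    (2) the entropy of the predicted class distribution.
    High Jacobian norm and low entropy are commonly used proxies for memorization.
    """
    x = x.clone().detach().requires_grad_(True)
    probs = F.softmax(model(x.unsqueeze(0)), dim=1).squeeze(0)  # shape: [num_classes]

    # Entropy of the prediction distribution (small constant avoids log(0)).
    entropy = -(probs * torch.log(probs + 1e-12)).sum()

    # Build the Jacobian row by row: gradient of each class probability w.r.t. x.
    rows = []
    for c in range(probs.shape[0]):
        grad_c = torch.autograd.grad(probs[c], x, retain_graph=True)[0]
        rows.append(grad_c.flatten())
    jacobian = torch.stack(rows)            # shape: [num_classes, input_dim]
    jac_norm = torch.linalg.norm(jacobian)  # Frobenius norm by default

    return jac_norm.item(), entropy.item()
```

In such a setup, records with unusually large Jacobian norms or unusually low prediction entropy could be treated as more heavily memorized and handled differently during the Split-AI phase; how exactly that selection is done is specific to the paper and not reproduced here.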
Date of Conference: 01-03 November 2023
Date Added to IEEE Xplore: 29 May 2024
Conference Location: Exeter, United Kingdom
