Model Inversion Attack with Least Information and an In-depth Analysis of its Disparate Vulnerability


Abstract:

In this paper, we study model inversion attribute inference (MIAI), a machine learning (ML) privacy attack that aims to infer sensitive information about the training data given access to the target ML model. We design a novel black-box MIAI attack that assumes the least adversary knowledge/capabilities to date while still performing similarly to the state-of-the-art attacks. Further, we extensively analyze the disparate vulnerability property of our proposed MIAI attack, i.e., elevated vulnerabilities of specific groups in the training dataset (grouped by gender, race, etc.) to model inversion attacks. First, we investigate existing ML privacy defense techniques: (1) mutual information regularization, and (2) fairness constraints, and show that neither of these techniques can mitigate MIAI disparity. Second, we empirically identify possible disparity factors and discuss potential ways to mitigate disparity in MIAI attacks. Finally, we demonstrate our findings by extensively evaluating our attack in estimating binary and multi-class sensitive attributes on three different target models trained on three real datasets.
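To illustrate the general idea of black-box attribute inference that the abstract describes, the sketch below shows a generic confidence-based inversion loop in the style of earlier MIAI work. This is a hypothetical illustration, not the paper's actual attack: the function names (`infer_sensitive_attribute`, `toy_model`) and the assumption that the adversary knows the record's non-sensitive features and true label are ours, adopted for concreteness.

```python
# Hypothetical sketch of a generic black-box model-inversion attribute
# inference loop (NOT the attack proposed in this paper): try each
# candidate value of the sensitive attribute, query the target model,
# and return the value whose prediction best agrees with the known label.

def infer_sensitive_attribute(predict, known_features, true_label, candidates):
    """predict: black-box f(features) -> probability of the positive class."""
    best_value, best_score = None, -1.0
    for value in candidates:
        # Complete the record with a guessed sensitive value.
        features = {**known_features, "sensitive": value}
        p = predict(features)
        # Score = confidence the model assigns to the record's true label.
        score = p if true_label == 1 else 1.0 - p
        if score > best_score:
            best_value, best_score = value, score
    return best_value


# Toy stand-in for the target model: the positive class is far more
# likely when the sensitive attribute equals 1.
def toy_model(features):
    return 0.9 if features["sensitive"] == 1 else 0.2


guess = infer_sensitive_attribute(toy_model, {"age": 40}, true_label=1,
                                  candidates=[0, 1])
print(guess)  # -> 1
```

The same loop extends to multi-class sensitive attributes by enlarging `candidates`; the paper's contribution, per the abstract, is achieving comparable accuracy while assuming strictly less adversary knowledge than loops like this one require.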
Date of Conference: 08-10 February 2023
Date Added to IEEE Xplore: 01 June 2023
Conference Location: Raleigh, NC, USA
