Abstract:
Transparency has become a critical need in machine learning (ML) applications. Designing transparent ML models helps increase trust, ensure accountability, and scrutinize fairness. Some organizations may opt out of transparency to protect individuals’ privacy. There is therefore a great demand for transparency models that account for both privacy and security risks. Such transparency models can motivate organizations to improve their credibility by making the ML-based decision-making process comprehensible to end users. Differential privacy (DP) provides an important technique for disclosing information while protecting individual privacy. However, it has been shown that DP alone cannot prevent certain types of privacy attacks against disclosed ML models. DP with low ε values can provide strong privacy guarantees, but may result in significantly less accurate ML models; on the other hand, setting ε too high may leave the disclosed models vulnerable to privacy attacks. This raises the question of whether accurate, transparent ML models can be disclosed while preserving privacy. In this article, we introduce a novel technique that complements DP to ensure model transparency and accuracy while remaining robust against model inversion attacks. We show that combining the proposed technique with DP provides highly transparent and accurate ML models while preserving privacy against model inversion attacks.
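To make the ε/accuracy tradeoff the abstract describes concrete, here is a minimal sketch of the Laplace mechanism, the canonical ε-DP primitive. This is not the paper's proposed technique; the count query, sensitivity, and ε values are hypothetical choices for illustration only.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value under epsilon-DP via the Laplace mechanism.

    Noise scale is sensitivity / epsilon: a smaller epsilon (stronger
    privacy guarantee) means larger noise and hence lower accuracy.
    """
    rng = rng or np.random.default_rng()
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

# Hypothetical example: releasing a count query (sensitivity 1)
# at two privacy levels.
rng = np.random.default_rng(0)
print(laplace_mechanism(1000, sensitivity=1.0, epsilon=0.1, rng=rng))   # very noisy
print(laplace_mechanism(1000, sensitivity=1.0, epsilon=10.0, rng=rng))  # near 1000
```

With ε = 0.1 the noise scale is 10, so the released count can stray far from the true value; with ε = 10 the scale is 0.1 and the release is nearly exact, mirroring the privacy/accuracy tension the article addresses.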
Published in: IEEE Transactions on Dependable and Secure Computing (Volume: 18, Issue: 5, Sept.-Oct. 2021)