Abstract:
The rapid adoption of machine learning (ML) across industries raises serious concerns about data privacy, particularly when sensitive data is involved. By integrating trustworthy techniques in the preprocessing, feature selection, and classification stages, this research provides a comprehensive methodology for developing machine learning models that preserve privacy. Preprocessing uses data anonymisation techniques such as k-anonymity and l-diversity to protect sensitive attributes while maintaining data utility. Privacy-aware feature selection (PAFS) is used to identify and retain features that maximise model performance without compromising privacy. In the classification stage, privacy-preserving neural networks handle anonymised or encrypted input while using secure computing frameworks to provide accurate predictions. For privacy-sensitive applications such as healthcare and banking, this integrated approach provides a scalable solution, addressing the key challenge of balancing privacy and utility. Experimental results demonstrate the effectiveness of the proposed framework and its potential to support ethical and secure machine learning in real-world applications.
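To make the preprocessing step concrete, the sketch below shows one simple way a k-anonymity constraint can be enforced on tabular data: records whose quasi-identifier combination appears fewer than k times are suppressed, so every retained record is indistinguishable from at least k-1 others. This is an illustrative sketch only, not the paper's implementation; the column names, the value of k, and the suppression-only strategy (rather than generalisation) are assumptions.

```python
# Illustrative sketch of a k-anonymity suppression step (not the paper's code).
# Quasi-identifier columns, the sensitive column, and k are hypothetical choices.
import pandas as pd


def enforce_k_anonymity(df: pd.DataFrame, quasi_identifiers: list[str], k: int = 3) -> pd.DataFrame:
    """Drop rows whose quasi-identifier combination occurs fewer than k times,
    so each remaining record shares its quasi-identifiers with at least k-1 others."""
    group_sizes = df.groupby(quasi_identifiers)[quasi_identifiers[0]].transform("size")
    return df[group_sizes >= k].reset_index(drop=True)


if __name__ == "__main__":
    # Toy records: age band and ZIP prefix act as quasi-identifiers,
    # diagnosis is the sensitive attribute to be protected.
    records = pd.DataFrame({
        "age_band":   ["30-39", "30-39", "30-39", "40-49", "40-49"],
        "zip_prefix": ["940",   "940",   "940",   "941",   "941"],
        "diagnosis":  ["A",     "B",     "A",     "C",     "A"],
    })
    # With k=3, only the first group (three matching records) survives.
    print(enforce_k_anonymity(records, ["age_band", "zip_prefix"], k=3))
```

In practice, frameworks based on generalisation (coarsening ages or ZIP codes) retain more utility than pure suppression; this sketch only illustrates the constraint itself.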
Published in: 2025 International Conference on Multi-Agent Systems for Collaborative Intelligence (ICMSCI)
Date of Conference: 20-22 January 2025
Date Added to IEEE Xplore: 27 February 2025