I. Introduction
Data privacy is a crucial aspect of data generation, storage, and processing, particularly given the increasing emphasis on data security. Over the past decade, numerous methods have been developed to protect privacy. Traditional approaches rely on anonymization techniques [1]; examples include HybrEx [2], k-anonymity [3], t-closeness [4], and ℓ-diversity [5], which aim to render each record in a released dataset indistinguishable with respect to (w.r.t.) a minimum number of individuals in the population. These techniques safeguard published datasets against identity disclosure. However, anonymized data remains problematic: it is hard to analyze, and the relationships within the data are not preserved. Another category of privacy protection methods, which preserves the ability to analyze the data, employs encryption techniques, including garbled circuits [6], homomorphic encryption [7], secret sharing [8], and others. More recent advances in privacy protection involve noise-based algorithms such as differential privacy [9], [10], which require that the result of any analysis conducted on a released dataset remain insensitive to the insertion or deletion of a single tuple. However, these earlier privacy protection methods typically perturb the true values of the data, thus affecting its usability and accuracy.
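For reference, the insensitivity property mentioned above is the standard differential privacy guarantee of [9]; the notation below (a randomized mechanism \(\mathcal{M}\), neighboring datasets \(D\) and \(D'\) differing in a single tuple, privacy budget \(\varepsilon\), and output set \(S\)) is introduced here for illustration rather than taken from this paper. A mechanism \(\mathcal{M}\) satisfies \(\varepsilon\)-differential privacy if, for all such \(D\), \(D'\) and every measurable output set \(S\),

\[ \Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{M}(D') \in S]. \]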