Privacy-Preserving Artificial Intelligence: Principles, Methods, Applications, and Challenges

https://doi.org/10.48185/jaai.v6i2.1918

Authors

  • Olayemi Olasehinde, Department of Computer Science, University of Huddersfield, Huddersfield, United Kingdom
  • Boniface Kayode Alese, Department of Cyber Security, Federal University of Technology, Akure, Nigeria
  • Ojonukpe Eqwuche, Department of Computer Science, Federal Polytechnic, Ile-Oluji, Nigeria

Keywords:

federated learning, data privacy, secure computation, homomorphic encryption, differential privacy, distributed learning

Abstract

Artificial intelligence systems increasingly rely on large volumes of sensitive data to support decision making in domains such as healthcare, finance, education, and public administration. While these systems offer substantial benefits, their growing dependence on personal information has intensified concerns about privacy, data misuse, and loss of public trust. This study examines privacy-preserving artificial intelligence as a design approach that enables meaningful data analysis while limiting the exposure of sensitive information. The paper analyses key privacy-preserving techniques, including federated learning, differential privacy, homomorphic encryption, and secure multi-party computation, and evaluates their relevance across critical application domains. It also identifies practical challenges related to computational cost, data heterogeneity, regulatory compliance, and explainability. The study shows that privacy-preserving methods can support responsible and trustworthy artificial intelligence when privacy, utility, and governance considerations are addressed together.
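To make one of the techniques named in the abstract concrete, the sketch below shows a minimal ε-differentially-private counting query using the Laplace mechanism. This is an illustrative example, not code from the paper: the dataset, predicate, and parameter values are invented, and the noise scale follows the standard result that a counting query has sensitivity 1.

```python
import numpy as np


def laplace_count(data, predicate, epsilon, rng):
    """Return a differentially private count of records matching `predicate`.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so adding Laplace noise with
    scale 1/epsilon satisfies epsilon-differential privacy.
    """
    true_count = sum(1 for x in data if predicate(x))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise


# Hypothetical example: publish how many records exceed a threshold
# without revealing any individual's exact contribution.
rng = np.random.default_rng(0)
ages = [23, 45, 67, 34, 52, 71, 29, 60]
noisy = laplace_count(ages, lambda a: a >= 50, epsilon=0.5, rng=rng)
print(round(noisy, 2))  # true count is 4; output is 4 plus Laplace noise
```

Smaller values of ε add more noise and give stronger privacy; repeated queries consume additional privacy budget, which is the central accounting problem the abstract's "privacy, utility, and governance" trade-off refers to.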

Published

2025-12-31

How to Cite

Olasehinde, O., Alese, B. K., & Eqwuche, O. (2025). Privacy-Preserving Artificial Intelligence: Principles, Methods, Applications, and Challenges. Journal of Applied Artificial Intelligence, 6(2), 60–70. https://doi.org/10.48185/jaai.v6i2.1918

Section

Articles