In an era where data privacy is paramount, traditional centralized machine learning (ML) models pose significant risks to user confidentiality. This paper explores the potential of federated learning as a privacy-preserving solution to this challenge. Federated learning allows ML models to be trained across multiple distributed devices or servers without sharing raw data. By keeping data local and aggregating only model updates, this approach mitigates the risks of data breaches and unauthorized access. We discuss the theoretical foundations of federated learning, techniques for secure aggregation and communication, and the practical challenges of implementing such systems. Furthermore, we present a comprehensive review of existing federated learning models, analyzing their strengths and limitations. The paper concludes with a forward-looking perspective on future directions for privacy-preserving ML, highlighting the potential for wider adoption and the need for robust standards to ensure the effectiveness and security of these systems.
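The train-locally-then-aggregate pattern described above can be illustrated with a minimal federated-averaging (FedAvg-style) sketch. All function names, the toy one-parameter model, and the client data below are illustrative assumptions, not drawn from the paper; the point is only that raw data stays inside each client's `local_update` while the server sees nothing but weights.

```python
from typing import List

def local_update(weights: List[float], data: List[float], lr: float = 0.1) -> List[float]:
    """One gradient step on a client's private data (toy 1-D mean-fitting model).
    Raw data never leaves this function; only the updated weights are returned."""
    w = weights[0]
    grad = sum(w - x for x in data) / len(data)  # gradient of mean squared error
    return [w - lr * grad]

def federated_average(client_weights: List[List[float]], client_sizes: List[int]) -> List[float]:
    """Server-side aggregation: average client updates weighted by local dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two clients holding private datasets; only model weights cross the network.
clients = [[1.0, 2.0, 3.0], [10.0, 12.0]]
global_model = [0.0]
for _ in range(200):
    updates = [local_update(global_model, d) for d in clients]
    global_model = federated_average(updates, [len(d) for d in clients])
print(round(global_model[0], 2))  # converges to the size-weighted mean, 5.6
```

Here the aggregated model converges to the statistic it would have learned on the pooled data, even though no client ever transmits a raw example; production systems layer secure aggregation and differential privacy on top of this basic loop.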
Martin, D. Privacy-Preserving Machine Learning Models Using Federated Learning. Information Sciences and Technological Innovations, 2023, 5, 39. https://doi.org/10.69610/j.isti.20230412