Robustness of Machine Learning Models Against Adversarial Perturbations: Theoretical Foundations and Practical Implementations


Amina Fatima Mohamed Abdelrahman
Tarek Ahmed Ibrahim Abdelaziz

Abstract

Machine learning models have achieved remarkable success across many domains, but their vulnerability to adversarial perturbations poses significant challenges to their robustness and trustworthiness. Adversarial perturbations are carefully crafted input modifications that cause models to produce incorrect outputs while remaining imperceptible to human observers. This paper examines the robustness of machine learning models against such attacks, covering both the theoretical foundations of adversarial vulnerability and practical techniques for mitigating it. On the theoretical side, it addresses the high-dimensional nature of modern models, the geometric properties of decision boundaries, the trade-off between accuracy and robustness, and connections to related fields such as game theory and optimization. On the practical side, it discusses defensive strategies including adversarial training, input transformations, model ensembling, and certified defenses. The paper concludes by highlighting open challenges and research directions for building robust and secure machine learning systems.
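
The abstract itself contains no code; as a minimal sketch of the kind of perturbation it describes, the following PyTorch snippet implements the well-known fast gradient sign method (FGSM). The toy model, the epsilon value, and the random data are illustrative placeholders, not the authors' experimental setup.

```python
import torch
import torch.nn as nn

# Toy classifier standing in for any differentiable model (hypothetical setup).
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
loss_fn = nn.CrossEntropyLoss()

def fgsm_perturb(x, y, epsilon=0.03):
    """Return an FGSM adversarial example: x + epsilon * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    # Step in the direction that maximally increases the loss,
    # bounded in the L-infinity norm by epsilon.
    x_adv = x + epsilon * x.grad.sign()
    # Keep the perturbed input in the valid data range.
    return x_adv.clamp(0.0, 1.0).detach()

# Usage: a random "image" batch; in practice x would be real data in [0, 1].
x = torch.rand(8, 784)
y = torch.randint(0, 10, (8,))
x_adv = fgsm_perturb(x, y)
```

Adversarial training, the first defense the abstract lists, amounts to generating such perturbed batches on the fly during training and minimizing the loss on them alongside (or instead of) the clean examples.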


Article Details

How to Cite
Abdelrahman, A. F. M., & Abdelaziz, T. A. I. (2023). Robustness of machine learning models against adversarial perturbations: Theoretical foundations and practical implementations. International Journal of Machine Intelligence for Smart Applications, 13(10), 1-10. https://dljournals.com/index.php/IJMISA/article/view/10
Section
Articles

