Abstract
Artificial intelligence (AI) systems are increasingly becoming integral components of many industries. However, they remain vulnerable to adversarial attacks, through which the behavior of AI models can be manipulated, leading to severe consequences that demand effective countermeasures. In this paper, we explore how the guidelines outlined by the International Organization for Standardization and the International Electrotechnical Commission (ISO/IEC) can be applied to enhance the security and resilience of AI systems against adversarial attacks. We propose a novel framework that can effectively detect and mitigate such attacks, thereby ensuring robust AI system deployment. The proposed framework consists of three major components, Risk Management, System Architecture and Design, and Continuous Monitoring and Improvement, aligning with ISO/IEC standards. Furthermore, our research bridges the gap between theoretical understanding and practical implementation by providing detailed strategies and real-world case studies, making it a robust framework for mitigating adversarial threats and enhancing AI security.
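To make the threat concrete, the following toy sketch (not taken from the paper; all weights and inputs are hypothetical) shows a Fast Gradient Sign Method (FGSM)-style perturbation against a simple logistic-regression classifier, illustrating how a small, bounded change to the input can flip a model's prediction:

```python
# Illustrative toy example: FGSM-style adversarial perturbation
# against a logistic-regression classifier. Weights and inputs
# are hypothetical, chosen only to demonstrate the effect.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, x):
    """Probability that x belongs to class 1."""
    return sigmoid(w @ x)

def fgsm(w, x, y, eps):
    """Perturb x in the direction that increases the cross-entropy
    loss, bounded by eps per feature (sign of the input gradient)."""
    grad = (predict(w, x) - y) * w   # d(cross-entropy)/dx
    return x + eps * np.sign(grad)

w = np.array([2.0, -1.0, 0.5])       # hypothetical trained weights
x = np.array([0.4, 0.1, 0.2])        # clean input, classified as class 1
y = 1.0                              # true label
x_adv = fgsm(w, x, y, eps=0.5)

print(predict(w, x) > 0.5)           # clean input: classified as class 1
print(predict(w, x_adv) > 0.5)       # perturbed input: prediction flips
```

This attack assumes white-box access to the model's gradients; the framework proposed in the paper targets exactly this class of manipulation through risk management, architectural safeguards, and continuous monitoring.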
Table of Contents
1. Introduction
2. Background and Motivation
3. Proposed Framework
3.1. Risk Management
4. Implementation and Performance Evaluation
5. Conclusion
Acknowledgement
References
