Article Information

Abstract (English)
As Artificial Intelligence (AI) continues to permeate sectors such as healthcare, finance, and transportation, securing AI systems against emerging threats has become increasingly critical. The proliferation of AI across these industries not only creates opportunities for innovation but also exposes vulnerabilities that malicious actors could exploit. This comprehensive review examines the current landscape of AI security, providing an in-depth analysis of the threats, challenges, and mitigation strategies associated with AI technologies. The paper discusses key threats such as adversarial attacks, data poisoning, and model inversion, each of which can severely compromise the integrity, confidentiality, and availability of AI systems. It also explores the challenges posed by the inherent complexity and opacity of AI models, particularly deep learning networks. The review then evaluates mitigation strategies developed to safeguard AI systems, including adversarial training, differential privacy, and federated learning. By synthesizing recent advancements and identifying gaps in existing research, this paper aims to guide future efforts to enhance the security of AI applications, ultimately ensuring their safe and ethical deployment in both critical and everyday environments.
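To make the adversarial-attack threat mentioned above concrete, the following is a minimal sketch of the Fast Gradient Sign Method (FGSM), a standard adversarial-attack technique. The logistic-regression "model" (weights `w`, bias `b`), the inputs, and the epsilon value are all illustrative assumptions, not details from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, epsilon=0.1):
    """Craft an adversarial input by stepping along the sign of the
    gradient of the binary cross-entropy loss with respect to x."""
    p = sigmoid(w @ x + b)       # model's prediction in (0, 1)
    grad_x = (p - y) * w         # dL/dx for sigmoid + cross-entropy
    return x + epsilon * np.sign(grad_x)

# Toy demo: a small, bounded perturbation pushes the model's
# confidence away from the true label.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.5, 0.2])
y = 1.0                          # true label

x_adv = fgsm_perturb(x, y, w, b, epsilon=0.25)
print(sigmoid(w @ x + b))        # confidence on the clean input (~0.69)
print(sigmoid(w @ x_adv + b))    # reduced confidence on the adversarial input (~0.51)
```

The same sign-of-gradient step, applied during training to generate hard examples, is the core of the adversarial training defense the abstract lists among its mitigation strategies.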
Table of Contents
1. INTRODUCTION
2. THREATS TO AI SYSTEMS
2.1 Adversarial Attacks
3. CHALLENGES IN AI SECURITY
3.1 Explainability and Transparency
3.2 Robustness and Reliability
3.3 Ethical and Legal Issues
4. MITIGATION STRATEGIES
5. FUTURE DIRECTIONS
5.1 Advanced Defense Mechanisms
5.2 Interdisciplinary Research
5.3 Policy and Regulation
5.4 Explainable AI (XAI)
5.5 Continuous Monitoring
6. DISCUSSION
7. CONCLUSION
ACKNOWLEDGEMENT
REFERENCES
