Source Information

Abstract (English)
With the development of artificial intelligence technology, face recognition systems based on deep neural networks are widely used in security monitoring, identity authentication, and human-computer interaction. However, recent studies have shown that face recognition systems are not fully prepared for deployment-level adversarial attacks: adversarial samples can undermine the integrity and availability of a face recognition system by poisoning its datasets. We demonstrate how an attacker can degrade the reliability of a face recognition system by injecting crafted adversarial images into its test data. In addition, this article introduces a strategy for defending against such attacks, mitigating the resulting performance degradation through defensive distillation. By empirically evaluating the face recognition system with and without the defense mechanism, we quantify the impact of both the attack and the defense on recognition performance.
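The defense named in the abstract, defensive distillation, trains a "student" network on probability vectors produced by a "teacher" network at an elevated softmax temperature T, which smooths the decision surface an attacker exploits. The sketch below is a minimal illustration of that idea using a linear classifier on random toy data; it is not the article's actual ArcFace pipeline, and the dimensions, temperature, and learning rate are illustrative assumptions.

```python
import numpy as np

def softmax_T(logits, T=1.0):
    """Temperature-scaled softmax; larger T yields softer probabilities."""
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)   # stabilize the exponentials
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))              # toy stand-in for face embeddings
W_teacher = rng.normal(size=(8, 3))        # "pretrained" teacher classifier
T = 20.0                                   # distillation temperature

# Step 1: the teacher labels the training set with softened probabilities.
soft_labels = softmax_T(X @ W_teacher, T)

# Step 2: the student is trained against those soft labels at the same
# temperature, minimizing soft-label cross-entropy by gradient descent.
W_student = np.zeros((8, 3))
for _ in range(1000):
    p = softmax_T(X @ W_student, T)
    grad = X.T @ (p - soft_labels) / len(X)
    W_student -= 10.0 * grad

# At deployment the student reproduces the teacher's decisions, but the
# softened training targets leave it with smaller, smoother gradients for
# an adversary to exploit.
agreement = np.mean(
    (X @ W_student).argmax(axis=1) == (X @ W_teacher).argmax(axis=1)
)
```

In the original formulation the temperature is set back to 1 at inference time; only training uses the softened targets, which is what dampens the input gradients that gradient-based adversarial attacks rely on.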
Table of Contents
1. Introduction
2. Related Work
2.1 ArcFace FRS Model
2.2 Adversarial Attacks
2.3 Comparison with Existing Adversarial Attack and Defense Methods
3. Methodology
3.1 Training Data
3.2 Experimental Model
3.3 Defensive Distillation
3.4 Performance Indicators
4. Experiment
5. Conclusion
Acknowledgement
REFERENCES
