earticle


Other IT related Technology

Adversarial sample poisoning and security enhancement strategies for deep neural network face recognition systems

Abstract

With the development of artificial intelligence technology, face recognition systems based on deep neural networks are widely used in security monitoring, identity authentication, and human-computer interaction. However, recent studies have shown that face recognition systems are not fully prepared for deployment-level adversarial attacks, and adversarial samples can undermine the integrity and availability of face recognition systems by poisoning their datasets. We demonstrate how attackers can undermine the reliability of face recognition systems by injecting crafted adversarial images into the test data. We then introduce a strategy to defend against such attacks, mitigating the resulting performance degradation through defensive distillation. Through an empirical evaluation of face recognition systems with and without the defense mechanism, we quantify the impact on face recognition performance.
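The abstract names two technical ingredients: crafted adversarial images and defensive distillation. As a rough illustration only (the listing does not specify the attack method; an FGSM-style perturbation and the temperature-scaled softmax at the core of defensive distillation are assumptions here, not the paper's exact procedure), the two ideas can be sketched in plain Python:

```python
import math

def softmax_T(logits, T=1.0):
    """Temperature-scaled softmax. Defensive distillation trains the
    teacher network at T > 1 and fits a student to the resulting soft
    labels, smoothing the loss surface that gradient attacks rely on."""
    z = [l / T for l in logits]
    m = max(z)                       # subtract the max for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def fgsm_perturb(x, grad, eps=0.03):
    """One FGSM-style step (an assumed attack for illustration):
    x' = clip(x + eps * sign(dL/dx), 0, 1), applied per pixel."""
    sign = lambda v: (v > 0) - (v < 0)
    return [min(1.0, max(0.0, xi + eps * sign(gi))) for xi, gi in zip(x, grad)]

# A high temperature flattens the output distribution:
logits = [8.0, 2.0, 0.5]
hard = softmax_T(logits, T=1.0)      # near one-hot
soft = softmax_T(logits, T=20.0)     # much flatter, same argmax
```

At inference the distilled student is typically evaluated back at T = 1; the intent is not that distillation is unbreakable, only that it raises the perturbation budget an attacker needs.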

Table of Contents

Abstract
1. Introduction
2. Related Work
2.1 ArcFace FRS Model
2.2 Adversarial Attacks
2.3 Comparison with Existing Adversarial Attack and Defense Methods
3. Methodology
3.1 Training Data
3.2 Experimental Model
3.3 Defensive Distillation
3.4 Performance Indicators
4. Experiment
5. Conclusion
Acknowledgement
REFERENCES

Author Information

  • Jinquan Ju, Department of Computer Engineering, Dongseo University, Busan, Korea
  • Hoon Jae Lee, Professor, Department of Information Security, Dongseo University, Korea
  • ByungGook Lee, Professor, Department of Computer Engineering, Dongseo University, Korea

