
Other IT related Technology

Noise, Reverse, Amplify, Attenuate, shift-based Audio Attack Model

Abstract

This study analyzes various types of audio recognition attacks (Noise, Reverse, Amplify, Attenuate, Shift, etc.) faced by AI-based speech recognition systems and their effects. Through experiments, the impact of each attack type on the speech model's output was quantitatively evaluated using performance metrics such as MSE, MAE, SNR, CrossCorrMax, CosineSim, PearsonCorr, Emotion_Label, and Emotion_Score. The results showed that Reverse and Shift attacks severely degraded the emotion classification accuracy and reliability of the speech recognition model, while Amplify and Attenuate attacks caused subtle but significant changes in emotion labels.
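The five NRAAS perturbations and a few of the listed metrics can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function names, gain values, shift length, and SNR target are all illustrative assumptions.

```python
import numpy as np


def noise(x, target_snr_db=20.0, rng=None):
    # Noise attack: additive white Gaussian noise at a target SNR (dB).
    if rng is None:
        rng = np.random.default_rng(0)
    p_signal = np.mean(x ** 2)
    p_noise = p_signal / (10.0 ** (target_snr_db / 10.0))
    return x + rng.normal(0.0, np.sqrt(p_noise), size=x.shape)


def reverse(x):
    # Reverse attack: play the waveform backwards in time.
    return x[::-1]


def amplify(x, gain=2.0):
    # Amplify attack: scale the waveform up (gain > 1).
    return gain * x


def attenuate(x, gain=0.5):
    # Attenuate attack: scale the waveform down (0 < gain < 1).
    return gain * x


def shift(x, n_samples=100):
    # Shift attack: circular time shift by n_samples.
    return np.roll(x, n_samples)


# --- A subset of the evaluation metrics named in the abstract ---

def mse(clean, attacked):
    return float(np.mean((clean - attacked) ** 2))


def mae(clean, attacked):
    return float(np.mean(np.abs(clean - attacked)))


def snr_db(clean, attacked):
    # SNR of the attacked signal relative to the clean reference, in dB.
    return float(10.0 * np.log10(np.mean(clean ** 2)
                                 / np.mean((clean - attacked) ** 2)))


def cosine_sim(clean, attacked):
    return float(np.dot(clean, attacked)
                 / (np.linalg.norm(clean) * np.linalg.norm(attacked)))
```

For example, because amplification only rescales the waveform, `cosine_sim(x, amplify(x))` stays at 1.0 even though `mse` grows, which is consistent with amplitude attacks changing model confidence more subtly than time-structure attacks such as Reverse and Shift.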

Table of Contents

Abstract
1. Introduction
2. Related Research
3. Audio adversarial attack model
4. Experiments and results of NRAAS attack models
4.1 Adversarial attack models of Noise, Reverse, Amplify, Attenuate, Shift audio recognition
4.2 Test & evaluation for adversarial attack models for NRAAS
4.3 Defense Strategies
5. Conclusions
Acknowledgement
References

Author Information

  • Jin-keun Hong Professor, Division of Advanced IT / X-Tec, Baekseok University, Korea
