

Poster Session Ⅱ : Artificial Intelligence / IoT & Big Data

A simple new sparsity check method for L-infinity Attack

Abstract

There are various adversarial attacks on neural networks, and many previous studies have reported that neural networks are vulnerable to them. From the perspective of safety, we have to evaluate a model's robustness against these attacks. However, it is not simple to determine whether a model is robust to adversarial attacks, since a model has multiple robustness properties. So far, robustness has mostly been measured as accuracy under these attacks. A recent study showed that this measurement methodology is not sufficient to capture all robustness properties. In this paper, we present a simple new metric that captures sparsity, a robustness property against sparse L-infinity attacks. Through several experiments, we show that diverse metrics are necessary to evaluate a model's robustness and that our new metric works well.
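As one illustration of the idea in the abstract, the sparsity of an adversarial perturbation can be quantified as the fraction of input components the attack leaves (near-)unchanged. This is a minimal sketch of such a measure, not necessarily the exact metric proposed in the paper; the function name and tolerance are assumptions for illustration:

```python
import numpy as np

def perturbation_sparsity(x, x_adv, tol=1e-8):
    """Fraction of input components left (near-)unchanged by the attack.

    A plain L-infinity attack such as PGD may perturb every pixel a
    little, while a sparse attack concentrates its budget on a few
    components. A higher value here means a sparser perturbation.
    """
    delta = np.abs(np.asarray(x_adv, dtype=float) - np.asarray(x, dtype=float))
    return float(np.mean(delta <= tol))

# Toy example: a 4-pixel input where the attack changes only one pixel.
x = np.array([0.2, 0.5, 0.7, 0.9])
x_adv = np.array([0.2, 0.5, 0.7, 0.9 - 0.03])  # epsilon = 0.03 on one pixel
print(perturbation_sparsity(x, x_adv))  # 0.75: three of four pixels unchanged
```

Such a per-example score could then be averaged over a test set and reported alongside adversarial accuracy, which is the kind of complementary evaluation the abstract argues for.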

Table of Contents

Abstract
I. INTRODUCTION
II. RELATED WORK
A. Adversarial Attacks
B. White Box Attack and Black Box Attack
C. Projected Gradient Descent Attack
D. Adversarial Accuracy
E. Sparse attack
III. METHODOLOGY
A. Generating L-infinity attack
B. Sparse attack in L-infinity
C. Pipeline
D. Sparsity check method
IV. DATASET DESCRIPTION
V. EXPERIMENTS
A. Comparison with Adversarial Accuracy
B. Sparsity with Various Epsilon
VI. CONCLUSIONS & FUTURE WORKS
REFERENCES

Author Information

  • JuHoon Park, Dept. of Computer Science and Engineering, Sogang University
  • DongHee Han, Dept. of Electronic Engineering, Sogang University
  • UnSang Park, Dept. of Computer Science and Engineering, Sogang University


