<Special Issue: Go in the Age of Artificial Intelligence>

Explaining Go : Challenges in Achieving Explainability in AI Go Programs

Abstract

English

There has been a push in recent years to provide better explanations for how AIs make their decisions. Most of this push has come from the ethical concerns that go hand in hand with AIs making decisions that affect humans. Outside of the strictly ethical concerns that have prompted the study of explainable AIs (XAIs), there has been research interest in the mere possibility of creating XAIs in various domains. In general, the more accurate we make our models, the harder they are to explain. Go-playing AIs like AlphaGo and KataGo provide fantastic examples of this phenomenon. In this paper, I discuss a non-exhaustive list of the leading theories of explanation and what each of these theories would say about the explainability of AI-played moves of Go. Finally, I consider the possibility of ever explaining AI-played Go moves in a way that meets the four principles of XAI. I conclude, somewhat pessimistically, that Go is not as imminently explainable as other domains. As such, the probability of having an XAI for Go that meets the four principles is low.

Table of Contents

Abstract
I. Introduction
II. Explaining AI
III. Theories of Explanation
IV. DN, IS, and AlphaGo
V. The Statistical Relevance of Go Moves
VI. Reading the Future
VII. Useful Explanations
References

Author Information

  • Zack Garrett, Excelsior Classical Academy, USA
