Human-Machine Interaction Technology (HIT)

Optimization of Model based on Relu Activation Function in MLP Neural Network Model

Abstract


This paper focuses on improving accuracy in constrained computing settings by employing the ReLU (Rectified Linear Unit) activation function. The research involves modifying parameters of the ReLU function and comparing performance in terms of accuracy and computational time. In particular, the paper optimizes ReLU in the context of a Multilayer Perceptron (MLP) by determining ideal values for features such as the dimensions of the linear layers and the learning rate (lr). To optimize performance, experiments adjust the sizes of the linear layers and the lr value to obtain the best outcomes. The experimental results show that using ReLU alone yielded the highest accuracy of 96.7% when the layer dimensions were 30 - 10 and the lr value was 1. When combining ReLU with the Adam optimizer, the optimal model configuration had layer dimensions of 60 - 40 - 10 and an lr value of 0.001, which resulted in the highest accuracy of 97.07%.
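
The two best-performing configurations described above can be made concrete with a short sketch. This is not the authors' code; it assumes PyTorch, Fashion-MNIST images flattened to 784 features, plain SGD for the ReLU-only model (the abstract does not name its optimizer), and cross-entropy loss.

```python
import torch
import torch.nn as nn

# ReLU-only MLP: linear layer dimensions 30 - 10, lr = 1
# (SGD is an assumption; the abstract only specifies the lr value).
relu_only = nn.Sequential(
    nn.Flatten(),          # 28x28 Fashion-MNIST image -> 784 features
    nn.Linear(784, 30),
    nn.ReLU(),
    nn.Linear(30, 10),     # 10 Fashion-MNIST classes
)
sgd = torch.optim.SGD(relu_only.parameters(), lr=1.0)

# ReLU + Adam MLP: linear layer dimensions 60 - 40 - 10, lr = 0.001.
relu_adam = nn.Sequential(
    nn.Flatten(),
    nn.Linear(784, 60),
    nn.ReLU(),
    nn.Linear(60, 40),
    nn.ReLU(),
    nn.Linear(40, 10),
)
adam = torch.optim.Adam(relu_adam.parameters(), lr=0.001)

# Loss function; cross-entropy is an assumption consistent with Section 2.4.
criterion = nn.CrossEntropyLoss()
```

A standard training loop (forward pass, loss, backward pass, optimizer step) over the Fashion-MNIST training split would then yield the accuracy comparison reported above.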

Table of Contents

Abstract
1. Introduction
2. Training Model System
2.1 MLP
2.2 Input Layer, Hidden Layers, Output Layer
2.3 Activation Functions
2.4 Loss Functions
3. Paper title and author information
3.1 Fashion MNIST Dataset
3.2 ReLU
3.3 Adam
3.4 Tuning ReLU Parameters for Performance Improvement
3.5 Tuning Parameters for the 3-Layer ReLU + Adam Combined Model
4. Conclusion and Future Work
References

Author Information

  • Ye Rim Youn, Student, Division of Computer Engineering, Baekseok University, South Korea
  • Jinkeun Hong, Professor, Division of Advanced IT, Baekseok University, South Korea
