Abstract
This paper focuses on improving accuracy in constrained computing settings by employing the ReLU (Rectified Linear Unit) activation function. The study modifies parameters of the ReLU function and compares performance in terms of accuracy and computation time. Specifically, it optimizes ReLU in the context of a Multilayer Perceptron (MLP) by determining ideal values for features such as the dimensions of the linear layers and the learning rate (lr). To find the best performance, the experiments adjust the linear-layer dimensions and lr values. The experimental results show that using ReLU alone yielded the highest accuracy of 96.7% with layer dimensions of 30-10 and an lr of 1. When combining ReLU with the Adam optimizer, the optimal model configuration had layer dimensions of 60-40-10 and an lr of 0.001, which resulted in the highest accuracy of 97.07%.
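The two configurations described above can be sketched as a plain MLP forward pass. This is a minimal illustration, not the authors' implementation: it assumes Fashion-MNIST inputs are flattened to 28×28 = 784 features, and that "dimension sizes" such as 60-40-10 denote the output widths of successive linear layers, with ReLU between layers and no activation after the final layer.

```python
import numpy as np

def relu(x):
    # ReLU activation: element-wise max(0, x)
    return np.maximum(0.0, x)

def mlp_forward(x, weights, biases):
    # Linear layers with ReLU between them; the last layer outputs raw
    # class scores (logits), which would feed a loss such as cross-entropy.
    for w, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ w + b)
    return x @ weights[-1] + biases[-1]

rng = np.random.default_rng(0)
# Assumed reading of the abstract's 60-40-10 (ReLU + Adam) configuration,
# with a hypothetical 784-dimensional Fashion-MNIST input.
dims = [784, 60, 40, 10]
weights = [rng.standard_normal((i, o)) * np.sqrt(2.0 / i)  # He-style init
           for i, o in zip(dims[:-1], dims[1:])]
biases = [np.zeros(o) for o in dims[1:]]

batch = rng.standard_normal((32, 784))  # dummy mini-batch of 32 images
logits = mlp_forward(batch, weights, biases)
print(logits.shape)  # (32, 10): one score per Fashion-MNIST class
```

Training such a model with plain SGD at lr = 1 versus Adam at lr = 0.001 (the two settings compared in the abstract) would change only the weight-update rule, not this forward pass.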
Table of Contents
1. Introduction
2. Training Model System
2.1 MLP
2.2 Input Layer, Hidden Layers, Output Layer
2.3 Activation Functions
2.4 Loss Functions
3. Experiments and Results
3.1 Fashion MNIST Dataset
3.2 ReLU
3.3 Adam
3.4 Tuning ReLU Parameters for Performance Improvement
3.5 Tuning Parameters for the 3-Layer ReLU + Adam Combined Model
4. Conclusion and Future Work
References