Abstract
Since the emergence of ChatGPT, transformer-based language models have become highly popular. This study applies a transformer-based approach to measuring Korean sentence similarity for software testing, proposing test cases built on metamorphic relations and comparing the transformer models' performance using similarity measures. First, we create a test set by transforming sentences from the Defense Daily according to specific rules. We then feed the original and transformed sentences into the RoBERTa, Electra, and T5 models and check whether the similarity score between each original sentence and its variant satisfies the corresponding metamorphic relation. Finally, the performance of the models is compared using the similarity measure. In our experiments, the RoBERTa model satisfied the metamorphic relations MR5 and MR6, which replace nouns and verbs with synonyms, and MR7, which alters sentence order, with accuracy rates of 80%, 85%, and 88%, respectively. The Electra model passed all tests except MR7 (77%), and the T5 model passed all except MR1 (73%). In the overall comparison, the Electra model achieved an accuracy of 99.97%, outperforming the T5 model (99.96%) and the RoBERTa model (99.62%).
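The core check the abstract describes — computing the similarity between an original sentence and its rule-transformed variant, then testing whether that score satisfies a metamorphic relation — can be sketched as follows. This is a minimal illustration only: the 0.8 threshold, the bag-of-characters `toy_embed` function, and the example sentences are assumptions standing in for the paper's actual setup, which uses RoBERTa, Electra, and T5 encoders on Korean sentences from the Defense Daily.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def satisfies_mr(embed, original, variant, threshold=0.8):
    """Metamorphic-relation check: a meaning-preserving transformation
    (e.g. synonym replacement, reordering) should leave the variant's
    embedding close to the original's. The threshold is illustrative."""
    return cosine_similarity(embed(original), embed(variant)) >= threshold

def toy_embed(sentence):
    """Toy bag-of-characters embedding standing in for a real
    RoBERTa/Electra/T5 sentence encoder."""
    vec = np.zeros(256)
    for ch in sentence:
        vec[ord(ch) % 256] += 1.0
    return vec

original = "The system passed the test."
variant = "The system passed the exam."  # noun replaced by a synonym (MR5-style)
print(satisfies_mr(toy_embed, original, variant))
```

A per-relation pass rate over such checks (fraction of sentence pairs satisfying the MR) would correspond to the accuracy figures reported above, e.g. 80% for MR5.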
Table of Contents
1. INTRODUCTION
2. RELATED RESEARCH
2.1 SW Testing of AI-based System
2.2 Transformer Language Model
2.3 Existing Korean Natural Language Processing Research
3. TEST CASE GENERATION
4. EXPERIMENT AND ANALYSIS
4.1 Experiment
4.2 Result Analysis / Model Performance Comparison
5. CONCLUSION
REFERENCES
