Multimodal Parametric Fusion for Emotion Recognition

Abstract

The main objective of this study is to investigate the impact of additional modalities on the performance of emotion recognition using speech, facial expressions and physiological measurements. To compare different approaches, we designed a feature-based recognition system as a benchmark, which performs linear supervised classification followed by leave-one-out cross-validation. For the classification of four emotions, bimodal fusion in our experiment improved on the recognition accuracy of the unimodal approaches, while the performance of trimodal fusion varied strongly from individual to individual. Furthermore, we observed an extremely high disparity between single-class recognition rates, and no single modality performed best across our experiment. Based on these observations, we developed a novel fusion method, called parametric decision fusion (PDF), which builds emotion-specific classifiers and exploits the advantages of a parametrized decision process. Using the PDF scheme, we achieved a 16% improvement in accuracy for subject-dependent recognition and 10% for subject-independent recognition compared to the best unimodal results.
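As a rough illustration of the benchmark setup described in the abstract, the sketch below concatenates per-modality feature vectors (feature-level fusion), trains a linear classifier, and evaluates it with leave-one-out cross-validation. The random feature arrays, the use of scikit-learn, and the choice of linear discriminant analysis are assumptions made purely for illustration; this is not the authors' implementation.

```python
# Minimal sketch (illustrative only): feature-level fusion of three modalities,
# linear supervised classification, and leave-one-out cross-validation.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_samples = 40  # e.g. 10 trials for each of the four emotion classes

# Hypothetical per-modality feature matrices (speech, facial expression, physiology)
X_speech = rng.normal(size=(n_samples, 20))
X_face = rng.normal(size=(n_samples, 15))
X_physio = rng.normal(size=(n_samples, 10))
y = np.repeat(np.arange(4), n_samples // 4)  # four emotion labels

# Trimodal feature-level fusion: concatenate the per-modality feature vectors
X_fused = np.hstack([X_speech, X_face, X_physio])

# Linear supervised classification evaluated with leave-one-out cross-validation
clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X_fused, y, cv=LeaveOneOut())
print(f"Leave-one-out accuracy: {scores.mean():.2%}")
```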

Table of Contents

Abstract
1. Introduction
2. Related Work
3. Trimodal Dataset
3.1 Experimental Setting
3.2 Collected Sensor Data
4. General Methodology and Result
4.1 Multimodal Feature Calculation
4.2 Classification
4.3 Recognition Results
5. Parametric Decision Fusion
5.1 Building Dichotomous Classifiers
5.2 Cascaded Specialists Algorithm (CSA)
5.3 Making Decision
5.4 Results
6. Conclusion
References

Author Information

  • Jonghwa Kim, Professor, Dept. of Intelligent System Engineering, Cheju Halla University, Jeju Island, Korea
