Article Information
Abstract
English
As sensor and image processing technologies make it easy to collect information on users’ behavior, many researchers have examined automatic emotion recognition based on facial expressions, body expressions, and tone of voice, among other signals. In the multimodal case that combines facial and body expressions, most studies have relied on ordinary cameras; because such cameras generally produce only two-dimensional images, these studies could exploit only a limited amount of information. In the present research, we propose an artificial neural network-based model that uses a high-definition webcam and a Kinect to recognize users’ emotions from their facial and bodily expressions while they watch a movie trailer. We validate the proposed model in a naturally occurring field environment rather than in an artificially controlled laboratory setting. The results of this research will support the wider use of emotion recognition models in advertisements, exhibitions, and interactive shows.
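The abstract describes an ANN that fuses facial-expression features captured by an HD webcam with body-expression features captured by a Kinect. The sketch below is a minimal illustration of such feature-level fusion with a small feed-forward network; the feature dimensions, emotion label set, and synthetic data are assumptions for illustration only, not the authors’ actual pipeline.

```python
# Minimal sketch of feature-level fusion for emotion recognition.
# Assumptions (not from the paper): 68 facial landmarks (x, y) from a webcam
# and 25 Kinect skeleton joints (x, y, z), concatenated into one feature
# vector and fed to a small feed-forward ANN.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

N_FACE_FEATURES = 68 * 2   # hypothetical facial-landmark coordinates
N_BODY_FEATURES = 25 * 3   # hypothetical Kinect joint coordinates
EMOTIONS = ["happy", "sad", "surprised", "neutral"]  # placeholder label set

# Synthetic stand-in data; in practice these would come from the webcam
# and the Kinect sensor streams recorded while users watch a movie trailer.
rng = np.random.default_rng(0)
n_samples = 200
face = rng.normal(size=(n_samples, N_FACE_FEATURES))
body = rng.normal(size=(n_samples, N_BODY_FEATURES))
X = np.hstack([face, body])          # early (feature-level) fusion
y = rng.choice(EMOTIONS, size=n_samples)

# A small fully connected network acting as the emotion classifier.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0),
)
model.fit(X, y)
print(model.predict(X[:5]))          # predicted emotion labels
```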
Table of Contents
Ⅰ. Introduction
Ⅱ. Related Work
2.1. Theories of Emotion
2.2. Emotion Recognition
Ⅲ. An Emotion Recognition Model
3.1. Step 1: Data Collection
3.2. Step 2: Data Preprocessing
3.3. Step 3: ANN Modeling for Emotion Recognition
3.4. Step 4: Validation of the Model
Ⅳ. Empirical Analysis
4.1. Data Set
4.2. Experimental Design
4.3. Experiment Result
Ⅴ. Conclusion