Emotion Recognition from Speech Signals using Fractal Features

Abstract

In early research, basic acoustic features were the primary choice for emotion recognition from speech. Most feature vectors were composed of simple extracted pitch-related, intensity-related, and duration-related attributes, such as maximum, minimum, median, range, and variability values. However, researchers are still debating which features influence the recognition of emotion in speech. In this paper, we propose a new method to recognize emotion from speech signals using fractal dimension features. Fractal features capture the non-linearity and self-similarity of a speech signal. For classification and recognition, we use the Support Vector Machine technique. In our experiments, a standard database, the Berlin Emotional Speech Database, is used as input to measure the effectiveness of our method. Using these features, our approach achieved a recognition rate of approximately 77%.
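
The abstract outlines the approach only at a high level: estimate a fractal dimension from the speech signal and classify the resulting features with a Support Vector Machine. The following is a minimal sketch of that pipeline, assuming Higuchi's fractal dimension estimator, frame-level statistics (mean, standard deviation, minimum, maximum) as the per-utterance feature vector, and an RBF-kernel SVM on placeholder data; the abstract does not specify the estimator, framing parameters, or SVM configuration, so these choices are illustrative assumptions rather than the authors' exact method.

    # Minimal sketch (not the authors' exact method): Higuchi fractal dimension
    # per frame, summarized into a fixed-length vector, classified with an SVM.
    import numpy as np
    from sklearn.svm import SVC

    def higuchi_fd(x, k_max=10):
        """Estimate the fractal dimension of a 1-D signal with Higuchi's method."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        mean_lengths = []
        for k in range(1, k_max + 1):
            lengths_m = []
            for m in range(k):
                idx = np.arange(m, n, k)      # sub-sampled curve starting at offset m
                if len(idx) < 2:
                    continue
                dist = np.abs(np.diff(x[idx])).sum()
                # Higuchi normalization of the curve length for this (m, k)
                lengths_m.append(dist * (n - 1) / ((len(idx) - 1) * k * k))
            mean_lengths.append(np.mean(lengths_m))
        # Fractal dimension = slope of log L(k) versus log(1/k)
        slope, _ = np.polyfit(np.log(1.0 / np.arange(1, k_max + 1)),
                              np.log(mean_lengths), 1)
        return slope

    def fractal_feature_vector(signal, frame_len=400, hop=200, k_max=10):
        """Frame-level fractal dimensions summarized as (mean, std, min, max)."""
        fds = [higuchi_fd(signal[s:s + frame_len], k_max)
               for s in range(0, len(signal) - frame_len + 1, hop)]
        fds = np.array(fds)
        return np.array([fds.mean(), fds.std(), fds.min(), fds.max()])

    if __name__ == "__main__":
        # Placeholder utterances and labels; real experiments would use waveforms
        # and emotion labels from the Berlin Emotional Speech Database.
        rng = np.random.default_rng(0)
        utterances = [rng.standard_normal(4000) for _ in range(20)]
        labels = rng.integers(0, 2, size=20)    # two dummy emotion classes
        X = np.vstack([fractal_feature_vector(u) for u in utterances])
        clf = SVC(kernel="rbf", C=1.0).fit(X, labels)
        print("training accuracy:", clf.score(X, labels))

Summarizing the frame-level fractal dimensions with simple statistics mirrors the statistic-based feature vectors (maximum, minimum, median, range, variability) that the abstract attributes to earlier acoustic-feature work.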

Table of Contents

Abstract
 1. Introduction
 2. Fractal Features for Emotion Recognition from Speech
 3. Fractal Features Extraction
 4. Emotion Recognition from Speech using Fractal Dimension Features
 5. Conclusion
 Acknowledgements
 References

Author Information

  • Jun-Seok Park, Dept. of Computer Information & Communication Engineering, Sangmyung University, Korea
  • Soo-Hong Kim, Dept. of Computer Software Engineering, Sangmyung University, Korea
