earticle


Technology Convergence (TC)

Interplay of Lexical and Vocal Cues in Emotional Voice Perception

Abstract


This study examined listeners’ ability to detect emotion from various speech samples, including both spontaneous conversations and actor-posed speech. We investigated the interaction between lexical content and vocal cues (i.e., tone of voice) and how these factors influence emotion perception from voice. Our approach involved two experimental conditions. In the text condition, participants assessed emotional attributes solely from written transcripts, devoid of vocal information. In the voice condition, participants evaluated emotions by listening to audio recordings. Results indicated that vocal cues generally enhanced emotional expressions, overriding lexical meaning for certain negative emotions in spontaneous speech. Correlation analysis revealed that lexical meanings suggesting anger or hostility could be perceived as positive affective states in some spontaneous speech when vocal cues were present. In perceiving sadness or anger in posed speech, much higher correct response rates were obtained in the voice condition than in the text condition, indicating successful use of vocal cues alongside lexical meaning. However, when identifying happiness in posed speech, listeners showed little improvement from vocal cues. The correlation between acoustic parameters and emotional ratings was analyzed to understand how listeners utilized vocal cues. The vocal cues with the highest correlations were pitch and the harmonics-to-noise ratio. Listeners perceived voices as angry or happy as pitch increased, but distinguished happiness from anger when the voice contained more noise relative to harmonic energy. Voices were perceived as sad or timid as pitch and pitch variation decreased. Our research highlights the complex interaction between lexical content and vocal cues in emotional communication.
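The two acoustic parameters singled out above, pitch (fundamental frequency) and the harmonics-to-noise ratio (HNR), can both be estimated from the normalized autocorrelation of a voiced frame. The abstract does not state which tool or method the authors used (Praat is a common choice for such analyses), so the sketch below is only a minimal, assumption-laden illustration: a single-frame autocorrelation estimator applied to a synthetic tone, where the function name `estimate_pitch_and_hnr` and all parameter defaults are hypothetical.

```python
import numpy as np

def estimate_pitch_and_hnr(signal, sr, fmin=75.0, fmax=500.0):
    """Estimate fundamental frequency (Hz) and a harmonics-to-noise
    ratio (dB) for one voiced frame via normalized autocorrelation."""
    x = signal - np.mean(signal)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac = ac / ac[0]                      # normalize so lag 0 equals 1
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + int(np.argmax(ac[lo:hi])) # best candidate period in samples
    r = ac[lag]                          # periodic (harmonic) energy fraction
    f0 = sr / lag
    hnr = 10.0 * np.log10(r / (1.0 - r)) # harmonic-to-noise power ratio in dB
    return f0, hnr

# Synthetic 200 Hz tone, 100 ms at 16 kHz, with mild additive noise
sr = 16000
t = np.arange(int(0.1 * sr)) / sr
clean = np.sin(2 * np.pi * 200.0 * t)
noisy = clean + 0.1 * np.random.default_rng(0).normal(size=t.size)

f0, hnr = estimate_pitch_and_hnr(noisy, sr)
```

On this construction, the estimator should recover a pitch near 200 Hz and a positive HNR; a noisier voice drives `r` down and hence the HNR down, which mirrors the noise-versus-harmonicity distinction the abstract reports between happy and angry voices.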

Table of Contents

Abstract
1. INTRODUCTION
2. EXPERIMENTS
2.1 Stimuli
2.2 Participants
2.3 Procedure
2.4 Acoustic Parameters and Non/Para-linguistic Information
3. RESULTS
3.1 Evaluation of Emotional Attributes
3.2 Relationship between Perceived Emotions in Text and Voice Conditions
3.3 Correct Recognition of Intended Emotions from Posed Speech
3.4 Effects of Acoustic Parameters on Emotion Perception
4. CONCLUSIONS & DISCUSSIONS
ACKNOWLEDGEMENT
REFERENCES

Author Information

  • Eunmi Oh Dept. of Psychology, Yonsei Univ., Korea
  • Jinsun Suhr Dept. of Psychology, Yonsei Univ., Korea
  • Juhyun Jay Lee Dept. of Psychology, Yonsei Univ., Korea

