Abstract
Effective emotional support conversations demand nuanced, multi-turn interactions that adaptively employ context-sensitive strategies—an area where large language models (LLMs) often fall short despite their strong general capabilities. To address this gap, we propose a multi-task learning framework that jointly fine-tunes a lightweight DialoGPT model to generate supportive responses and predict the support strategy stage. Using uncertainty-based loss weighting, our method dynamically adjusts multi-task learning objectives based on task-specific uncertainty, enabling balanced optimization between generation and classification tasks. Experiments on the psychologically grounded ESConv dataset show significant improvements, achieving an accuracy of 86.4% and a weighted F1 score of 0.86 in the next-stage strategy prediction task, with particularly strong performance in early dialogue phases such as Exploration. Our study demonstrates that compact LLMs, when guided by task-specific supervision, can effectively deliver strategy-aware emotional support, advancing scalable and reliable mental health conversational agents.
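The abstract does not spell out the exact form of the uncertainty-based loss weighting, but a common formulation (Kendall et al.-style homoscedastic uncertainty weighting, assumed here for illustration) scales each task loss by a learnable log-variance term. A minimal sketch of how the generation and strategy-classification losses might be combined:

```python
import math

def uncertainty_weighted_loss(task_losses, log_vars):
    """Combine per-task losses with learnable uncertainty weights.

    Each task loss L_i is scaled by exp(-s_i) and regularized by s_i,
    where s_i = log(sigma_i^2) is a learnable log-variance for task i.
    Tasks the model is uncertain about (large s_i) get down-weighted,
    balancing optimization across generation and classification.
    This is an illustrative sketch, not the paper's exact objective.
    """
    return sum(math.exp(-s) * loss + s
               for loss, s in zip(task_losses, log_vars))

# Hypothetical per-step losses: generation = 2.0, classification = 0.5.
# With both log-variances at 0, the weights are 1 and the combined
# loss reduces to the plain sum, 2.5.
total = uncertainty_weighted_loss([2.0, 0.5], [0.0, 0.0])
```

In practice the `log_vars` would be trainable parameters updated jointly with the model, so the balance between the two objectives adapts during fine-tuning rather than being fixed by hand.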
Contents
1. INTRODUCTION
2. RELATED WORK
2.1 Emotional Support Conversation
2.2 Conversational Agent for Mental Health Support
2.3 Strategy Planning in Multi-turn Conversations
3. METHODS
3.1 Dataset
3.2 Model Variants
3.3 Experiments
4. RESULTS
5. DISCUSSION
6. CONCLUSION
Acknowledgement
References
