Abstract
The enactment of Republic Act 11106 establishes Filipino Sign Language (FSL) as the primary mode of communication within the deaf-mute community in the Philippines. However, this legal recognition has highlighted a significant communication gap between the deaf-mute and non-deaf-mute populations, as the latter typically do not understand FSL. This study introduces a mobile application designed to bridge this gap by translating FSL gestures into textual sentences. The application leverages a CNN-BiLSTM deep learning architecture integrated with Mistral 7B, a state-of-the-art Large Language Model (LLM), to recognize continuous multi-sign gestures and translate them into coherent text. To evaluate the system's effectiveness, two gesture recognition models were compared based on Word Error Rate (WER), calculated using the Levenshtein distance to measure word-level discrepancies. The 1080p30 model, with a stride of 5 and a window size of 30 frames, achieved a WER of 27.02%, while the 720p60 model, with a stride of 5 and a window size of 60 frames, achieved a WER of 43.37%. The superior performance of the 1080p30 model is attributed to its higher spatial resolution. This research addresses the critical need for accessible communication tools, offering a solution that enhances inclusivity for the Filipino deaf community.
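The abstract reports WER at the word level, derived from the Levenshtein distance between the recognized sentence and the reference sentence. A minimal Python sketch of that computation follows; the function name and the example sentences are hypothetical and are not taken from the paper.

# Illustrative sketch (not from the paper's codebase): word-level WER
# computed as the Levenshtein edit distance between the reference and
# the hypothesis, normalized by the reference length.
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count."""
    ref_words = reference.split()
    hyp_words = hypothesis.split()

    # Dynamic-programming table for edit distance over words.
    rows, cols = len(ref_words) + 1, len(hyp_words) + 1
    dist = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        dist[i][0] = i          # i deletions
    for j in range(cols):
        dist[0][j] = j          # j insertions
    for i in range(1, rows):
        for j in range(1, cols):
            cost = 0 if ref_words[i - 1] == hyp_words[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,         # deletion
                             dist[i][j - 1] + 1,         # insertion
                             dist[i - 1][j - 1] + cost)  # substitution
    return dist[-1][-1] / max(len(ref_words), 1)

# Hypothetical example: one substitution against a four-word reference -> 25% WER.
print(word_error_rate("i want to eat", "i want to sleep"))  # 0.25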
Table of Contents
I. INTRODUCTION
II. SOLUTION
III. RESULTS
A. Static Sign Language Recognition Performance
B. Continuous Sign Language Recognition Performance
IV. SUMMARY
REFERENCES
