Article Information
Abstract
English
Among individuals who have difficulty phonating due to laryngectomy or voice disorders, the need for silent-speech-based communication technologies is steadily increasing. Recent studies reconstruct acoustic speech from silent speech by extracting audio features from electromyography (EMG) signals with a transduction model, aligning these features with those from phonated speech, and decoding the aligned representations. Speech generated by this approach typically contains substantial noise and exhibits weak articulation and indistinct phonation. In addition, because speaker-specific voice information is modeled as a whole rather than disentangled, personalized adaptation is difficult. To improve the naturalness and articulation of synthesized speech, we adopt Diff-HierVC, a diffusion-based hierarchical voice conversion architecture, and modify the original design, which predicted targets using only phonated speech, so that target acoustic representations are predicted from EMG signals. We train the model with three disentangled features: content (w2v), mel-spectrogram, and pitch (f0), enabling voice conversion for silent speech. We also compare it with a baseline model that does not use Diff-HierVC in a listening test. The results show that the proposed model significantly improves perceived speech naturalness over the baseline.
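The sketch below illustrates the general idea of predicting the three disentangled targets (content, mel-spectrogram, f0) from EMG input, as described in the abstract. It is a minimal, hypothetical PyTorch example: the class name EMGFeaturePredictor, the channel counts, and the layer configuration are illustrative assumptions, not the authors' actual implementation or the Diff-HierVC codebase.

```python
import torch
import torch.nn as nn

class EMGFeaturePredictor(nn.Module):
    """Illustrative sketch: predict three disentangled targets
    (content embedding, mel-spectrogram, f0) from multi-channel EMG.
    Layer sizes and names are hypothetical, not the paper's configuration."""

    def __init__(self, emg_channels=8, hidden=256,
                 content_dim=1024, n_mels=80):
        super().__init__()
        # Shared convolutional front-end over the EMG time axis.
        self.frontend = nn.Sequential(
            nn.Conv1d(emg_channels, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # Separate heads, one per disentangled acoustic target.
        self.content_head = nn.Conv1d(hidden, content_dim, kernel_size=1)  # w2v-like content
        self.mel_head = nn.Conv1d(hidden, n_mels, kernel_size=1)           # mel-spectrogram
        self.f0_head = nn.Conv1d(hidden, 1, kernel_size=1)                 # pitch contour

    def forward(self, emg):
        # emg: (batch, emg_channels, time)
        h = self.frontend(emg)
        return {
            "content": self.content_head(h),   # (batch, content_dim, time)
            "mel": self.mel_head(h),           # (batch, n_mels, time)
            "f0": self.f0_head(h).squeeze(1),  # (batch, time)
        }

if __name__ == "__main__":
    model = EMGFeaturePredictor()
    emg = torch.randn(2, 8, 400)  # dummy silent-speech EMG segment
    out = model(emg)
    print({k: tuple(v.shape) for k, v in out.items()})
```

In such a setup, the predicted features would replace the phonated-speech-derived inputs of the voice conversion model, which is the modification the abstract describes at a high level.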
Table of Contents
I. INTRODUCTION
II. ARCHITECTURE OF THE PROPOSED METHOD
A. Pitch/Mel-spectrogram feature
B. Content feature
C. Convolution Block
D. CBAM (Convolutional Block Attention Module)
E. Transformer Encoder
III. RESULTS
IV. CONCLUSION
ACKNOWLEDGMENT
REFERENCES
