Publication Information
CLIP-based EEG Encoding for Brainwave-driven Image Generation: A Preliminary Study
Abstract
Recent advances in generative modeling have sparked growing interest in image generation from electroencephalography (EEG) signals. A critical yet technically challenging component of this task lies in effectively encoding EEG signals to capture semantic information corresponding to visual stimuli. In this preliminary study, we investigate the feasibility of employing CLIP (Contrastive Language–Image Pre-training), a state-of-the-art pretrained multimodal contrastive learning model, to semantically align EEG representations with image and caption feature vectors. Our analysis explores the potential of CLIP-based EEG encoding as a foundation for brain-to-image generation systems.
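The abstract describes training an EEG encoder so that its representations align semantically with CLIP image and caption embeddings. The following is a minimal sketch of that idea, not the authors' implementation: a hypothetical convolutional EEG encoder projects EEG epochs into CLIP's embedding space, and a symmetric contrastive (InfoNCE) loss pulls each EEG embedding toward its paired, precomputed CLIP embedding. The encoder architecture, channel and sample counts, temperature, and the 512-dimensional embedding size (matching CLIP ViT-B/32) are illustrative assumptions.

```python
# Hedged sketch of CLIP-based EEG alignment; all architectural details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EEGEncoder(nn.Module):
    """Projects raw EEG epochs (batch, channels, samples) into CLIP's embedding space."""

    def __init__(self, n_channels: int = 128, n_samples: int = 440, embed_dim: int = 512):
        super().__init__()
        self.temporal = nn.Sequential(
            nn.Conv1d(n_channels, 256, kernel_size=7, stride=2, padding=3),
            nn.GELU(),
            nn.Conv1d(256, 256, kernel_size=7, stride=2, padding=3),
            nn.GELU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.proj = nn.Linear(256, embed_dim)

    def forward(self, eeg: torch.Tensor) -> torch.Tensor:
        feat = self.temporal(eeg).squeeze(-1)        # (batch, 256)
        return F.normalize(self.proj(feat), dim=-1)  # unit-norm, like CLIP embeddings


def clip_style_alignment_loss(eeg_emb: torch.Tensor,
                              clip_emb: torch.Tensor,
                              temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss: matched (EEG, image/caption) pairs lie on the diagonal."""
    logits = eeg_emb @ clip_emb.t() / temperature    # (batch, batch) similarity matrix
    targets = torch.arange(eeg_emb.size(0), device=eeg_emb.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    # Dummy batch: 8 EEG epochs and 8 precomputed, L2-normalized CLIP image embeddings.
    eeg = torch.randn(8, 128, 440)
    clip_image_emb = F.normalize(torch.randn(8, 512), dim=-1)

    encoder = EEGEncoder()
    loss = clip_style_alignment_loss(encoder(eeg), clip_image_emb)
    loss.backward()
    print(f"alignment loss: {loss.item():.4f}")
```

In this setup the CLIP image and text encoders would stay frozen and serve only to provide target embeddings, so the EEG embeddings land in a space a downstream image generator conditioned on CLIP features could consume.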
Table of Contents
1. Introduction
2. Methods
2.1 Model for Extracting Features from EEG
2.2 Semantic Alignment of EEG and Images Using CLIP
3. Experimental Methods
3.1 Dataset
3.2 Experimental Setup
4. Experimental Results
5. Conclusion
Acknowledgement
References
